1
Sambyal D, Sarwar A. Recent developments in cervical cancer diagnosis using deep learning on whole slide images: An Overview of models, techniques, challenges and future directions. Micron 2023;173:103520. [PMID: 37556898; DOI: 10.1016/j.micron.2023.103520]
Abstract
Integration of whole slide imaging (WSI) and deep learning technology has led to significant improvements in the screening and diagnosis of cervical cancer. WSI enables the examination of all cells on a slide simultaneously and deep learning algorithms can accurately label them as cancerous or non-cancerous. Although many studies have investigated the application of deep learning for diagnosing various diseases, there is a lack of research focusing on the evolution, limitations, and gaps of intelligent algorithms in conjunction with WSI for cervical cancer. This paper provides a comprehensive overview of the state-of-the-art deep learning algorithms used for the timely and precise analysis of cervical WSI images. A total of 115 relevant papers were reviewed, and 37 were selected after screening with specific inclusion and exclusion criteria. Methodological aspects including deep learning techniques, data sources, architectures, and classification techniques employed by the selected studies were analyzed. The review presents the most popular techniques and current trends in deep learning-based cervical classification systems, and categorizes the evolution of the domain based on deep learning techniques, citing an in-depth analysis of various models developed over time. The paper advocates for the implementation of transfer supervised learning when utilizing deep learning models such as ResNet, VGG19, and EfficientNet, and builds a solid foundation for applying relevant techniques in different fields. Although some progress has been made in developing novel models for the diagnosis of cervical cancer, substantial work remains to be done in creating standardized benchmark databases of WSI images for the research community. This paper serves as a comprehensive guide for understanding the fundamental concepts, benefits, and challenges related to various deep learning models on WSI, including their application for cervical system classification. Additionally, it provides valuable insights into future research directions in this area.
Affiliation(s)
- Abid Sarwar
- Department of CS&IT, University of Jammu, India.
2
Li K, Qian Z, Han Y, Chang EIC, Wei B, Lai M, Liao J, Fan Y, Xu Y. Weakly supervised histopathology image segmentation with self-attention. Med Image Anal 2023;86:102791. [PMID: 36933385; DOI: 10.1016/j.media.2023.102791]
Abstract
Accurate segmentation in histopathology images at pixel-level plays a critical role in the digital pathology workflow. The development of weakly supervised methods for histopathology image segmentation liberates pathologists from time-consuming and labor-intensive works, opening up possibilities of further automated quantitative analysis of whole-slide histopathology images. As an effective subgroup of weakly supervised methods, multiple instance learning (MIL) has achieved great success in histopathology images. In this paper, we specially treat pixels as instances so that the histopathology image segmentation task is transformed into an instance prediction task in MIL. However, the lack of relations between instances in MIL limits the further improvement of segmentation performance. Therefore, we propose a novel weakly supervised method called SA-MIL for pixel-level segmentation in histopathology images. SA-MIL introduces a self-attention mechanism into the MIL framework, which captures global correlation among all instances. In addition, we use deep supervision to make the best use of information from limited annotations in the weakly supervised method. Our approach makes up for the shortcoming that instances are independent of each other in MIL by aggregating global contextual information. We demonstrate state-of-the-art results compared to other weakly supervised methods on two histopathology image datasets. It is evident that our approach has generalization ability for the high performance on both tissue and cell histopathology datasets. There is potential in our approach for various applications in medical images.
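As a rough illustration of the idea summarised above (not the authors' SA-MIL code), the sketch below adds a self-attention layer over pre-extracted instance embeddings inside a MIL head; the encoder, feature dimension, and max-pooling bag aggregation are assumptions made for the example.

```python
# Minimal sketch of self-attention inside a MIL head: instances (pixels/patches)
# of one bag attend to each other before per-instance scores are pooled into a
# bag-level prediction.
import torch
import torch.nn as nn

class SelfAttentionMILHead(nn.Module):
    def __init__(self, in_dim=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(in_dim, n_heads, batch_first=True)
        self.score = nn.Linear(in_dim, 1)   # per-instance logit

    def forward(self, inst_feats):
        # inst_feats: (bags, instances, in_dim) embeddings from any encoder
        ctx, _ = self.attn(inst_feats, inst_feats, inst_feats)   # global instance context
        inst_logits = self.score(ctx).squeeze(-1)                 # (bags, instances)
        bag_logits = inst_logits.max(dim=1).values                # max-pooling MIL assumption
        return inst_logits, bag_logits

feats = torch.randn(2, 1024, 256)             # 2 bags of 1024 instance embeddings
inst_logits, bag_logits = SelfAttentionMILHead()(feats)
print(inst_logits.shape, bag_logits.shape)     # torch.Size([2, 1024]) torch.Size([2])
```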
Affiliation(s)
- Kailu Li
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
- Ziniu Qian
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
- Yingnan Han
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
- Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou 310027, China.
- Jing Liao
- Department of Computer Science, City University of Hong Kong, 999077, Hong Kong SAR, China.
- Yubo Fan
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China.
- Yan Xu
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China; Microsoft Research, Beijing 100080, China.
3
An efficient lightweight convolutional neural network for industrial surface defect detection. Artif Intell Rev 2023. [DOI: 10.1007/s10462-023-10438-y]
4
Ramamurthy K, Varikuti AR, Gupta B, Aswani N. A deep learning network for Gleason grading of prostate biopsies using EfficientNet. Biomed Tech (Berl) 2022;68:187-198. [PMID: 36332194; DOI: 10.1515/bmt-2022-0201]
Abstract
Objectives
The most crucial part in the diagnosis of cancer is severity grading. Gleason’s score is a widely used grading system for prostate cancer. Manual examination of the microscopic images and grading them is tiresome and consumes a lot of time. Hence to automate the Gleason grading process, a novel deep learning network is proposed in this work.
Methods
In this work, a deep learning network for Gleason grading of prostate cancer is proposed based on EfficientNet architecture. It applies a compound scaling method to balance the dimensions of the underlying network. Also, an additional attention branch is added to EfficientNet-B7 for precise feature weighting.
Result
To the best of our knowledge, this is the first work that integrates an additional attention branch with EfficientNet architecture for Gleason grading. The proposed models were trained using H&E-stained samples from prostate cancer Tissue Microarrays (TMAs) in the Harvard Dataverse dataset.
Conclusions
The proposed network outperformed the existing methods, achieving a Kappa score of 0.5775.
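The following sketch illustrates the general idea of attaching an attention branch to an EfficientNet backbone; it is not the paper's architecture. It assumes torchvision (0.13+ API) is available and uses EfficientNet-B0 instead of B7 to keep the example small.

```python
# Illustrative sketch: an EfficientNet feature extractor with an extra attention
# branch that re-weights the spatial feature map before pooling and classification.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0   # the paper uses B7; B0 keeps the sketch small

class AttentionEfficientNet(nn.Module):
    def __init__(self, n_classes=4):              # e.g. benign plus Gleason patterns (placeholder)
        super().__init__()
        self.backbone = efficientnet_b0(weights=None).features   # (B, 1280, H/32, W/32)
        self.attn = nn.Sequential(nn.Conv2d(1280, 1, kernel_size=1), nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(1280, n_classes)

    def forward(self, x):
        f = self.backbone(x)
        f = f * self.attn(f)                       # spatial attention weighting
        return self.fc(self.pool(f).flatten(1))

logits = AttentionEfficientNet()(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 4])
```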
Affiliation(s)
- Karthik Ramamurthy
- Centre for Cyber Physical Systems, School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
- Abinash Reddy Varikuti
- School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Bhavya Gupta
- School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Nehal Aswani
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
5
Role of Anterior Segment-Optical Coherence Tomography Angiography in Acute Ocular Burns. Diagnostics (Basel) 2022;12:607. [PMID: 35328160; PMCID: PMC8947509; DOI: 10.3390/diagnostics12030607]
Abstract
Acute ocular burns have varied manifestations which require prompt diagnosis and management to prevent chronic sequelae. Of these, the detection of limbal ischemia poses a challenge because of the subjective nature of its clinical signs. Anterior segment optical coherence tomography angiography (AS-OCTA) offers an objective method of assessing ischemia in these eyes. This review provides an overview of the technology of AS-OCTA and its applications in acute burns. AS-OCTA generates images by isolating the movement of erythrocytes within blood vessels from sequentially obtained b-scans. Limbal ischemia manifests in these scans as absent vasculature and the extent of ischemia can be quantified using different vessel-related parameters. Of these, the density of vessels is most commonly used and correlates with the severity of the injury. Incorporation of the degree of ischemia in the classification of acute burns has been attempted in animal studies and its extension to human trials may provide an added dimension in determining the final prognosis of these eyes. Thus, AS-OCTA is a promising device that can objectively evaluate limbal ischemia. This will facilitate the identification of patients who will benefit from revascularization therapies and stem cell transplants in acute and chronic ocular burns, respectively.
6
Cardinell JL, Ramjist JM, Chen C, Shi W, Nguyen NQ, Yeretsian T, Choi M, Chen D, Clark DS, Curtis A, Kim H, Faughnan ME, Yang VXD. Quantification metrics for telangiectasia using optical coherence tomography. Sci Rep 2022;12:1805. [PMID: 35110554; PMCID: PMC8810896; DOI: 10.1038/s41598-022-05272-1]
Abstract
Hereditary hemorrhagic telangiectasia (HHT) is an autosomal dominant disorder that causes vascular malformations throughout the body. The most prevalent and accessible of these lesions are found throughout the skin and mucosa, and often rupture causing bleeding and anemia. A recent increase in potential HHT treatments have created a demand for quantitative metrics that can objectively measure the efficacy of new and developing treatments. We employ optical coherence tomography (OCT)—a high resolution, non-invasive imaging modality in a novel pipeline to image and quantitatively characterize dermal HHT lesion behavior over time or throughout the course of treatment. This study is aimed at detecting detailed morphological changes of dermal HHT lesions to understand the underlying dynamic processes of the disease. We present refined metrics tailored for HHT, developed from a pilot study using 3 HHT patients and 6 lesions over the course of multiple imaging dates, totalling to 26 lesion images. Preliminary results from these lesions are presented in this paper alongside representative OCT images. This study provides a new objective method to analyse and understand HHT lesions using a minimally invasive, accessible, cost-effective, and efficient imaging modality with quantitative metrics describing morphology and blood flow.
Affiliation(s)
- Jillian L Cardinell
- Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada.
- Joel M Ramjist
- Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada
- Chaoliang Chen
- Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada; Department of Optical Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
- Weisong Shi
- Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada; Department of Optical Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
- Nhu Q Nguyen
- Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada
- Tiffany Yeretsian
- Physical Sciences Platform, Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, ON, Canada
- Matthew Choi
- Physical Sciences Platform, Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, ON, Canada
- David Chen
- Physical Sciences Platform, Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, ON, Canada
- Dewi S Clark
- Toronto HHT Centre, Division of Respirology, Department of Medicine, St. Michael's Hospital, University of Toronto, Toronto, ON, Canada
- Anne Curtis
- Division of Dermatology, University of Toronto, Toronto, ON, Canada
- Helen Kim
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Marie E Faughnan
- Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, ON, Canada
- Victor X D Yang
- Department of Electrical, Computer, and Biomedical Engineering, Ryerson University, Toronto, ON, Canada; Department of Optical Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China; Department of Surgery, Division of Neurosurgery, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, ON, Canada
7
Ghalati MK, Nunes A, Ferreira H, Serranho P, Bernardes R. Texture Analysis and its Applications in Biomedical Imaging: A Survey. IEEE Rev Biomed Eng 2021;15:222-246. [PMID: 34570709; DOI: 10.1109/rbme.2021.3115703]
Abstract
Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. This survey's emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, this survey's final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised.
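As a concrete example of one classic texture family covered by such surveys, the snippet below computes grey-level co-occurrence matrix (GLCM) statistics with scikit-image on a placeholder patch; older scikit-image releases spell these functions greycomatrix/greycoprops.

```python
# GLCM texture statistics on a grey-level patch with scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in grey-level image
glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # scalar texture descriptors usable by any downstream classifier
```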
8
Tang H, Mao L, Zeng S, Deng S, Ai Z. Discriminative dictionary learning algorithm with pairwise local constraints for histopathological image classification. Med Biol Eng Comput 2021;59:153-164. [PMID: 33386592; DOI: 10.1007/s11517-020-02281-y]
Abstract
Histopathological image contains rich pathological information that is valued for the aided diagnosis of many diseases such as cancer. An important issue in histopathological image classification is how to learn a high-quality discriminative dictionary due to diverse tissue pattern, a variety of texture, and different morphologies structure. In this paper, we propose a discriminative dictionary learning algorithm with pairwise local constraints (PLCDDL) for histopathological image classification. Inspired by the one-to-one mapping between dictionary atom and profile, we learn a pair of discriminative graph Laplacian matrices that are less sensitive to noise or outliers to capture the locality and discriminating information of data manifold by utilizing the local geometry information of category-specific dictionaries rather than input data. Furthermore, graph-based pairwise local constraints are designed and incorporated into the original dictionary learning model to effectively encode the locality consistency with intra-class samples and the locality inconsistency with inter-class samples. Specifically, we learn the discriminative localities for representations by jointly optimizing both the intra-class locality and inter-class locality, which can significantly improve the discriminability and robustness of dictionary. Extensive experiments on the challenging datasets verify that the proposed PLCDDL algorithm can achieve a better classification accuracy and powerful robustness compared with the state-of-the-art dictionary learning methods. Graphical abstract The proposed PLCDDL algorithm. 1) A pair of graph Laplacian matrices are first learned based on the class-specific dictionaries. 2) Graph-based pairwise local constraints are designed to transfer the locality for coding coefficients. 3) Class-specific dictionaries can be further updated.
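For orientation only, the sketch below runs plain, unconstrained dictionary learning with scikit-learn on placeholder features and classifies the resulting sparse codes; the class-specific dictionaries and graph-Laplacian pairwise local constraints that define PLCDDL are not reproduced here.

```python
# Baseline sketch: unconstrained dictionary learning + sparse-code classification.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

X = np.random.rand(300, 64)                  # stand-in patch descriptors from histology images
y = np.random.randint(0, 2, size=300)        # stand-in benign vs. malignant labels

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   transform_algorithm="lasso_lars",
                                   random_state=0).fit(X)
codes = dico.transform(X)                    # sparse codes used as the image representation
clf = LinearSVC().fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```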
Affiliation(s)
- Hongzhong Tang
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China; College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan, People's Republic of China; Key Laboratory of Intelligent Computing & Information Processing of Ministry of Education, Xiangtan University, Xiangtan, Hunan, People's Republic of China.
- Lizhen Mao
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
- Shuying Zeng
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
- Shijun Deng
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China; College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan, People's Republic of China
- Zhaoyang Ai
- Institute of Biophysics Linguistics, College of Foreign Languages, Hunan University, Changsha, Hunan, People's Republic of China
9
Ali T, Masood K, Irfan M, Draz U, Nagra AA, Asif M, Alshehri BM, Glowacz A, Tadeusiewicz R, Mahnashi MH, Yasin S. Multistage Segmentation of Prostate Cancer Tissues Using Sample Entropy Texture Analysis. Entropy (Basel) 2020;22:1370. [PMID: 33279915; PMCID: PMC7761953; DOI: 10.3390/e22121370]
Abstract
In this study, a multistage segmentation technique is proposed that identifies cancerous cells in prostate tissue samples. The benign areas of the tissue are distinguished from the cancerous regions using the texture of glands. The texture is modeled based on wavelet packet features along with sample entropy values. In a multistage segmentation process, the mean-shift algorithm is applied on the pre-processed images to perform a coarse segmentation of the tissue. Wavelet packets are employed in the second stage to obtain fine details of the structured shape of glands. Finally, the texture of the gland is modeled by the sample entropy values, which identifies epithelial regions from stroma patches. Although there are three stages of the proposed algorithm, the computation is fast as wavelet packet features and sample entropy values perform robust modeling for the required regions of interest. A comparative analysis with other state-of-the-art texture segmentation techniques is presented and dice ratios are computed for the comparison. It has been observed that our algorithm not only outperforms other techniques, but, by introducing sample entropy features, identification of cancerous regions of tissues is achieved with 90% classification accuracy, which shows the robustness of the proposed algorithm.
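A minimal sample-entropy implementation is sketched below to make the texture statistic concrete; the patch, template length m, and tolerance r are illustrative choices, and the paper additionally combines SampEn with wavelet-packet features and mean-shift segmentation.

```python
# Minimal sample-entropy (SampEn) sketch applied to one patch flattened to a 1-D signal.
import numpy as np

def sample_entropy(signal, m=2, r=None):
    """SampEn = -ln(A/B): B = matches of length m, A = matches of length m+1."""
    x = np.asarray(signal, dtype=float)
    r = 0.2 * x.std() if r is None else r
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(dists <= r) - len(templates)   # exclude self-matches
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

patch = np.random.rand(32, 32)                        # stand-in grey-level gland patch
print("SampEn:", sample_entropy(patch.ravel()[:512]))
```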
Affiliation(s)
- Tariq Ali
- Department of Computer Science, Sahiwal Campus, COMSATS University Islamabad, Sahiwal 57000, Pakistan;
- Khalid Masood
- Department of Computer Science, Lahore Garrison University, Lahore 54792, Pakistan; (K.M.); (A.A.N.); (M.A.)
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Correspondence: (M.I.); (U.D.); (A.G.)
- Umar Draz
- Department of Computer Science, University of Sahiwal, Sahiwal, Punjab 57000, Pakistan
- Correspondence: (M.I.); (U.D.); (A.G.)
- Arfan Ali Nagra
- Department of Computer Science, Lahore Garrison University, Lahore 54792, Pakistan; (K.M.); (A.A.N.); (M.A.)
- Muhammad Asif
- Department of Computer Science, Lahore Garrison University, Lahore 54792, Pakistan; (K.M.); (A.A.N.); (M.A.)
- Bandar M. Alshehri
- Department of Clinical Laboratory, Faculty of Applied Medical Sciences, Najran University, P.O. Box 1988, Najran 61441, Saudi Arabia;
- Adam Glowacz
- Department of Automatic Control and Robotics, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Kraków, Poland
- Correspondence: (M.I.); (U.D.); (A.G.)
- Ryszard Tadeusiewicz
- Department of Biocybernetics and Biomedical Engineering, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Kraków, Poland;
- Mater H. Mahnashi
- Department of Medicinal Chemistry, Pharmacy School, Najran University, Najran 61441, Saudi Arabia;
- Sana Yasin
- Department of Computer Science, University of Okara, Okara 56130, Pakistan;
10
Xu H, Park S, Hwang TH. Computerized Classification of Prostate Cancer Gleason Scores from Whole Slide Images. IEEE/ACM Trans Comput Biol Bioinform 2020;17:1871-1882. [PMID: 31536012; DOI: 10.1109/tcbb.2019.2941195]
Abstract
Histological Gleason grading of tumor patterns is one of the most powerful prognostic predictors in prostate cancer. However, manual analysis and grading performed by pathologists are typically subjective and time-consuming. In this paper, we present an automatic technique for Gleason grading of prostate cancer from H&E stained whole slide pathology images using a set of novel completed and statistical local binary pattern (CSLBP) descriptors. First, the technique divides the whole slide image (WSI) into a set of small image tiles, where salient tumor tiles with high nuclei densities are selected for analysis. The CSLBP texture features that encode pixel intensity variations from circularly surrounding neighborhoods are extracted from salient image tiles to characterize different Gleason patterns. Finally, the CSLBP texture features computed from all tiles are integrated and utilized by the multi-class support vector machine (SVM) that assigns patient slides with different Gleason scores such as 6, 7, or ≥ 8. Experiments have been performed on 312 different patient cases selected from the cancer genome atlas (TCGA) and have achieved superior performances over state-of-the-art texture descriptors and baseline methods including deep learning models for prostate cancer Gleason grading.
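The sketch below mirrors the tile-level pipeline with a plain uniform LBP histogram standing in for the paper's CSLBP descriptor and a multi-class SVM on top; the tiles and labels are placeholders.

```python
# Tile-level texture classification: uniform LBP histograms + multi-class SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(tile, P=8, R=1):
    lbp = local_binary_pattern(tile, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

tiles = [(np.random.rand(128, 128) * 255).astype(np.uint8) for _ in range(60)]
X = np.array([lbp_histogram(t) for t in tiles])
y = np.random.choice([6, 7, 8], size=60)          # stand-in Gleason-score labels per tile
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict(X[:5]))
```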
11
Vu T, Lai P, Raich R, Pham A, Fern XZ, Rao UA. A Novel Attribute-Based Symmetric Multiple Instance Learning for Histopathological Image Analysis. IEEE Trans Med Imaging 2020;39:3125-3136. [PMID: 32305904; PMCID: PMC7561004; DOI: 10.1109/tmi.2020.2987796]
Abstract
Histopathological image analysis is a challenging task due to a diverse histology feature set as well as due to the presence of large non-informative regions in whole slide images. In this paper, we propose a multiple-instance learning (MIL) method for image-level classification as well as for annotating relevant regions in the image. In MIL, a common assumption is that negative bags contain only negative instances while positive bags contain one or more positive instances. This asymmetric assumption may be inappropriate for some application scenarios where negative bags also contain representative negative instances. We introduce a novel symmetric MIL framework associating each instance in a bag with an attribute which can be either negative, positive, or irrelevant. We extend the notion of relevance by introducing control over the number of relevant instances. We develop a probabilistic graphical model that incorporates the aforementioned paradigm and a corresponding computationally efficient inference for learning the model parameters and obtaining an instance level attribute-learning classifier. The effectiveness of the proposed method is evaluated on available histopathology datasets with promising results.
12
Karimi D, Nir G, Fazli L, Black PC, Goldenberg L, Salcudean SE. Deep Learning-Based Gleason Grading of Prostate Cancer From Histopathology Images—Role of Multiscale Decision Aggregation and Data Augmentation. IEEE J Biomed Health Inform 2020;24:1413-1426. [DOI: 10.1109/jbhi.2019.2944643]
13
Banna GL, Olivier T, Rundo F, Malapelle U, Fraggetta F, Libra M, Addeo A. The Promise of Digital Biopsy for the Prediction of Tumor Molecular Features and Clinical Outcomes Associated With Immunotherapy. Front Med (Lausanne) 2019;6:172. [PMID: 31417906; PMCID: PMC6685050; DOI: 10.3389/fmed.2019.00172]
Abstract
Immunotherapy by immune checkpoint inhibitors has emerged as an effective treatment for a slight proportion of patients with aggressive tumors. Currently, some molecular determinants, such as the expression of the programmed cell death ligand-1 (PD-L1) or the tumor mutational burden (TMB) have been used in the clinical practice as predictive biomarkers, although they fail in consistency, applicability, or reliability to precisely identify the responding patients mainly because of their spatial intratumoral heterogeneity. Therefore, new biomarkers for early prediction of patient response to immunotherapy, that could integrate several approaches, are eagerly sought. Novel methods of quantitative image analysis (such as radiomics or pathomics) might offer a comprehensive approach providing spatial and temporal information from macroscopic imaging features potentially predictive of underlying molecular drivers, tumor-immune microenvironment, tumor-related prognosis, and clinical outcome (in terms of response or toxicity) following immunotherapy. Preliminary results from radiomics and pathomics analysis have demonstrated their ability to correlate image features with PD-L1 tumor expression, high CD3 cell infiltration or CD8 cell expression, or to produce an image signature concordant with gene expression. Furthermore, the predictive power of radiomics and pathomics can be improved by combining information from other modalities, such as blood values or molecular features, leading to increase the accuracy of these models. Thus, "digital biopsy," which could be defined by non-invasive and non-consuming digital techniques provided by radiomics and pathomics, may have the potential to allow for personalized approach for cancer patients treated with immunotherapy.
Affiliation(s)
- Giuseppe Luigi Banna
- Oncology Department, United Lincolnshire Hospital Trust, Lincoln, United Kingdom
- Timothée Olivier
- Oncology Department, University Hospital Geneva, Geneva, Switzerland
- Francesco Rundo
- ADG Central R&D - STMicroelectronics of Catania, Catania, Italy
- Umberto Malapelle
- Department of Public Health, University Federico II of Naples, Naples, Italy
- Massimo Libra
- Oncologic, Clinic and General Pathology Section, Department of Biomedical and Biotechnological Sciences, University of Catania, Catania, Italy
- Alfredo Addeo
- Oncology Department, University Hospital Geneva, Geneva, Switzerland
14
García G, Colomer A, Naranjo V. First-Stage Prostate Cancer Identification on Histopathological Images: Hand-Driven versus Automatic Learning. Entropy (Basel) 2019;21:356. [PMID: 33267070; PMCID: PMC7514840; DOI: 10.3390/e21040356]
Abstract
Analysis of histopathological image supposes the most reliable procedure to identify prostate cancer. Most studies try to develop computer aid-systems to face the Gleason grading problem. On the contrary, we delve into the discrimination between healthy and cancerous tissues in its earliest stage, only focusing on the information contained in the automatically segmented gland candidates. We propose a hand-driven learning approach, in which we perform an exhaustive hand-crafted feature extraction stage combining in a novel way descriptors of morphology, texture, fractals and contextual information of the candidates under study. Then, we carry out an in-depth statistical analysis to select the most relevant features that constitute the inputs to the optimised machine-learning classifiers. Additionally, we apply for the first time on prostate segmented glands, deep-learning algorithms modifying the popular VGG19 neural network. We fine-tuned the last convolutional block of the architecture to provide the model specific knowledge about the gland images. The hand-driven learning approach, using a nonlinear Support Vector Machine, reports a slight outperforming over the rest of experiments with a final multi-class accuracy of 0.876 ± 0.026 in the discrimination between false glands (artefacts), benign glands and Gleason grade 3 glands.
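A minimal sketch of the deep-learning branch described above is given below, assuming torchvision's standard VGG19 layout (0.13+ API): all layers before the last convolutional block are frozen and a small gland-level head is trained.

```python
# VGG19 with only the last convolutional block left trainable, plus a 3-class
# gland head (artefact / benign / Gleason grade 3).
import torch
import torch.nn as nn
from torchvision.models import vgg19

model = vgg19(weights=None)                       # weights="IMAGENET1K_V1" for ImageNet init
for p in model.features[:28].parameters():        # freeze everything before the last conv block
    p.requires_grad = False
model.classifier[-1] = nn.Linear(4096, 3)         # three gland classes

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)                               # torch.Size([2, 3])
```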
Affiliation(s)
- Gabriel García
- Instituto de Investigación e Innovación en Bioingeniería (I3B), Universitat Politècnica de València (UPV), Camino de Vera s/n, 46008 Valencia, Spain
15
A fractal based approach to evaluate the progression of esophageal squamous cell dysplasia. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.09.001]
16
Wang X, Wang D, Yao Z, Xin B, Wang B, Lan C, Qin Y, Xu S, He D, Liu Y. Machine Learning Models for Multiparametric Glioma Grading With Quantitative Result Interpretations. Front Neurosci 2019;12:1046. [PMID: 30686996; PMCID: PMC6337068; DOI: 10.3389/fnins.2018.01046]
Abstract
Gliomas are the most common primary malignant brain tumors in adults. Accurate grading is crucial as therapeutic strategies are often disparate for different grades and may influence patient prognosis. This study aims to provide an automated glioma grading platform on the basis of machine learning models. In this paper, we investigate contributions of multi-parameters from multimodal data including imaging parameters or features from the Whole Slide images (WSI) and the proliferation marker Ki-67 for automated brain tumor grading. For each WSI, we extract both visual parameters such as morphology parameters and sub-visual parameters including first-order and second-order features. On the basis of machine learning models, our platform classifies gliomas into grades II, III, and IV. Furthermore, we quantitatively interpret and reveal the important parameters contributing to grading with the Local Interpretable Model-Agnostic Explanations (LIME) algorithm. The quantitative analysis and explanation may assist clinicians to better understand the disease and accordingly to choose optimal treatments for improving clinical outcomes. The performance of our grading model was evaluated with cross-validation, which randomly divided the patients into non-overlapping training and testing sets and repeatedly validated the model on the different testing sets. The primary results indicated that this modular platform approach achieved the highest grading accuracy of 0.90 ± 0.04 with support vector machine (SVM) algorithm, with grading accuracies of 0.91 ± 0.08, 0.90 ± 0.08, and 0.90 ± 0.07 for grade II, III, and IV gliomas, respectively.
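To make the interpretation step concrete, the sketch below trains an SVM on placeholder WSI-derived parameters and asks LIME for a per-case explanation; the feature names and data are illustrative, not the study's actual parameter set.

```python
# SVM grading on tabular WSI parameters plus a LIME explanation of one prediction.
import numpy as np
from sklearn.svm import SVC
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["nuclei_density", "mean_nucleus_area", "glcm_contrast", "ki67_index"]
X = np.random.rand(150, len(feature_names))
y = np.random.choice([2, 3, 4], size=150)             # WHO grades II-IV (placeholder labels)

clf = SVC(kernel="rbf", probability=True).fit(X, y)   # probability=True so LIME can query class probabilities
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["II", "III", "IV"], mode="classification")
explanation = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(explanation.as_list())                            # per-feature contributions for this case
```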
Affiliation(s)
- Xiuying Wang
- School of Information Technologies, The University of Sydney, Sydney, NSW, Australia
- Dingqian Wang
- School of Information Technologies, The University of Sydney, Sydney, NSW, Australia
- Zhigang Yao
- Department of Pathology, Provincial Hospital Affiliated to Shandong University, Jinan, China
- Bowen Xin
- School of Information Technologies, The University of Sydney, Sydney, NSW, Australia
- Bao Wang
- School of Medicine, Shandong University, Jinan, China
- Chuanjin Lan
- School of Medicine, Shandong University, Jinan, China
- Yejun Qin
- Department of Pathology, Provincial Hospital Affiliated to Shandong University, Jinan, China
- Shangchen Xu
- Department of Neurosurgery, Provincial Hospital Affiliated to Shandong University, Jinan, China
- Dazhong He
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
- Yingchao Liu
- Department of Neurosurgery, Provincial Hospital Affiliated to Shandong University, Jinan, China
17
Li X, Plataniotis KN. Novel chromaticity similarity based color texture descriptor for digital pathology image analysis. PLoS One 2018;13:e0206996. [PMID: 30419049; PMCID: PMC6231632; DOI: 10.1371/journal.pone.0206996]
Abstract
Pathology images are color in nature due to the use of chemical staining in biopsy examination. Aware of the high color diagnosticity in pathology images, this work introduces a compact rotation-invariant texture descriptor, named quantized diagnostic counter-color pattern (QDCP), for digital pathology image understanding. On the basis of color similarity quantified by the inner product of unit-length color vectors, local counter-color textons are indexed first. Then the underlined distribution of QDCP indexes is estimated by an image-wise histogram. Since QDCP is computed based on color difference directly, it is robust to small color variation usually observed in pathology images. This study also discusses QDCP's extraction, parameter settings, and feature fusion techniques in a generic pathology image analysis pipeline, and introduces two more descriptors QDCP-LBP and QDCP/LBP. Experimentation on public pathology image sets suggests that the introduced color texture descriptors, especially QDCP-LBP, outperform prior color texture features in terms of strong descriptive power, low computational complexity, and high adaptability to different image sets.
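The core chromaticity-similarity idea can be sketched in a few lines of NumPy, as below; the neighbour offset, quantisation, and histogram are illustrative, and the full QDCP texton indexing is not reproduced.

```python
# Chromaticity similarity as the inner product of unit-length RGB vectors between
# each pixel and one neighbour, quantised into a small code and histogrammed.
import numpy as np

def chromaticity_similarity(img, shift=(0, 1), n_bins=8):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    unit = img / (np.linalg.norm(img, axis=2, keepdims=True) + 1e-8)
    neigh = np.roll(unit, shift=shift, axis=(0, 1))          # shifted copy = chosen neighbour
    sim = np.clip(np.sum(unit * neigh, axis=2), 0.0, 1.0)     # cosine of the colour angle
    codes = np.minimum((sim * n_bins).astype(int), n_bins - 1)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist                                                # image-wise descriptor

rgb = np.random.rand(64, 64, 3)
print(chromaticity_similarity(rgb))
```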
Affiliation(s)
- Xingyu Li
- Multimedia Lab, The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
- Konstantinos N. Plataniotis
- Multimedia Lab, The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
18
Ren J, Karagoz K, Gatza ML, Singer EA, Sadimin E, Foran DJ, Qi X. Recurrence analysis on prostate cancer patients with Gleason score 7 using integrated histopathology whole-slide images and genomic data through deep neural networks. J Med Imaging (Bellingham) 2018;5:047501. [PMID: 30840742; PMCID: PMC6237203; DOI: 10.1117/1.jmi.5.4.047501]
Abstract
Prostate cancer is the most common nonskin-related cancer, affecting one in seven men in the United States. Gleason score, a sum of the primary and secondary Gleason patterns, is one of the best predictors of prostate cancer outcomes. Recently, significant progress has been made in molecular subtyping prostate cancer through the use of genomic sequencing. It has been established that prostate cancer patients presented with a Gleason score 7 show heterogeneity in both disease recurrence and survival. We built a unified system using publicly available whole-slide images and genomic data of histopathology specimens through deep neural networks to identify a set of computational biomarkers. Using a survival model, the experimental results on the public prostate dataset showed that the computational biomarkers extracted by our approach had a hazard ratio of 5.73 and a C-index of 0.74, which were higher than standard clinical prognostic factors and other engineered image texture features. Collectively, the results of this study highlight the important role of neural network analysis of prostate cancer and the potential of such approaches in other precision medicine applications.
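A sketch of the survival-analysis step on synthetic data is given below, assuming the lifelines package is available; it shows how a hazard ratio and concordance index would be read off a Cox model fitted to an image-derived biomarker.

```python
# Cox proportional-hazards fit on a synthetic cohort with one image-derived biomarker.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "image_biomarker": rng.normal(size=200),            # e.g. a learned WSI feature score
    "time_to_recurrence": rng.exponential(60, size=200),
    "recurred": rng.integers(0, 2, size=200),
})
cph = CoxPHFitter().fit(df, duration_col="time_to_recurrence", event_col="recurred")
print(np.exp(cph.params_["image_biomarker"]))            # hazard ratio for the biomarker
print(cph.concordance_index_)                             # C-index of the fitted model
```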
Affiliation(s)
- Jian Ren
- Rutgers, the State University of New Jersey, Department of Electrical and Computer Engineering, Piscataway, New Jersey, United States
- Kubra Karagoz
- Rutgers Cancer Institute of New Jersey, Department of Radiation Oncology, New Brunswick, New Jersey, United States
- Michael L. Gatza
- Rutgers Cancer Institute of New Jersey, Department of Radiation Oncology, New Brunswick, New Jersey, United States
- Eric A. Singer
- Rutgers Cancer Institute of New Jersey, Section of Urologic Oncology, New Brunswick, New Jersey, United States
- Evita Sadimin
- Rutgers Cancer Institute of New Jersey, Department of Pathology and Laboratory Medicine, New Brunswick, New Jersey, United States
- David J. Foran
- Rutgers Cancer Institute of New Jersey, Department of Pathology and Laboratory Medicine, New Brunswick, New Jersey, United States
- Xin Qi
- Rutgers Cancer Institute of New Jersey, Department of Pathology and Laboratory Medicine, New Brunswick, New Jersey, United States
19
Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric Pathology Image Classification Using Stepwise Fine-Tuning for Deep Neural Networks. J Healthc Eng 2018;2018:8961781. [PMID: 30034677; PMCID: PMC6033298; DOI: 10.1155/2018/8961781]
Abstract
Deep learning using convolutional neural networks (CNNs) is a distinguished tool for many image classification tasks. Due to its outstanding robustness and generalization, it is also expected to play a key role to facilitate advanced computer-aided diagnosis (CAD) for pathology images. However, the shortage of well-annotated pathology image data for training deep neural networks has become a major issue at present because of the high-cost annotation upon pathologist's professional observation. Faced with this problem, transfer learning techniques are generally used to reinforcing the capacity of deep neural networks. In order to further boost the performance of the state-of-the-art deep neural networks and alleviate insufficiency of well-annotated data, this paper presents a novel stepwise fine-tuning-based deep learning scheme for gastric pathology image classification and establishes a new type of target-correlative intermediate datasets. Our proposed scheme is deemed capable of making the deep neural network imitating the pathologist's perception manner and of acquiring pathology-related knowledge in advance, but with very limited extra cost in data annotation. The experiments are conducted with both well-annotated gastric pathology data and the proposed target-correlative intermediate data on several state-of-the-art deep neural networks. The results congruously demonstrate the feasibility and superiority of our proposed scheme for boosting the classification performance.
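The stepwise recipe can be pictured as two consecutive fine-tuning passes over the same network, first on an intermediate target-correlative dataset and then on the target gastric slides; the toy sketch below is an assumed reading of that scheme, not the authors' code.

```python
# Schematic two-stage (stepwise) fine-tuning of one network on toy data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))
criterion = nn.CrossEntropyLoss()

def fine_tune(model, loader, epochs, lr):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            opt.step()

# Stage 1: intermediate, easily annotated data; stage 2: target pathology data,
# typically with a smaller learning rate so stage-1 knowledge is preserved.
toy_loader = [(torch.randn(4, 3, 64, 64), torch.randint(0, 2, (4,))) for _ in range(3)]
fine_tune(model, toy_loader, epochs=1, lr=1e-2)   # stage 1 (intermediate dataset)
fine_tune(model, toy_loader, epochs=1, lr=1e-3)   # stage 2 (target gastric dataset)
print("done")
```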
Affiliation(s)
- Jia Qu
- Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan
- Nobuyuki Hiruta
- Department of Surgical Pathology, Toho University Sakura Medical Center, Sakura 285-8741, Japan
- Kensuke Terai
- Department of Surgical Pathology, Toho University Sakura Medical Center, Sakura 285-8741, Japan
- Hirokazu Nosato
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
- Masahiro Murakawa
- Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
- Hidenori Sakanashi
- Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
20
Sahran S, Albashish D, Abdullah A, Shukor NA, Hayati Md Pauzi S. Absolute cosine-based SVM-RFE feature selection method for prostate histopathological grading. Artif Intell Med 2018;87:78-90. [PMID: 29680688; DOI: 10.1016/j.artmed.2018.04.002]
Abstract
OBJECTIVE Feature selection (FS) methods are widely used in grading and diagnosing prostate histopathological images. In this context, FS is based on the texture features obtained from the lumen, nuclei, cytoplasm and stroma, all of which are important tissue components. However, it is difficult to represent the high-dimensional textures of these tissue components. To solve this problem, we propose a new FS method that enables the selection of features with minimal redundancy in the tissue components. METHODOLOGY We categorise tissue images based on the texture of individual tissue components via the construction of a single classifier and also construct an ensemble learning model by merging the values obtained by each classifier. Another issue that arises is overfitting due to the high-dimensional texture of individual tissue components. We propose a new FS method, SVM-RFE(AC), that integrates a Support Vector Machine-Recursive Feature Elimination (SVM-RFE) embedded procedure with an absolute cosine (AC) filter method to prevent redundancy in the selected features of the SV-RFE and an unoptimised classifier in the AC. RESULTS We conducted experiments on H&E histopathological prostate and colon cancer images with respect to three prostate classifications, namely benign vs. grade 3, benign vs. grade 4 and grade 3 vs. grade 4. The colon benchmark dataset requires a distinction between grades 1 and 2, which are the most difficult cases to distinguish in the colon domain. The results obtained by both the single and ensemble classification models (which uses the product rule as its merging method) confirm that the proposed SVM-RFE(AC) is superior to the other SVM and SVM-RFE-based methods. CONCLUSION We developed an FS method based on SVM-RFE and AC and successfully showed that its use enabled the identification of the most crucial texture feature of each tissue component. Thus, it makes possible the distinction between multiple Gleason grades (e.g. grade 3 vs. grade 4) and its performance is far superior to other reported FS methods.
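A rough composition of the two ingredients named above is sketched below with scikit-learn and NumPy: an absolute-cosine filter that discards near-collinear feature columns, followed by SVM-RFE on the survivors; the 0.95 threshold and data are illustrative.

```python
# Absolute-cosine pre-filter + SVM-RFE feature selection on placeholder texture features.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

X = np.random.rand(120, 40)                       # tissue-component texture features
y = np.random.randint(0, 2, size=120)             # e.g. Gleason grade 3 vs. grade 4

# Absolute-cosine filter: keep only features not too similar to already kept ones.
Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
cos = np.abs(Xn.T @ Xn)
keep = []
for j in range(X.shape[1]):
    if all(cos[j, k] < 0.95 for k in keep):
        keep.append(j)

# SVM-RFE on the remaining features (a linear kernel exposes coef_ for ranking).
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X[:, keep], y)
selected = [keep[i] for i in np.flatnonzero(rfe.support_)]
print("selected feature indices:", selected)
```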
Affiliation(s)
- Shahnorbanun Sahran
- Pattern Recognition Research Group, Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, University Kebangsaan Malaysia, 43600 Bangi, Malaysia.
- Dheeb Albashish
- Computer Science Department, Prince Abdullah Bin Ghazi Faculty of Information Technology, Al-Balqa Applied University, Jordan.
- Azizi Abdullah
- Pattern Recognition Research Group, Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, University Kebangsaan Malaysia, 43600 Bangi, Malaysia.
- Nordashima Abd Shukor
- Department of Pathology, University Kebangsaan Malaysia Medical Center, 56000 Batu 9 Cheras, Malaysia.
- Suria Hayati Md Pauzi
- Department of Pathology, University Kebangsaan Malaysia Medical Center, 56000 Batu 9 Cheras, Malaysia.
21
Reljin N, Slavkovic-Ilic M, Tapia C, Cihoric N, Stankovic S. Multifractal-based nuclei segmentation in fish images. Biomed Microdevices 2018;19:67. [PMID: 28776236; PMCID: PMC5543204; DOI: 10.1007/s10544-017-0208-x]
Abstract
The method for nuclei segmentation in fluorescence in-situ hybridization (FISH) images, based on the inverse multifractal analysis (IMFA) is proposed. From the blue channel of the FISH image in RGB format, the matrix of Holder exponents, with one-by-one correspondence with the image pixels, is determined first. The following semi-automatic procedure is proposed: initial nuclei segmentation is performed automatically from the matrix of Holder exponents by applying predefined hard thresholding; then the user evaluates the result and is able to refine the segmentation by changing the threshold, if necessary. After successful nuclei segmentation, the HER2 (human epidermal growth factor receptor 2) scoring can be determined in usual way: by counting red and green dots within segmented nuclei, and finding their ratio. The IMFA segmentation method is tested over 100 clinical cases, evaluated by skilled pathologist. Testing results show that the new method has advantages compared to already reported methods.
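Only the thresholding-and-labelling step is sketched below; the matrix of Holder exponents produced by the inverse multifractal analysis is assumed to be computed elsewhere, and the cut-off value is a placeholder for the interactively refined threshold.

```python
# Toy sketch: hard-threshold a Holder-exponent matrix and label candidate nuclei.
import numpy as np
from scipy import ndimage

alpha = np.random.rand(256, 256)          # stand-in Holder-exponent matrix from the blue channel
threshold = 0.35                          # predefined cut-off, refined interactively in the paper

nuclei_mask = alpha < threshold           # low exponents taken here as candidate nuclei
labels, n_nuclei = ndimage.label(nuclei_mask)
sizes = ndimage.sum(nuclei_mask, labels, index=range(1, n_nuclei + 1))
print(n_nuclei, "candidate nuclei; mean area:", sizes.mean() if n_nuclei else 0)
```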
Affiliation(s)
- Nikola Reljin
- Academic Technology Services, Princeton University, Princeton, NJ USA
- Marijeta Slavkovic-Ilic
- Innovation Center of the School of Electrical Engineering, University of Belgrade, Belgrade, Serbia
- Coya Tapia
- Division of Clinical Pathology, Institute of Pathology, University of Bern, Bern, Switzerland
- Nikola Cihoric
- Department of Radiation Oncology, Bern University Hospital, University of Bern, Bern, Switzerland
- Srdjan Stankovic
- School of Electrical Engineering, University of Belgrade, Belgrade, Serbia
22
Ren J, Karagoz K, Gatza M, Foran DJ, Qi X. Differentiation among prostate cancer patients with Gleason score of 7 using histopathology whole-slide image and genomic data. Proc SPIE Int Soc Opt Eng 2018;10579:1057904. [PMID: 30662142; PMCID: PMC6338219; DOI: 10.1117/12.2293193]
Abstract
Prostate cancer is the most common non-skin related cancer affecting 1 in 7 men in the United States. Treatment of patients with prostate cancer still remains a difficult decision-making process that requires physicians to balance clinical benefits, life expectancy, comorbidities, and treatment-related side effects. Gleason score (a sum of the primary and secondary Gleason patterns) solely based on morphological prostate glandular architecture has shown as one of the best predictors of prostate cancer outcome. Significant progress has been made on molecular subtyping prostate cancer delineated through the increasing use of gene sequencing. Prostate cancer patients with Gleason score of 7 show heterogeneity in recurrence and survival outcomes. Therefore, we propose to assess the correlation between histopathology images and genomic data with disease recurrence in prostate tumors with a Gleason 7 score to identify prognostic markers. In the study, we identify image biomarkers within tissue WSIs by modeling the spatial relationship from automatically created patches as a sequence within WSI by adopting a recurrence network model, namely long short-term memory (LSTM). Our preliminary results demonstrate that integrating image biomarkers from CNN with LSTM and genomic pathway scores, is more strongly correlated with patients recurrence of disease compared to standard clinical markers and engineered image texture features. The study further demonstrates that prostate cancer patients with Gleason score of 4+3 have a higher risk of disease progression and recurrence compared to prostate cancer patients with Gleason score of 3+4.
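The patch-sequence idea can be sketched as a small CNN encoder feeding an LSTM, as below; the encoder, sequence length, and recurrence head are stand-ins rather than the study's trained model.

```python
# Per-patch CNN embeddings ordered as a sequence within the WSI, summarised by an
# LSTM into a slide-level recurrence score.
import torch
import torch.nn as nn

class PatchSequenceLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(                      # tiny stand-in patch CNN
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                    # recurrence logit

    def forward(self, patches):                             # (slides, n_patches, 3, H, W)
        b, n = patches.shape[:2]
        feats = self.encoder(patches.flatten(0, 1)).view(b, n, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1]).squeeze(-1)

scores = PatchSequenceLSTM()(torch.randn(2, 16, 3, 64, 64))  # 2 slides, 16 ordered patches each
print(scores.shape)                                          # torch.Size([2])
```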
Affiliation(s)
- Jian Ren
- Dept. of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ, USA
- Kubra Karagoz
- Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
- Michael Gatza
- Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
- David J Foran
- Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
- Xin Qi
- Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
23
Brahmaiah Naik J, Srinivasarao C, Babu Kande G. Local vector pattern with global index angles for a content-based image retrieval system. J Assoc Inf Sci Technol 2017. [DOI: 10.1002/asi.23907]
Affiliation(s)
- Jatothu Brahmaiah Naik
- Research Scholar, JNTUK, Kakinada, Andhra Pradesh, India
- Assistant Professor, Department of Electronics & Communication Engineering, Vignan's Lara Institute of Technology & Science, Andhra Pradesh, India
- Chanamallu Srinivasarao
- Professor, ECE Department, JNTUK University College of Engineering, Vizianagaram, Andhra Pradesh, India
- Giri Babu Kande
- Professor & HoD, ECE Department, Vasireddy Venkatadri Institute of Technology, Nambur, Guntur (Dt), Andhra Pradesh, India
24
Jia Z, Huang X, Chang EIC, Xu Y. Constrained Deep Weak Supervision for Histopathology Image Segmentation. IEEE Trans Med Imaging 2017;36:2376-2388. [PMID: 28692971; DOI: 10.1109/tmi.2017.2724070]
Abstract
In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. This paper is under a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints to our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly-supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints about positive instances are introduced in our approach to effectively explore additional weakly supervised information that is easy to obtain and enjoy a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates the state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.
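A toy version of the weak-supervision objective is sketched below: pixel probabilities from a small FCN are pooled into an image-level prediction with a generalized mean, and an area term penalises positive images whose predicted cancerous fraction drifts from a rough prior; the pooling exponent and prior are assumptions, not the paper's exact constraints.

```python
# Toy MIL-style weak-supervision loss over pixel predictions from a small FCN.
import torch
import torch.nn as nn
import torch.nn.functional as F

fcn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 1))                    # per-pixel logit map

def weak_mil_loss(images, image_labels, area_prior=0.3, r=4.0, lam=0.1):
    probs = torch.sigmoid(fcn(images)).flatten(1)            # (B, H*W) pixel probabilities
    bag_prob = probs.pow(r).mean(dim=1).pow(1.0 / r)         # generalized-mean pooling
    mil = F.binary_cross_entropy(bag_prob, image_labels)
    area = (probs.mean(dim=1) - area_prior).pow(2)           # area constraint, positive bags only
    return mil + lam * (area * image_labels).mean()

imgs = torch.randn(4, 3, 64, 64)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])                  # image-level (weak) labels
print(weak_mil_loss(imgs, labels))
```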
25
Chennubhotla C, Clarke LP, Fedorov A, Foran D, Harris G, Helton E, Nordstrom R, Prior F, Rubin D, Saltz JH, Shalley E, Sharma A. An Assessment of Imaging Informatics for Precision Medicine in Cancer. Yearb Med Inform 2017;26:110-119. [PMID: 29063549; DOI: 10.15265/iy-2017-041]
Abstract
Objectives: Precision medicine requires the measurement, quantification, and cataloging of medical characteristics to identify the most effective medical intervention. However, the amount of available data exceeds our current capacity to extract meaningful information. We examine the informatics needs to achieve precision medicine from the perspective of quantitative imaging and oncology. Methods: The National Cancer Institute (NCI) organized several workshops on the topic of medical imaging and precision medicine. The observations and recommendations are summarized herein. Results: Recommendations include: use of standards in data collection and clinical correlates to promote interoperability; data sharing and validation of imaging tools; clinician's feedback in all phases of research and development; use of open-source architecture to encourage reproducibility and reusability; use of challenges which simulate real-world situations to incentivize innovation; partnership with industry to facilitate commercialization; and education in academic communities regarding the challenges involved with translation of technology from the research domain to clinical utility and the benefits of doing so. Conclusions: This article provides a survey of the role and priorities for imaging informatics to help advance quantitative imaging in the era of precision medicine. While these recommendations were drawn from oncology, they are relevant and applicable to other clinical domains where imaging aids precision medicine.
Collapse
|
26
|
Breast cancer cell nuclei classification in histopathology images using deep neural networks. Int J Comput Assist Radiol Surg 2017; 13:179-191. [DOI: 10.1007/s11548-017-1663-9] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2017] [Accepted: 08/18/2017] [Indexed: 12/12/2022]
|
27
|
Xu Y, Jia Z, Wang LB, Ai Y, Zhang F, Lai M, Chang EIC. Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features. BMC Bioinformatics 2017; 18:281. [PMID: 28549410 PMCID: PMC5446756 DOI: 10.1186/s12859-017-1685-x] [Citation(s) in RCA: 186] [Impact Index Per Article: 23.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2016] [Accepted: 05/15/2017] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Histopathology image analysis is a gold standard for cancer recognition and diagnosis. Automatic analysis of histopathology images can help pathologists diagnose tumor and cancer subtypes, alleviating their workload. There are two basic types of tasks in digital histopathology image analysis: image classification and image segmentation. Typical problems with histopathology images that hamper automatic analysis include complex clinical representations, limited quantities of training images in a dataset, and the extremely large size of individual images (usually up to gigapixels). The extremely large size of a single image also means that a histopathology image dataset is considered large-scale, even if the number of images in the dataset is limited. RESULTS In this paper, we propose leveraging deep convolutional neural network (CNN) activation features to perform classification, segmentation and visualization in large-scale tissue histopathology images. Our framework transfers features extracted from CNNs trained on a large natural image database, ImageNet, to histopathology images. We also explore the characteristics of CNN features by visualizing the response of individual neuron components in the last hidden layer. Some of these characteristics reveal biological insights that have been verified by pathologists. According to our experiments, the proposed framework has shown state-of-the-art performance on a brain tumor dataset from the MICCAI 2014 Brain Tumor Digital Pathology Challenge and a colon cancer histopathology image dataset. CONCLUSIONS The proposed framework is a simple, efficient and effective system for automatic histopathology image analysis. We successfully transfer ImageNet knowledge as deep convolutional activation features to the classification and segmentation of histopathology images with little training data. CNN features are significantly more powerful than expert-designed features.
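A minimal sketch of the general transfer-learning recipe this abstract describes: reuse an ImageNet-pretrained CNN as a fixed feature extractor and train a lightweight classifier on histopathology patches. The sketch uses Keras' VGG16 rather than the authors' specific network, and random arrays stand in for real stained patches; it illustrates the workflow only.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import LinearSVC

# ImageNet-pretrained VGG16 as a fixed feature extractor: global average pooling
# of the last convolutional block yields one 512-d activation vector per patch.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

# Stand-in data: 20 RGB patches (224x224) with binary labels (tumor / non-tumor).
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(20, 224, 224, 3)).astype("float32")
labels = rng.integers(0, 2, size=20)

features = extractor.predict(preprocess_input(patches), verbose=0)
clf = LinearSVC().fit(features, labels)   # shallow classifier on deep activation features
print(clf.score(features, labels))        # training accuracy on the toy data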
Collapse
Affiliation(s)
- Yan Xu
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing, China. .,Microsoft Research, Beijing, China.
| | - Zhipeng Jia
- Microsoft Research, Beijing, China.,Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
| | - Liang-Bo Wang
- Microsoft Research, Beijing, China.,Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
| | - Yuqing Ai
- Microsoft Research, Beijing, China.,Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
| | - Fang Zhang
- Microsoft Research, Beijing, China.,Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
| | - Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou, China
| | | |
Collapse
|
28
|
Takahashi R, Kajikawa Y. Computer-aided diagnosis: A survey with bibliometric analysis. Int J Med Inform 2017; 101:58-67. [DOI: 10.1016/j.ijmedinf.2017.02.004] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2016] [Revised: 01/28/2017] [Accepted: 02/04/2017] [Indexed: 12/18/2022]
|
29
|
Kwak JT, Hewitt SM. Multiview boosting digital pathology analysis of prostate cancer. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 142:91-99. [PMID: 28325451 PMCID: PMC8171579 DOI: 10.1016/j.cmpb.2017.02.023] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2016] [Revised: 02/04/2017] [Accepted: 02/15/2017] [Indexed: 05/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Various digital pathology tools have been developed to aid in analyzing tissues and improving cancer pathology. The multi-resolution nature of cancer pathology, however, has not been fully analyzed and utilized. Here, we develop an automated, cooperative, and multi-resolution method for improving prostate cancer diagnosis. METHODS Digitized tissue specimen images are obtained from 5 tissue microarrays (TMAs). The TMAs include 70 benign and 135 cancer samples (TMA1), 74 benign and 89 cancer samples (TMA2), 70 benign and 115 cancer samples (TMA3), 79 benign and 82 cancer samples (TMA4), and 72 benign and 86 cancer samples (TMA5). The tissue specimen images are segmented using intensity- and texture-based features. Using the segmentation results, a number of morphological features from lumens and epithelial nuclei are computed to characterize tissues at different resolutions. Applying a multiview boosting algorithm, tissue characteristics obtained from differing resolutions are cooperatively combined to achieve accurate cancer detection. RESULTS In segmenting prostate tissues, the multiview boosting method achieved an AUC ≥ 0.97 using TMA1. For detecting cancers, the multiview boosting method achieved an AUC of 0.98 (95% CI: 0.97-0.99) when trained on TMA2 and tested on TMA3, TMA4, and TMA5. The proposed method was superior to single-view approaches utilizing features from a single resolution or merging features from all the resolutions. Moreover, the performance of the proposed method was insensitive to the choice of the training dataset. Trained on TMA3, TMA4, and TMA5, the proposed method obtained an AUC of 0.97 (95% CI: 0.96-0.98), 0.98 (95% CI: 0.96-0.99), and 0.97 (95% CI: 0.96-0.98), respectively. CONCLUSIONS The multiview boosting method is capable of integrating information from multiple resolutions in an effective and efficient fashion and identifying cancers with high accuracy. The multiview boosting method holds great potential for improving digital pathology tools and research.
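The key idea, classifiers trained on features extracted at different resolutions whose outputs are combined, can be illustrated with a plain soft-voting ensemble. The sketch below uses AdaBoost per "view" and simple probability averaging rather than the authors' cooperative multiview boosting algorithm, and the per-resolution feature matrices are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_samples = 200
labels = rng.integers(0, 2, size=n_samples)          # benign (0) vs cancer (1)

# One feature matrix per resolution level ("view"), e.g. nuclear morphology at
# high power, lumen/gland architecture at low power. Synthetic stand-ins here.
views = {
    "5x":  rng.normal(size=(n_samples, 12)) + labels[:, None] * 0.5,
    "10x": rng.normal(size=(n_samples, 20)) + labels[:, None] * 0.4,
    "20x": rng.normal(size=(n_samples, 30)) + labels[:, None] * 0.3,
}

# Train one boosted classifier per view, then average their predicted probabilities.
models = {name: AdaBoostClassifier(n_estimators=50).fit(X, labels)
          for name, X in views.items()}
avg_prob = np.mean([models[name].predict_proba(views[name])[:, 1]
                    for name in views], axis=0)
print("training accuracy:", np.mean((avg_prob > 0.5) == labels))
```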
Collapse
Affiliation(s)
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Korea.
| | - Stephen M Hewitt
- Tissue Array Research Program, Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, National Institutes of Health, MD 20852, USA
| |
Collapse
|
30
|
Ren J, Sadimin E, Foran DJ, Qi X. Computer aided analysis of prostate histopathology images to support a refined Gleason grading system. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2017; 10133. [PMID: 30828124 DOI: 10.1117/12.2253887] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
The Gleason grading system used to render prostate cancer diagnosis has recently been updated to allow more accurate grade stratification and higher prognostic discrimination when compared to the traditional grading system. In spite of progress made in trying to standardize the grading process, there still remains approximately a 30% grading discrepancy between the scores rendered by general pathologists and those provided by experts while reviewing needle biopsies for Gleason pattern 3 and 4, which accounts for more than 70% of daily prostate tissue slides at most institutions. We propose a new computational imaging method for Gleason pattern 3 and 4 classification, which better matches the newly established prostate cancer grading system. The computer-aided analysis method includes two phases. First, the boundary of each glandular region is automatically segmented using a deep convolutional neural network. Second, color, shape and texture features are extracted from superpixels corresponding to the outer and inner glandular regions and are subsequently forwarded to a random forest classifier to give a gradient score between 3 and 4 for each delineated glandular region. The F1 score for glandular segmentation is 0.8460 and the classification accuracy is 0.83±0.03.
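As an illustration of the second phase described above (region-level features fed to a random forest), the sketch below uses SLIC superpixels from scikit-image and simple per-region colour statistics. It is a schematic stand-in rather than the authors' pipeline, and the synthetic image and toy labels replace a real H&E region with pathologist annotations.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))                    # stand-in for an H&E glandular region

# Partition the region into superpixels and compute simple colour features per superpixel.
segments = slic(image, n_segments=100, compactness=10, start_label=0)
features, regions = [], np.unique(segments)
for r in regions:
    pix = image[segments == r]
    features.append(np.concatenate([pix.mean(axis=0), pix.std(axis=0)]))
features = np.array(features)

# Toy labels (Gleason pattern 3 vs 4) per superpixel; a real system would use
# expert annotations of the delineated glandular regions instead.
labels = rng.integers(3, 5, size=len(regions))
clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
print(clf.predict(features[:5]))
```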
Collapse
Affiliation(s)
- Jian Ren
- Dept. of Electrical and Computer Engineering, Rutgers, The State University of NJ
| | - Evita Sadimin
- Cancer Institute of New Jersey, Rutgers, The State University of NJ
| | - David J Foran
- Cancer Institute of New Jersey, Rutgers, The State University of NJ
| | - Xin Qi
- Cancer Institute of New Jersey, Rutgers, The State University of NJ
| |
Collapse
|
31
|
Binary coordinate ascent: An efficient optimization technique for feature subset selection for machine learning. Knowl Based Syst 2016. [DOI: 10.1016/j.knosys.2016.07.026] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
32
|
|
33
|
Liu L, Tian Z, Zhang Z, Fei B. Computer-aided Detection of Prostate Cancer with MRI: Technology and Applications. Acad Radiol 2016; 23:1024-46. [PMID: 27133005 PMCID: PMC5355004 DOI: 10.1016/j.acra.2016.03.010] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2015] [Revised: 03/18/2016] [Accepted: 03/21/2016] [Indexed: 01/10/2023]
Abstract
One in six men will develop prostate cancer in his lifetime. Early detection and accurate diagnosis of the disease can improve cancer survival and reduce treatment costs. Recently, imaging of prostate cancer has greatly advanced since the introduction of multiparametric magnetic resonance imaging (mp-MRI). Mp-MRI consists of T2-weighted sequences combined with functional sequences including dynamic contrast-enhanced MRI, diffusion-weighted MRI, and magnetic resonance spectroscopy imaging. Because of the big data and variations in imaging sequences, detection can be affected by multiple factors such as observer variability and visibility and complexity of the lesions. To improve quantitative assessment of the disease, various computer-aided detection systems have been designed to help radiologists in their clinical practice. This review paper presents an overview of literatures on computer-aided detection of prostate cancer with mp-MRI, which include the technology and its applications. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application.
Collapse
Affiliation(s)
- Lizhi Liu
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329; Center of Medical Imaging and Image-guided Therapy, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, 651 Dongfeng Road East, Guangzhou, 510060, China
| | - Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329
| | - Zhenfeng Zhang
- Center of Medical Imaging and Image-guided Therapy, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, 651 Dongfeng Road East, Guangzhou, 510060, China
| | - Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, 1841 Clifton Road NE, Atlanta, Georgia 30329; Winship Cancer Institute of Emory University, 1841 Clifton Road NE, Atlanta, Georgia 30329.
| |
Collapse
|
34
|
|
35
|
Kather JN, Weis CA, Bianconi F, Melchers SM, Schad LR, Gaiser T, Marx A, Zöllner FG. Multi-class texture analysis in colorectal cancer histology. Sci Rep 2016; 6:27988. [PMID: 27306927 PMCID: PMC4910082 DOI: 10.1038/srep27988] [Citation(s) in RCA: 168] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2016] [Accepted: 05/25/2016] [Indexed: 02/08/2023] Open
Abstract
Automatic recognition of different tissue types in histological images is an essential part in the digital pathology toolbox. Texture analysis is commonly used to address this problem; mainly in the context of estimating the tumour/stroma ratio on histological samples. However, although histological images typically contain more than two tissue types, only a few studies have addressed the multi-class problem. For colorectal cancer, one of the most prevalent tumour types, there are in fact no published results on multiclass texture separation. In this paper we present a new dataset of 5,000 histological images of human colorectal cancer including eight different types of tissue. We used this set to assess the classification performance of a wide range of texture descriptors and classifiers. As a result, we found an optimal classification strategy that markedly outperformed traditional methods, improving the state of the art for tumour-stroma separation from 96.9% to 98.6% accuracy and setting a new standard for multiclass tissue separation (87.4% accuracy for eight classes). We make our dataset of histological images publicly available under a Creative Commons license and encourage other researchers to use it as a benchmark for their studies.
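To make the texture-descriptor side of this concrete, the following sketch computes grey-level co-occurrence (Haralick-style) features with scikit-image and trains a linear SVM over several tissue classes. The random patches are placeholders for the published dataset, and the specific descriptor/classifier combination the authors found optimal is not reproduced here. (Recent scikit-image releases spell the functions graycomatrix/graycoprops; older releases use the "grey" spelling.)

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch):
    """Contrast/homogeneity/energy/correlation from grey-level co-occurrence matrices."""
    glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
n_classes, n_per_class = 8, 25                 # e.g. tumour, stroma, mucosa, debris, ...
X, y = [], []
for c in range(n_classes):
    for _ in range(n_per_class):
        patch = rng.integers(0, 256, size=(150, 150)).astype(np.uint8)  # toy grey patch
        X.append(glcm_features(patch))
        y.append(c)

clf = SVC(kernel="linear").fit(np.array(X), y)
print("training accuracy:", clf.score(np.array(X), y))
```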
Collapse
Affiliation(s)
- Jakob Nikolas Kather
- Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Institute of Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Cleo-Aron Weis
- Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | | | - Susanne M. Melchers
- Department of Dermatology, Venereology and Allergology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | - Lothar R. Schad
- Institute of Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Timo Gaiser
- Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | - Alexander Marx
- Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | - Frank Gerrit Zöllner
- Institute of Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| |
Collapse
|
36
|
Kwak JT, Hewitt SM, Kajdacsy-Balla AA, Sinha S, Bhargava R. Automated prostate tissue referencing for cancer detection and diagnosis. BMC Bioinformatics 2016; 17:227. [PMID: 27247129 PMCID: PMC4888626 DOI: 10.1186/s12859-016-1086-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2015] [Accepted: 05/17/2016] [Indexed: 01/21/2023] Open
Abstract
Background The current practice of histopathology review is limited in speed and accuracy. The current diagnostic paradigm does not fully describe the complex and complicated patterns of cancer. To address these needs, we develop an automated and objective system that facilitates a comprehensive and easy information management and decision-making. We also develop a tissue similarity measure scheme to broaden our understanding of tissue characteristics. Results The system includes a database of previously evaluated prostate tissue images, clinical information and a tissue retrieval process. In the system, a tissue is characterized by its morphology. The retrieval process seeks to find the closest matching cases with the tissue of interest. Moreover, we define 9 morphologic criteria by which a pathologist arrives at a histomorphologic diagnosis. Based on the 9 criteria, true tissue similarity is determined and serves as the gold standard of tissue retrieval. Here, we found a minimum of 4 and 3 matching cases, out of 5, for ~80 % and ~60 % of the queries when a match was defined as the tissue similarity score ≥5 and ≥6, respectively. We were also able to examine the relationship between tissues beyond the Gleason grading system due to the tissue similarity scoring system. Conclusions Providing the closest matching cases and their clinical information with pathologists will help to conduct consistent and reliable diagnoses. Thus, we expect the system to facilitate quality maintenance and quality improvement of cancer pathology. Electronic supplementary material The online version of this article (doi:10.1186/s12859-016-1086-6) contains supplementary material, which is available to authorized users.
Collapse
Affiliation(s)
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, Seoul, 05006, Korea
| | - Stephen M Hewitt
- Tissue Array Research Program, Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, 20850, USA
| | | | - Saurabh Sinha
- Department of Computer Science, University of Illinois at Urbana-Champaign, 2122 Siebel Center, 201 N. Goodwin Avenue, Urbana, IL, 61801, USA.
| | - Rohit Bhargava
- Beckman Institute for Advanced Science and Technology, Department of Bioengineering, Department of Mechanical Science and Engineering, Electrical and Computer Engineering, Chemical and Biomolecular Engineering and University of Illinois Cancer Center, University of Illinois at Urbana-Champaign, 4265 Beckman Institute 405 N. Mathews Avenue, Urbana, IL, 61801, USA.
| |
Collapse
|
37
|
Niazi MKK, Zynger DL, Clinton SK, Chen J, Koyuturk M, LaFramboise T, Gurcan M. Visually Meaningful Histopathological Features for Automatic Grading of Prostate Cancer. IEEE J Biomed Health Inform 2016; 21:1027-1038. [PMID: 28113734 DOI: 10.1109/jbhi.2016.2565515] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Histopathologic features, particularly Gleason grading system, have contributed significantly to the diagnosis, treatment, and prognosis of prostate cancer for decades. However, prostate cancer demonstrates enormous heterogeneity in biological behavior, thus establishing improved prognostic and predictive markers is particularly important to personalize therapy of men with clinically localized and newly diagnosed malignancy. Many automated grading systems have been developed for Gleason grading but acceptance in the medical community has been lacking due to poor interpretability. To overcome this problem, we developed a set of visually meaningful features to differentiate between low- and high-grade prostate cancer. The visually meaningful feature set consists of luminal and architectural features. For luminal features, we compute: 1) the shortest path from the nuclei to their closest luminal spaces; 2) ratio of the epithelial nuclei to the total number of nuclei. A nucleus is considered an epithelial nucleus if the shortest path between it and the luminal space does not contain any other nucleus; 3) average shortest distance of all nuclei to their closest luminal spaces. For architectural features, we compute directional changes in stroma and nuclei using directional filter banks. These features are utilized to create two subspaces; one for prostate images histopathologically assessed as low grade and the other for high grade. The grade associated with a subspace, which results in the minimum reconstruction error is considered as the prediction for the test image. For training, we utilized 43 regions of interest (ROI) images, which were extracted from 25 prostate whole slide images of The Cancer Genome Atlas (TCGA) database. For testing, we utilized an independent dataset of 88 ROIs extracted from 30 prostate whole slide images. The method resulted in 93.0% and 97.6% training and testing accuracies, respectively, for the spectrum of cases considered. The application of visually meaningful features provided promising levels of accuracy and consistency for grading prostate cancer.
Collapse
|
38
|
Vasiljevic J, Pribic J, Kanjer K, Jonakowski W, Sopta J, Nikolic-Vukosavljevic D, Radulovic M. Multifractal analysis of tumour microscopic images in the prediction of breast cancer chemotherapy response. Biomed Microdevices 2016; 17:93. [PMID: 26303582 DOI: 10.1007/s10544-015-9995-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Due to the individual heterogeneity, highly accurate predictors of chemotherapy response in invasive breast cancer are needed for effective chemotherapeutic management. However, predictive molecular determinants for conventional chemotherapy are only emerging and still incorporate a high degree of predictive variability. Based on such pressing need for predictive performance improvement, we explored the value of pre-therapy tumour histology image analysis to predict chemotherapy response. Fractal analysis was applied to hematoxylin/eosin stained archival tissue of diagnostic biopsies derived from 106 patients diagnosed with invasive breast cancer. The tissue was obtained prior to neoadjuvant anthracycline-based chemotherapy and patients were subsequently divided into three groups according to their actual chemotherapy response: partial pathological response (pPR), pathological complete response (pCR) and progressive/stable disease (PD/SD). It was shown that multifractal analysis of breast tumour tissue prior to chemotherapy indeed has the capacity to distinguish between histological images of the different chemotherapy responder groups with accuracies of 91.4% for pPR, 82.9% for pCR and 82.1% for PD/SD. F(α)max was identified as the most important predictive parameter. It represents the maximum of multifractal spectrum f(α), where α is the Hölder's exponent. This is the first study investigating the predictive value of multifractal analysis as a simple and cost-effective tool to predict the chemotherapy response. Improvements in chemotherapy prediction provide clinical benefit by enabling more optimal chemotherapy decisions, thus directly affecting the quality of life and survival.
Collapse
|
39
|
Waliszewski P. The Quantitative Criteria Based on the Fractal Dimensions, Entropy, and Lacunarity for the Spatial Distribution of Cancer Cell Nuclei Enable Identification of Low or High Aggressive Prostate Carcinomas. Front Physiol 2016; 7:34. [PMID: 26903883 PMCID: PMC4749702 DOI: 10.3389/fphys.2016.00034] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2015] [Accepted: 01/25/2016] [Indexed: 01/17/2023] Open
Abstract
Background: Tumor grading, PSA concentration, and stage determine a risk of prostate cancer patients with accuracy of about 70%. An approach based on the fractal geometrical model was proposed to eliminate subjectivity from the evaluation of tumor aggressiveness and to improve the prediction. This study was undertaken to validate classes of equivalence for the spatial distribution of cancer cell nuclei in a larger, independent set of prostate carcinomas. Methods: The global fractal capacity D0, information D1 and correlation D2 dimension, the local fractal dimension (LFD) and the local connected fractal dimension (LCFD), Shannon entropy H and lacunarity λ were measured using computer algorithms in digitalized images of both the reference set (n = 60) and the test set (n = 208) of prostate carcinomas. Results: Prostate carcinomas were re-stratified into seven classes of equivalence. The cut-off D0-values 1.5450, 1.5820, 1.6270, 1.6490, 1.6980, 1.7640 defined the classes from C1 to C7, respectively. The other measures but the D1 failed to define the same classes of equivalence. The pairs (D0, LFD), (D0, H), (D0, λ), (D1, LFD), (D1, H), (D1, λ) characterized the spatial distribution of cancer cell nuclei in each class. The co-application of those measures enabled the subordination of prostate carcinomas to one out of three clusters associated with different tumor aggressiveness. For D0 < 1.5820, LFD < 1.3, LCFD > 1.5, H < 0.7, and λ > 0.8, the class C1 or C2 contains low complexity low aggressive carcinomas exclusively. For D0 > 1.6980, LFD > 1.7644, LCFD > 1.7051, H > 0.9, and λ < 0.7, the class C6 or C7 contains high complexity high aggressive carcinomas. Conclusions: The cut-off D0-values defining the classes of equivalence were validated in this study. The cluster analysis suggested that the number of the subjective Gleason grades and the number of the objective classes of equivalence could be decreased from seven to three without a loss of clinically relevant information. Two novel quantitative criteria based on the complexity and the diversity measures enabled the identification of low or high aggressive prostate carcinomas and should be verified in the future multicenter, randomized studies.
Collapse
Affiliation(s)
- Przemyslaw Waliszewski
- Department of Urology, Alb Fils Kliniken, Goeppingen, Germany; The Będlewo Institute for Complexity Research, Poznań, Poland
| |
Collapse
|
40
|
Noroozi N, Zakerolhosseini A. Differential diagnosis of squamous cell carcinoma in situ using skin histopathological images. Comput Biol Med 2016; 70:23-39. [PMID: 26780250 DOI: 10.1016/j.compbiomed.2015.12.024] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2015] [Revised: 12/28/2015] [Accepted: 12/29/2015] [Indexed: 10/22/2022]
Abstract
Differential diagnosis of squamous cell carcinoma in situ is of great importance for prognosis and decision making in the disease treatment procedure. Currently, differential diagnosis is done by pathologists based on examination of the histopathological slides under the microscope, which is time consuming and prone to inter and intra observer variability. In this paper, we have proposed an automated method for differential diagnosis of SCC in situ from actinic keratosis, which is known to be a precursor of squamous cell carcinoma. The process begins with epidermis segmentation and cornified layer removal. Then, epidermis axis is specified using the paths in its skeleton and the granular layer is removed via connected components analysis. Finally, diagnosis is done based on the classification result of intensity profiles extracted from lines perpendicular to the epidermis axis. The results of the study are in agreement with the gold standards provided by expert pathologists.
Collapse
Affiliation(s)
- Navid Noroozi
- Department of Computer Engineering and Science, Shahid Beheshti University, Tehran, Iran.
| | - Ali Zakerolhosseini
- Department of Computer Engineering and Science, Shahid Beheshti University, Tehran, Iran
| |
Collapse
|
41
|
Barker J, Hoogi A, Depeursinge A, Rubin DL. Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles. Med Image Anal 2015; 30:60-71. [PMID: 26854941 DOI: 10.1016/j.media.2015.12.002] [Citation(s) in RCA: 108] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2015] [Revised: 12/02/2015] [Accepted: 12/07/2015] [Indexed: 02/07/2023]
Abstract
Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p << 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p << 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. Our approach may be useful to automatically differentiate between the two cancer subtypes.
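A compact way to see the coarse-to-fine pipeline: tile the slide, cluster tile-level features into representative groups, classify one representative per group, and vote with group sizes as weights. The sketch below is a schematic NumPy/scikit-learn analogue under assumed placeholder features and data; the elastic-net-penalized logistic model here is only a stand-in for the published Elastic Net classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 0 (offline): a tile-level classifier trained on labelled tiles from other slides.
train_X = rng.normal(size=(400, 16))
train_y = rng.integers(0, 2, size=400)               # e.g. GBM vs lower-grade glioma
tile_clf = LogisticRegression(penalty="elasticnet", solver="saga",
                              l1_ratio=0.5, max_iter=5000).fit(train_X, train_y)

# Stage 1 (surveying): features for every tile of one whole-slide image, clustered
# into representative groups.
slide_tiles = rng.normal(size=(300, 16))
groups = KMeans(n_clusters=8, n_init=10, random_state=0).fit(slide_tiles)

# Stage 2 (detailed analysis): classify the tile closest to each cluster centre and
# aggregate the decision values with cluster sizes as voting weights.
votes = 0.0
for k in range(8):
    members = np.where(groups.labels_ == k)[0]
    centre = groups.cluster_centers_[k]
    rep = members[np.argmin(np.linalg.norm(slide_tiles[members] - centre, axis=1))]
    votes += len(members) * tile_clf.decision_function(slide_tiles[rep:rep + 1])[0]
print("slide-level diagnosis:", int(votes > 0))
```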
Collapse
Affiliation(s)
- Jocelyn Barker
- Department of Medicine (Stanford Biomedical Informatics Research), Stanford University School of Medicine, CA, USA.
| | - Assaf Hoogi
- Department of Radiology, Stanford University School of Medicine, CA, USA.
| | - Adrien Depeursinge
- Department of Radiology, Stanford University School of Medicine, CA, USA; Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland.
| | - Daniel L Rubin
- Department of Radiology, Stanford University School of Medicine, CA, USA; Department of Medicine (Stanford Biomedical Informatics Research), Stanford University School of Medicine, CA, USA.
| |
Collapse
|
42
|
Hameed KS, Banumathi A, Ulaganathan G. Performance evaluation of maximal separation techniques in immunohistochemical scoring of tissue images. Micron 2015; 79:29-35. [DOI: 10.1016/j.micron.2015.07.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2015] [Revised: 07/28/2015] [Accepted: 07/28/2015] [Indexed: 10/23/2022]
|
43
|
Fernández-Carrobles MM, Bueno G, Déniz O, Salido J, García-Rojo M, González-López L. Influence of Texture and Colour in Breast TMA Classification. PLoS One 2015; 10:e0141556. [PMID: 26513238 PMCID: PMC4626403 DOI: 10.1371/journal.pone.0141556] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2015] [Accepted: 10/09/2015] [Indexed: 11/18/2022] Open
Abstract
Breast cancer diagnosis is still done by observation of biopsies under the microscope. The development of automated methods for breast TMA classification would reduce diagnostic time. This paper is a step towards the solution for this problem and shows a complete study of breast TMA classification based on colour models and texture descriptors. The TMA images were divided into four classes: i) benign stromal tissue with cellularity, ii) adipose tissue, iii) benign and benign anomalous structures, and iv) ductal and lobular carcinomas. A relevant set of features was obtained on eight different colour models from first and second order Haralick statistical descriptors obtained from the intensity image, Fourier, Wavelets, Multiresolution Gabor, M-LBP and textons descriptors. Furthermore, four types of classification experiments were performed using six different classifiers: (1) classification per colour model individually, (2) classification by combination of colour models, (3) classification by combination of colour models and descriptors, and (4) classification by combination of colour models and descriptors with a previous feature set reduction. The best result shows an average of 99.05% accuracy and 98.34% positive predictive value. These results have been obtained by means of a bagging tree classifier with combination of six colour models and the use of 1719 non-correlated (correlation threshold of 97%) textural features based on Statistical, M-LBP, Gabor and Spatial textons descriptors.
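The colour-model side of the study can be illustrated by converting each patch into several colour spaces, concatenating simple per-channel statistics, and feeding them to a bagged-tree classifier. The sketch below (scikit-image conversions, scikit-learn BaggingClassifier, random patches) is a deliberately simplified stand-in for the full 1719-feature pipeline.

```python
import numpy as np
from skimage.color import rgb2hsv, rgb2lab, rgb2ycbcr
from sklearn.ensemble import BaggingClassifier

def colour_features(rgb):
    """Mean and standard deviation of every channel in four colour models."""
    feats = []
    for img in (rgb, rgb2hsv(rgb), rgb2lab(rgb), rgb2ycbcr(rgb)):
        feats.extend(img.reshape(-1, 3).mean(axis=0))
        feats.extend(img.reshape(-1, 3).std(axis=0))
    return np.array(feats)            # 4 spaces x 3 channels x 2 stats = 24 values

rng = np.random.default_rng(0)
y = rng.integers(0, 4, size=120)      # four TMA classes, as in the abstract
X = [colour_features(rng.random((64, 64, 3))) for _ in y]   # toy TMA core patches

clf = BaggingClassifier(n_estimators=50)   # bagging over decision trees (the default base learner)
clf.fit(np.array(X), y)
print("training accuracy:", clf.score(np.array(X), y))
```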
Collapse
Affiliation(s)
| | - Gloria Bueno
- VISILAB, Universidad de Castilla-La Mancha, Ciudad Real, Spain
| | - Oscar Déniz
- VISILAB, Universidad de Castilla-La Mancha, Ciudad Real, Spain
| | - Jesús Salido
- VISILAB, Universidad de Castilla-La Mancha, Ciudad Real, Spain
| | - Marcial García-Rojo
- Department of Pathology, Hospital de Jerez de la Frontera, Jerez de la Frontera, Cádiz, Spain
| | - Lucía González-López
- Department of Pathology, Hospital General Universitario de Ciudad Real, Ciudad Real, Spain
| |
Collapse
|
44
|
Computerized measurement of melanocytic tumor depth in skin histopathological images. Micron 2015; 77:44-56. [DOI: 10.1016/j.micron.2015.05.007] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2015] [Revised: 05/10/2015] [Accepted: 05/10/2015] [Indexed: 11/21/2022]
|
45
|
Fractal analysis and the diagnostic usefulness of silver staining nucleolar organizer regions in prostate adenocarcinoma. Anal Cell Pathol (Amst) 2015; 2015:250265. [PMID: 26366372 PMCID: PMC4558419 DOI: 10.1155/2015/250265] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2015] [Accepted: 07/13/2015] [Indexed: 11/17/2022] Open
Abstract
Pathological diagnosis of prostate adenocarcinoma often requires complementary methods. On prostate biopsy tissue from 39 patients including benign nodular hyperplasia (BNH), atypical adenomatous hyperplasia (AAH), and adenocarcinomas, we have performed combined histochemical-immunohistochemical stainings for argyrophilic nucleolar organizer regions (AgNORs) and glandular basal cells. After ascertaining the pathology, we have analyzed the number, roundness, area, and fractal dimension of individual AgNORs or of their skeleton-filtered maps. We have optimized here for the first time a combination of AgNOR morphological denominators that would reflect best the differences between these pathologies. The analysis of AgNORs' roundness, averaged from large composite images, revealed clear-cut lower values in adenocarcinomas compared to benign and atypical lesions but with no differences between different Gleason scores. Fractal dimension (FD) of AgNOR silhouettes not only revealed significant lower values for global cancer images compared to AAH and BNH images, but was also able to differentiate between Gleason pattern 2 and Gleason patterns 3–5 adenocarcinomas. Plotting the frequency distribution of the FDs for different pathologies showed clear differences between all Gleason patterns and BNH. Together with existing morphological classifiers, AgNOR analysis might contribute to a faster and more reliable machine-assisted screening of prostatic adenocarcinoma, as an essential aid for pathologists.
Collapse
|
46
|
Qin W, Baran U, Wang R. Lymphatic response to depilation-induced inflammation in mouse ear assessed with label-free optical lymphangiography. Lasers Surg Med 2015. [PMID: 26224650 DOI: 10.1002/lsm.22387] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND OBJECTIVES Optical microangiography (OMAG) is a noninvasive technique capable of imaging 3D microvasculature. OMAG-based optical lymphangiography has been developed for 3D visualization of lymphatic vessels without the need for exogenous contrast agents. In this study, we utilize the optical lymphangiography to investigate dynamic changes in lymphatic response within skin tissue to depilation-induced inflammation by using mouse ear as a simple tissue model. MATERIALS AND METHODS A spectral-domain optical coherence tomography (OCT) system is used in this study to acquire volumetric images of mouse ear. The system operates under the ultrahigh-sensitive OMAG scanning protocol with five repetitions for each B frame. An improved adaptive-threshold-based method is proposed to segment lymphatic vessels from OCT microstructure images. Depilation is achieved by placing hair removal lotion on mouse ear pinna for 5 minutes. Three acquisitions are made before depilation, 3-minute and 30-minute post-depilation, respectively. RESULTS Right after the application of depilation lotion on the skin, we observe that the blind-ended sacs of initial lymphatics are mainly visible in a specific area of the normal tissue. At 5 minutes, more collecting lymphatic vessels start to form, evidenced by their valve structure that only exists in collecting lymphatic vessels. The lymphangiogenesis is almost completed within 8 minutes in the inflammatory tissue. CONCLUSIONS Our experimental results demonstrate that the OMAG-based optical lymphangiography has great potential to improve the understanding of lymphatic system in response to various physiological conditions, thus would benefit the development of effective therapeutics.
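The segmentation step mentioned above, an adaptive-threshold-based extraction of lymphatic vessels from OCT structural images, can be illustrated with scikit-image's local thresholding. This is a generic sketch of the idea on a synthetic image, not the authors' improved algorithm.

```python
import numpy as np
from skimage.filters import threshold_local

rng = np.random.default_rng(0)

# Synthetic OCT-like structural image: lymphatic vessels appear as low-signal
# (dark) regions within brighter scattering tissue.
image = 0.6 + 0.1 * rng.standard_normal((256, 256))
image[100:120, 40:220] -= 0.35                       # a dark, vessel-like band

# Adaptive threshold: each pixel is compared with a smoothed local neighbourhood,
# which tolerates slowly varying background intensity.
local_thresh = threshold_local(image, block_size=51, method="gaussian", offset=0.05)
vessel_mask = image < local_thresh                   # keep pixels darker than their surroundings
print("segmented vessel fraction:", vessel_mask.mean())
```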
Collapse
Affiliation(s)
- Wan Qin
- Department of Bioengineering, University of Washington, 3720 15th Ave NE, Seattle, Washington 98195-5061
| | - Utku Baran
- Department of Bioengineering, University of Washington, 3720 15th Ave NE, Seattle, Washington 98195-5061
| | - Ruikang Wang
- Department of Bioengineering, University of Washington, 3720 15th Ave NE, Seattle, Washington 98195-5061.,Department of Ophthalmology, University of Washington, 3720 15th Ave NE, Seattle, Washington 98195-5061
| |
Collapse
|
47
|
Yap CK, Kalaw EM, Singh M, Chong KT, Giron DM, Huang CH, Cheng L, Law YN, Lee HK. Automated image based prominent nucleoli detection. J Pathol Inform 2015; 6:39. [PMID: 26167383 PMCID: PMC4485194 DOI: 10.4103/2153-3539.159232] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2014] [Accepted: 01/07/2015] [Indexed: 11/19/2022] Open
Abstract
Introduction: Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Materials and Methods: Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. Results: The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as good as the use of a single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Conclusions: Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in the clinical settings.
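The cascade idea underlying the detector, cheap stages that reject most background windows so only promising candidates reach later stages, can be sketched as follows. The two scikit-learn stages and the random "windows" are illustrative assumptions, not the roughly 1000-cascade ensemble or the RankBoost combination used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Training windows: 1 = contains a prominent nucleolus, 0 = background tissue.
y_train = rng.integers(0, 2, size=500)
X_train = rng.normal(size=(500, 10)) + y_train[:, None] * 0.8

stage1 = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # fast, permissive stage
stage2 = SVC(probability=True).fit(X_train, y_train)                # slower, stricter stage

def cascade_detect(windows, t1=0.2, t2=0.6):
    """Run stage 1 on every window, pass survivors to stage 2, rank the confident hits."""
    survivors = np.where(stage1.predict_proba(windows)[:, 1] > t1)[0]
    scores = stage2.predict_proba(windows[survivors])[:, 1]
    order = np.argsort(-scores)
    return survivors[order][scores[order] > t2]

candidates = rng.normal(size=(200, 10))               # windows from a new image
print("detections (window indices):", cascade_detect(candidates)[:10])
```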
Collapse
Affiliation(s)
- Choon K Yap
- Imaging Informatics Division, Bioinformatics Institute, 30 Biopolis Street, #07-01, Matrix 138671, Novena, Singapore
| | - Emarene M Kalaw
- Imaging Informatics Division, Bioinformatics Institute, 30 Biopolis Street, #07-01, Matrix 138671, Novena, Singapore ; Department of Pathology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, 308433, Novena, Singapore
| | - Malay Singh
- Imaging Informatics Division, Bioinformatics Institute, 30 Biopolis Street, #07-01, Matrix 138671, Novena, Singapore ; Department of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, 117417, Novena, Singapore
| | - Kian T Chong
- Department of Urology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, 308433, Novena, Singapore
| | - Danilo M Giron
- Department of Pathology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, 308433, Novena, Singapore
| | - Chao-Hui Huang
- Imaging Informatics Division, Bioinformatics Institute, 30 Biopolis Street, #07-01, Matrix 138671, Novena, Singapore
| | - Li Cheng
- Imaging Informatics Division, Bioinformatics Institute, 30 Biopolis Street, #07-01, Matrix 138671, Novena, Singapore
| | - Yan N Law
- Imaging Informatics Division, Bioinformatics Institute, 30 Biopolis Street, #07-01, Matrix 138671, Novena, Singapore
| | - Hwee Kuan Lee
- Imaging Informatics Division, Bioinformatics Institute, 30 Biopolis Street, #07-01, Matrix 138671, Novena, Singapore
| |
Collapse
|
48
|
Waliszewski P, Wagenlehner F, Gattenlöhner S, Weidner W. On the relationship between tumor structure and complexity of the spatial distribution of cancer cell nuclei: a fractal geometrical model of prostate carcinoma. Prostate 2015; 75:399-414. [PMID: 25545623 DOI: 10.1002/pros.22926] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/11/2014] [Accepted: 09/30/2014] [Indexed: 02/06/2023]
Abstract
BACKGROUND The risk of a prostate cancer patient is defined by both objective and subjective criteria, that is, PSA concentration, Gleason score, and pTNM-stage. The subjectivity of tumor grading influences the risk assessment owing to large inter- and intra-observer variability. Pathologists propose a central prostate pathology review as a remedy for this problem; yet, the review cannot eliminate the subjectivity from the diagnostic algorithm. The spatial distribution of cancer cell nuclei changes during tumor progression. It implies changes in complexity measured by the capacity dimension D0, the information dimension D1, and the correlation dimension D2. METHODS The cornerstone of the approach is a model of prostate carcinomas composed of the circular fractals CF(4), CF(6 + 0), and CF(6 + 1). This model is both geometrical and analytical, that is, its structure is well-defined, the capacity fractal dimension D0 can be calculated for the infinite circular fractals, and the dimensions D0, D1, D2 can be computed for their finite counterparts representing the distribution of cell nuclei. The model enabled both the calibration of the software and the validation of the measurements in 124 prostate carcinomas. The ROC analysis defined the cut-off D0 values for seven classes of complexity. RESULTS The Gleason classification matched in part with the classification based on the D0 values. The mean ROC sensitivity was 81.3% and the mean ROC specificity 75.2%. Prostate carcinomas were re-stratified into seven classes of complexity according to their D0 values. This increased both the mean ROC sensitivity and the mean ROC specificity to 100%. All homogeneous Gleason patterns were subordinated to the class C1, C4, or C7. D0 = 1.5820 was the cut-off D0 value between the complexity class C2 and C3 representing low-risk cancers and intermediate-risk cancers, respectively. CONCLUSIONS The global fractal dimensions eliminate the subjectivity in the diagnostic algorithm of prostate cancer. Those complexity measures enable the objective subordination of carcinomas to the well-defined complexity classes, and define subgroups of carcinomas with very low malignant potential (complexity class C1) or at a large risk of progression (complexity class C7).
Collapse
|
49
|
Jitaree S, Phinyomark A, Boonyaphiphat P, Phukpattaranont P. Cell type classifiers for breast cancer microscopic images based on fractal dimension texture analysis of image color layers. SCANNING 2015; 37:145-151. [PMID: 25689353 DOI: 10.1002/sca.21191] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2014] [Revised: 12/31/2014] [Accepted: 01/09/2015] [Indexed: 06/04/2023]
Abstract
Having a classifier of cell types in a breast cancer microscopic image (BCMI), obtained with immunohistochemical staining, is required as part of a computer-aided system that counts the cancer cells in such BCMI. Such quantitation by cell counting is very useful in supporting decisions and planning of the medical treatment of breast cancer. This study proposes and evaluates features based on texture analysis by fractal dimension (FD), for the classification of histological structures in a BCMI into either cancer cells or non-cancer cells. The cancer cells include positive cells (PC) and negative cells (NC), while the normal cells comprise stromal cells (SC) and lymphocyte cells (LC). The FD feature values were calculated with the box-counting method from binarized images, obtained by automatic thresholding with Otsu's method of the grayscale images for various color channels. A total of 12 color channels from four color spaces (RGB, CIE-L*a*b*, HSV, and YCbCr) were investigated, and the FD feature values from them were used with decision tree classifiers. The BCMI data consisted of 1,400, 1,200, and 800 images with pixel resolutions 128 × 128, 192 × 192, and 256 × 256, respectively. The best cross-validated classification accuracy was 93.87%, for distinguishing between cancer and non-cancer cells, obtained using the Cr color channel with window size 256. The results indicate that the proposed algorithm, based on fractal dimension features extracted from a color channel, performs well in the automatic classification of the histology in a BCMI. This might support accurate automatic cell counting in a computer-assisted system for breast cancer diagnosis.
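For concreteness, the core computation this abstract relies on, box-counting the fractal dimension of an Otsu-binarized colour channel, can be written compactly as below. The synthetic patch stands in for a stained microscopy image and the channel choice (Cr) follows the abstract; everything else is a generic sketch rather than the authors' implementation.

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.filters import threshold_otsu

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a binary image:
    count occupied boxes at several box sizes and fit log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        h = binary.shape[0] // s * s
        w = binary.shape[1] // s * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
patch = rng.random((256, 256, 3))                 # stand-in for an IHC-stained image patch
cr_channel = rgb2ycbcr(patch)[:, :, 2]            # Cr channel, as in the abstract
binary = cr_channel > threshold_otsu(cr_channel)  # automatic Otsu binarization
print("estimated fractal dimension:", box_counting_dimension(binary))
```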
Collapse
Affiliation(s)
- Sirinapa Jitaree
- Department of Electrical Engineering, Faculty of Engineering, Prince of Songkla University, Hat Yai, Songkhla, Thailand
| | | | | | | |
Collapse
|
50
|
Zhang X, Liu W, Dundar M, Badve S, Zhang S. Towards large-scale histopathological image analysis: hashing-based image retrieval. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:496-506. [PMID: 25314696 DOI: 10.1109/tmi.2014.2361481] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Automatic analysis of histopathological images has been widely utilized leveraging computational image-processing methods and modern machine learning techniques. Both computer-aided diagnosis (CAD) and content-based image-retrieval (CBIR) systems have been successfully developed for diagnosis, disease detection, and decision support in this area. Recently, with the ever-increasing amount of annotated medical data, large-scale and data-driven methods have emerged to offer a promise of bridging the semantic gap between images and diagnostic information. In this paper, we focus on developing scalable image-retrieval techniques to cope intelligently with massive histopathological images. Specifically, we present a supervised kernel hashing technique which leverages a small amount of supervised information in learning to compress a 10 000-dimensional image feature vector into only tens of binary bits with the informative signatures preserved. These binary codes are then indexed into a hash table that enables real-time retrieval of images in a large database. Critically, the supervised information is employed to bridge the semantic gap between low-level image features and high-level diagnostic information. We build a scalable image-retrieval framework based on the supervised hashing technique and validate its performance on several thousand histopathological images acquired from breast microscopic tissues. Extensive evaluations are carried out in terms of image classification (i.e., benign versus actionable categorization) and retrieval tests. Our framework achieves about 88.1% classification accuracy as well as promising time efficiency. For example, the framework can execute around 800 queries in only 0.01 s, comparing favorably with other commonly used dimensionality reduction and feature selection methods.
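The retrieval mechanism described here, compressing high-dimensional image features into short binary codes matched by Hamming distance, can be illustrated with a plain random-projection hash. This is deliberately an unsupervised stand-in (sign of random projections) rather than the supervised kernel hashing the paper proposes, with synthetic features in place of real histopathology descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_database, n_dims, n_bits = 2000, 10000, 32

# Database of high-dimensional image features and one query (synthetic stand-ins).
database = rng.normal(size=(n_database, n_dims)).astype(np.float32)
query = database[123] + 0.05 * rng.normal(size=n_dims).astype(np.float32)

# Random-projection hashing: the sign of a handful of projections gives a short binary code.
projections = rng.normal(size=(n_dims, n_bits)).astype(np.float32)
db_codes = (database @ projections) > 0
query_code = (query @ projections) > 0

# Retrieval: rank database entries by Hamming distance to the query code.
hamming = np.count_nonzero(db_codes != query_code, axis=1)
top5 = np.argsort(hamming)[:5]
print("nearest items:", top5, "distances:", hamming[top5])
```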
Collapse
|