51. Casas L, Navab N, Demirci S. Patient 3D body pose estimation from pressure imaging. Int J Comput Assist Radiol Surg 2018; 14:517-524. [PMID: 30552647] [DOI: 10.1007/s11548-018-1895-3]
Abstract
PURPOSE In-bed motion monitoring has become of great interest for a variety of clinical applications. Image-based approaches could be seen as a natural non-intrusive approach for this purpose; however, video devices require special, challenging setups in a clinical environment. We propose to estimate the patient's posture from pressure sensor data mapped to images. METHODS We introduce a deep learning method to retrieve human poses from pressure sensor data. In addition, we present a second approach based on hashing-based content retrieval. RESULTS Our results show good performance with both presented methods, even in poses where the subject has minimal contact with the sensors. Moreover, we show that deep learning approaches could be used in this medical application despite the limited amount of available training data. Our ConvNet approach provides an overall posture even when the patient has less contact with the mattress surface. In addition, we show that both methods could be used in real-time patient monitoring. CONCLUSIONS We have provided two methods to successfully perform real-time in-bed patient pose estimation, which are robust to different patient sizes and activities. Furthermore, they can provide an overall posture even when the patient has less contact with the mattress surface.
Affiliation(s)
- Leslie Casas, Nassir Navab, Stefanie Demirci: Computer Aided Medical Procedures, Technische Universität München, Boltzmannstr 3, 85748 Garching, Germany

52. Gu Y, Yang J. Densely-Connected Multi-Magnification Hashing for Histopathological Image Retrieval. IEEE J Biomed Health Inform 2018; 23:1683-1691. [PMID: 30475737] [DOI: 10.1109/jbhi.2018.2882647]
Abstract
Content-based medical image retrieval is an important computer-aided diagnosis technique providing the clinicians with interpretative references based on visual similarity. In this paper, we focus on the task of histopathological image retrieval for breast cancer diagnosis. The densely-connected multi-magnification hashing (DCMMH) framework is proposed to generate discriminative binary codes by exploiting histopathological images with multiple magnification factors. The low-magnification images are boosted by the accumulated similarity based on local patches, which also regularizes the feature learning of high-magnification images. In order to fully utilize the information across different magnification levels, a densely-connected architecture is finally deployed for high-/low-magnification dataset pairs. Experiments on the BreakHis dataset demonstrate that DCMMH outperforms previous hashing methods on histopathological image retrieval.

53. 3D local ternary co-occurrence patterns for natural, texture, face and bio medical image retrieval. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.06.027]

54. Niazi MKK, Senaras C, Pennell M, Arole V, Tozbikian G, Gurcan MN. Relationship between the Ki67 index and its area based approximation in breast cancer. BMC Cancer 2018; 18:867. [PMID: 30176814] [PMCID: PMC6122570] [DOI: 10.1186/s12885-018-4735-5]
Abstract
BACKGROUND The Ki67 Index has been extensively studied as a prognostic biomarker in breast cancer. However, its clinical adoption is largely hampered by the lack of a standardized method to assess Ki67, which limits inter-laboratory reproducibility. It is important to standardize the computation of the Ki67 Index before it can be effectively used in clinical practice. METHOD In this study, we develop a systematic approach towards standardization of the Ki67 Index. We first create the ground truth consisting of tumor positive and tumor negative nuclei by registering adjacent breast tissue sections stained with Ki67 and H&E. The registration is followed by segmentation of positive and negative nuclei within tumor regions from Ki67 images. The true Ki67 Index is then approximated with a linear model of the area of positive nuclei to the total area of tumor nuclei. RESULTS When tested on 75 images of Ki67 stained breast cancer biopsies, the proposed method resulted in an average root mean square error of 3.34. In comparison, an expert pathologist had an average root mean square error of 9.98 and an existing automated approach produced an average root mean square error of 5.64. CONCLUSIONS We show that it is possible to approximate the true Ki67 Index accurately without detecting individual nuclei, and we also statistically demonstrate the weaknesses of commonly adopted approaches that use both tumor and non-tumor regions together while compensating for the latter with higher order approximations.
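
The area-based approximation described in this abstract reduces to a ratio of segmented areas plus a fitted linear correction. Below is a minimal sketch of that idea, assuming binary masks for Ki67-positive and total tumor nuclei are already available; the variable names and toy numbers are hypothetical and this is not the authors' code.

```python
import numpy as np

def area_ratio(positive_mask: np.ndarray, tumor_mask: np.ndarray) -> float:
    """Fraction of tumor-nucleus area that is Ki67-positive."""
    return positive_mask.sum() / max(tumor_mask.sum(), 1)

# Fit a linear model mapping the area ratio to a reference (count-based) Ki67 index,
# using a few annotated training images (toy numbers below).
ratios = np.array([0.12, 0.25, 0.40, 0.55])       # area ratios from training images
true_index = np.array([0.10, 0.22, 0.38, 0.52])   # reference Ki67 indices
a, b = np.polyfit(ratios, true_index, deg=1)      # least-squares line: index ~ a*ratio + b

new_ratio = 0.30
print("approximate Ki67 index:", a * new_ratio + b)
```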
Affiliation(s)
- Caglar Senaras, Metin N. Gurcan: Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, USA
- Michael Pennell: Division of Biostatistics, College of Public Health, The Ohio State University, Columbus, USA
- Vidya Arole: Department of Biomedical Informatics, The Ohio State University, Columbus, USA
- Gary Tozbikian: Department of Pathology, The Ohio State University, Columbus, USA

55. A Novel Liver Image Classification Method Using Perceptual Hash-Based Convolutional Neural Network. Arabian Journal for Science and Engineering 2018. [DOI: 10.1007/s13369-018-3454-1]

56. Mining Big Neuron Morphological Data. Comput Intell Neurosci 2018; 2018:8234734. [PMID: 30034462] [PMCID: PMC6035829] [DOI: 10.1155/2018/8234734]
Abstract
The advent of automatic tracing and reconstruction technology has led to a surge in the amount of neuron 3D reconstruction data and, consequently, in neuromorphology research. However, the lack of a machine-driven annotation schema to automatically detect neuron types based on their morphology still hinders the development of this branch of science. Neuromorphology is important because of the interplay between the shape and functionality of neurons and the far-reaching impact on the diagnostics and therapeutics of neurological disorders. This survey paper provides a comprehensive review of the field of automatic neuron classification and presents the existing challenges, methods, tools, and future directions for automatic neuromorphology analytics. We summarize the major automatic techniques applicable in the field and propose a systematic data processing pipeline for automatic neuron classification, covering data capturing, preprocessing, analysis, classification, and retrieval. Various machine learning techniques and algorithms are illustrated and compared on the same dataset to facilitate ongoing research in the field.

57. Hu B, Tang Y, Chang EIC, Fan Y, Lai M, Xu Y. Unsupervised Learning for Cell-Level Visual Representation in Histopathology Images With Generative Adversarial Networks. IEEE J Biomed Health Inform 2018; 23:1316-1328. [PMID: 29994411] [DOI: 10.1109/jbhi.2018.2852639]
Abstract
The visual attributes of cells, such as the nuclear morphology and chromatin openness, are critical for histopathology image analysis. By learning cell-level visual representation, we can obtain a rich mix of features that are highly reusable for various tasks, such as cell-level classification, nuclei segmentation, and cell counting. In this paper, we propose a unified generative adversarial networks architecture with a new formulation of loss to perform robust cell-level visual representation learning in an unsupervised setting. Our model is not only label-free and easily trained but also capable of cell-level unsupervised classification with interpretable visualization, which achieves promising results in the unsupervised classification of bone marrow cellular components. Based on the proposed cell-level visual representation learning, we further develop a pipeline that exploits the varieties of cellular elements to perform histopathology image classification, the advantages of which are demonstrated on bone marrow datasets.

58. Zheng Y, Jiang Z, Zhang H, Xie F, Ma Y, Shi H, Zhao Y. Size-Scalable Content-Based Histopathological Image Retrieval From Database That Consists of WSIs. IEEE J Biomed Health Inform 2018; 22:1278-1287. [DOI: 10.1109/jbhi.2017.2723014]

59. Zheng Y, Jiang Z, Zhang H, Xie F, Ma Y, Shi H, Zhao Y. Histopathological Whole Slide Image Analysis Using Context-Based CBIR. IEEE Trans Med Imaging 2018; 37:1641-1652. [PMID: 29969415] [DOI: 10.1109/tmi.2018.2796130]
Abstract
Histopathological image classification (HIC) and content-based histopathological image retrieval (CBHIR) are two promising applications for the histopathological whole slide image (WSI) analysis. HIC can efficiently predict the type of lesion involved in a histopathological image. In general, HIC can aid pathologists in locating high-risk cancer regions from a WSI by providing a cancerous probability map for the WSI. In contrast, CBHIR was developed to allow searches for regions with similar content for a region of interest (ROI) from a database consisting of historical cases. Sets of cases with similar content are accessible to pathologists, which can provide more valuable references for diagnosis. A drawback of the recent CBHIR framework is that a query ROI needs to be manually selected from a WSI. An automatic CBHIR approach for a WSI-wise analysis needs to be developed. In this paper, we propose a novel aided-diagnosis framework of breast cancer using whole slide images, which shares the advantages of both HIC and CBHIR. In our framework, CBHIR is automatically processed throughout the WSI, based on which a probability map regarding the malignancy of breast tumors is calculated. Through the probability map, the malignant regions in WSIs can be easily recognized. Furthermore, the retrieval results corresponding to each sub-region of the WSIs are recorded during the automatic analysis and are available to pathologists during their diagnosis. Our method was validated on fully annotated WSI data sets of breast tumors. The experimental results certify the effectiveness of the proposed method.
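
One way to picture the framework described above: each sub-region of the WSI is used as a retrieval query, and the labels of the returned historical cases are aggregated into a malignancy probability for that region. The sketch below is a schematic stand-in under that assumption; the feature extractor, database, and distance metric are all hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

def malignancy_probability_map(region_features, db_features, db_labels, k=10):
    """For each region feature vector, retrieve the k nearest database entries
    (plain Euclidean distance here for simplicity) and use the fraction of
    malignant neighbours as that region's malignancy probability."""
    probs = []
    for f in region_features:
        d = np.linalg.norm(db_features - f, axis=1)
        nearest = np.argsort(d)[:k]
        probs.append(db_labels[nearest].mean())   # labels: 1 = malignant, 0 = benign
    return np.array(probs)

# Toy example: 5 query regions, database of 100 labelled regions with 32-d features.
rng = np.random.default_rng(0)
queries = rng.normal(size=(5, 32))
database = rng.normal(size=(100, 32))
labels = rng.integers(0, 2, size=100)
print(malignancy_probability_map(queries, database, labels))
```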

60. Ma Y, Jiang Z, Zhang H, Xie F, Zheng Y, Shi H, Zhao Y, Shi J. Generating region proposals for histopathological whole slide image retrieval. Comput Methods Programs Biomed 2018; 159:1-10. [PMID: 29650303] [DOI: 10.1016/j.cmpb.2018.02.020]
Abstract
BACKGROUND AND OBJECTIVE Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is significant and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, it is an important and challenging task to generate ROIs from WSIs and retrieve images with few labels. METHODS This paper presents a novel unsupervised region proposing method for histopathological WSIs based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region merging and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. RESULTS The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that, for one WSI, our region proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reaches a precision of 91% with only 10% of images labeled. CONCLUSIONS The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSIs. The region proposals can also serve as training samples to train machine-learning models for image retrieval. The proposed hashing method can achieve fast and precise image retrieval with a small number of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided-diagnosis systems.
Affiliation(s)
- Yibing Ma, Zhiguo Jiang, Haopeng Zhang, Fengying Xie, Yushan Zheng: Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China
- Huaqiang Shi: Motic (Xiamen) Medical Diagnostic Systems Co. Ltd., Xiamen 361101, China; People's Liberation Army Air Force General Hospital, Beijing 100142, China
- Yu Zhao: Motic (Xiamen) Medical Diagnostic Systems Co. Ltd., Xiamen 361101, China
- Jun Shi: School of Software, Hefei University of Technology, Hefei 230601, China

61. Heinrich MP, Blendowski M, Oktay O. TernaryNet: faster deep model inference without GPUs for medical 3D segmentation using sparse and binary convolutions. Int J Comput Assist Radiol Surg 2018; 13:1311-1320. [PMID: 29850978] [DOI: 10.1007/s11548-018-1797-4]
Abstract
PURPOSE Deep convolutional neural networks (DCNN) are currently ubiquitous in medical imaging. While their versatility and high-quality results for common image analysis tasks including segmentation, localisation and prediction are astonishing, the large representational power comes at the cost of highly demanding computational effort. This limits their practical applications for image-guided interventions and diagnostic (point-of-care) support using mobile devices without graphics processing units (GPU). METHODS We propose a new scheme that approximates both trainable weights and neural activations in deep networks by ternary values and tackles the open question of backpropagation when dealing with non-differentiable functions. Our solution enables the removal of the expensive floating-point matrix multiplications throughout any convolutional neural network and replaces them by energy- and time-preserving binary operators and population counts. RESULTS We evaluate our approach for the segmentation of the pancreas in CT. Here, our ternary approximation within a fully convolutional network leads to more than 90% memory reductions and high accuracy (without any post-processing) with a Dice overlap of 71.0% that comes close to the one obtained when using networks with high-precision weights and activations. We further provide a concept for sub-second inference without GPUs and demonstrate significant improvements in comparison with binary quantisation and without our proposed ternary hyperbolic tangent continuation. CONCLUSIONS We present a key enabling technique for highly efficient DCNN inference without GPUs that will help to bring the advances of deep learning to practical clinical applications. It also has great promise for improving accuracies in large-scale medical data retrieval.
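
The core trick referenced in this abstract is that ternary weights and activations turn a dot product into bit logic plus population counts. The sketch below illustrates ternary quantisation and a multiplication-free dot product in that spirit; the threshold and function names are illustrative assumptions, not the TernaryNet recipe.

```python
import numpy as np

def ternarize(x: np.ndarray, delta: float) -> np.ndarray:
    """Map values to {-1, 0, +1}: zero inside [-delta, delta], sign outside."""
    t = np.zeros_like(x, dtype=np.int8)
    t[x > delta] = 1
    t[x < -delta] = -1
    return t

def ternary_dot(a: np.ndarray, b: np.ndarray) -> int:
    """Dot product of two ternary vectors using only comparisons and counts
    (the kind of operation that maps to XOR/popcount on bit-packed hardware)."""
    both_nonzero = (a != 0) & (b != 0)
    agree = both_nonzero & (a == b)       # each agreement contributes +1
    disagree = both_nonzero & (a != b)    # each disagreement contributes -1
    return int(agree.sum()) - int(disagree.sum())

rng = np.random.default_rng(0)
w, x = rng.normal(size=256), rng.normal(size=256)
wt, xt = ternarize(w, 0.5), ternarize(x, 0.5)
print("ternary dot:", ternary_dot(wt, xt), " float dot:", round(float(w @ x), 2))
```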
Affiliation(s)
- Mattias P Heinrich, Max Blendowski: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Ozan Oktay: Biomedical Image Analysis Group, Department of Computing, Imperial College London, London SW7 2AZ, UK

62. Muramatsu C. Overview on subjective similarity of images for content-based medical image retrieval. Radiol Phys Technol 2018; 11:109-124. [PMID: 29740749] [DOI: 10.1007/s12194-018-0461-6]
Abstract
Computer-aided diagnosis systems for assisting the classification of various diseases have the potential to improve radiologists' diagnostic accuracy and efficiency, as reported in several studies. Conventional systems generally provide the probabilities of disease types in terms of numerical values, a method that may not be efficient for radiologists who are trained by reading a large number of images. Presentation of reference images similar to those of a new case being diagnosed can supplement the probability outputs based on computerized analysis as an intuitive guide, and it can assist radiologists in their diagnosis, reporting, and treatment planning. Many studies on content-based medical image retrieval have been reported. For retrieval of perceptually similar and diagnostically relevant images, incorporation of perceptual similarity data from radiologists has been suggested. In this paper, studies on image retrieval methods are reviewed with a special focus on quantification, utilization, and evaluation of subjective similarities between pairs of images.
Affiliation(s)
- Chisako Muramatsu: Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu 501-1194, Japan

63. Komura D, Ishikawa S. Machine Learning Methods for Histopathological Image Analysis. Comput Struct Biotechnol J 2018; 16:34-42. [PMID: 30275936] [PMCID: PMC6158771] [DOI: 10.1016/j.csbj.2018.01.001]
Abstract
Abundant accumulation of digital histopathological images has led to the increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathological images and related tasks have some issues to be considered. In this mini-review, we introduce the application of digital pathological image analysis using machine learning algorithms, address some problems specific to such analysis, and propose possible solutions.
Affiliation(s)
- Daisuke Komura: Department of Genomic Pathology, Medical Research Institute, Tokyo Medical and Dental University, Tokyo, Japan

64. Kalantari A, Kamsin A, Shamshirband S, Gani A, Alinejad-Rokny H, Chronopoulos AT. Computational intelligence approaches for classification of medical data: State-of-the-art, future challenges and research directions. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.01.126]

65. Shi X, Xing F, Xu K, Xie Y, Su H, Yang L. Supervised graph hashing for histopathology image retrieval and classification. Med Image Anal 2017; 42:117-128. [DOI: 10.1016/j.media.2017.07.009]

66. Li Z, Zhang X, Müller H, Zhang S. Large-scale retrieval for medical image analytics: A comprehensive review. Med Image Anal 2017; 43:66-84. [PMID: 29031831] [DOI: 10.1016/j.media.2017.09.007]
Abstract
Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics on a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis.
Affiliation(s)
- Zhongyu Li, Xiaofan Zhang, Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Henning Müller: Information Systems Institute, HES-SO Valais, Sierre, Switzerland

67. Lan R, Zhou Y. Medical Image Retrieval via Histogram of Compressed Scattering Coefficients. IEEE J Biomed Health Inform 2017; 21:1338-1346. [DOI: 10.1109/jbhi.2016.2623840]

68. Shi J, Wu J, Li Y, Zhang Q, Ying S. Histopathological Image Classification With Color Pattern Random Binary Hashing-Based PCANet and Matrix-Form Classifier. IEEE J Biomed Health Inform 2017; 21:1327-1337. [DOI: 10.1109/jbhi.2016.2602823]

69. Ahmad J, Sajjad M, Mehmood I, Baik SW. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs. PLoS One 2017; 12:e0181707. [PMID: 28771497] [PMCID: PMC5542646] [DOI: 10.1371/journal.pone.0181707]
Abstract
Medical image collections contain a wealth of information which can assist radiologists and medical experts in diagnosis and disease detection for making well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases from the ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest like tumors, fractures, and calcified spots in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study revealed that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes for allowing efficient retrieval in large scale image collections. Comprehensive experimental evaluations on the radiology images dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches.
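
The last two steps of the pipeline summarised above, fusing global and salient-region neural codes and then hashing them into short binary codes, can be illustrated compactly. The following is a hedged sketch assuming the CNN features have already been extracted; a random-projection scheme stands in for the locality sensitive hashing step, and all dimensions and names are arbitrary assumptions rather than the authors' settings.

```python
import numpy as np

def fuse_codes(global_code: np.ndarray, salient_code: np.ndarray) -> np.ndarray:
    """Concatenate L2-normalised global and salient-region descriptors (one simple fusion)."""
    g = global_code / (np.linalg.norm(global_code) + 1e-12)
    s = salient_code / (np.linalg.norm(salient_code) + 1e-12)
    return np.concatenate([g, s])

def lsh_binary_code(feature: np.ndarray, projections: np.ndarray) -> np.ndarray:
    """Sign of random projections gives a short binary code (classic LSH for cosine similarity)."""
    return (projections @ feature > 0).astype(np.uint8)

rng = np.random.default_rng(0)
proj = rng.normal(size=(64, 8192))            # 64-bit codes from a 2 x 4096-d fused descriptor
g, s = rng.normal(size=4096), rng.normal(size=4096)
code = lsh_binary_code(fuse_codes(g, s), proj)
print(code[:16])
```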
Affiliation(s)
- Jamil Ahmad, Sung Wook Baik: College of Software and Convergence Technology, Department of Software, Sejong University, Seoul, Republic of Korea
- Muhammad Sajjad: Digital Image Processing Lab, Department of Computer Science, Islamia College, Peshawar, Pakistan
- Irfan Mehmood: Department of Computer Science and Engineering, Sejong University, Seoul, Republic of Korea

70. Ma Y, Jiang Z, Zhang H, Xie F, Zheng Y, Shi H, Zhao Y. Breast Histopathological Image Retrieval Based on Latent Dirichlet Allocation. IEEE J Biomed Health Inform 2017; 21:1114-1123. [DOI: 10.1109/jbhi.2016.2611615]

71. Tan C, Li K, Yan Z, Yi J, Wu P, Yu HJ, Engelke K, Metaxas DN. Towards large-scale MR thigh image analysis via an integrated quantification framework. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.108]

72. Pan X, Li L, Yang H, Liu Z, Yang J, Zhao L, Fan Y. Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.08.103]

73. Xu Y, Shen F, Xu X, Gao L, Wang Y, Tan X. Large-scale image retrieval with supervised sparse hashing. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.109]

74. Wan T, Cao J, Chen J, Qin Z. Automated grading of breast cancer histopathology using cascaded ensemble with combination of multi-level image features. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.084]

75. Li Z, Metaxas DN, Lu A, Zhang S. Interactive Exploration for Continuously Expanding Neuron Databases. Methods 2017; 115:100-109. [DOI: 10.1016/j.ymeth.2017.02.005]

76. Conjeti S, Katouzian A, Kazi A, Mesbah S, Beymer D, Syeda-Mahmood TF, Navab N. Metric hashing forests. Med Image Anal 2016; 34:13-29. [DOI: 10.1016/j.media.2016.05.010]

77. Liu X, Huang L, Deng C, Lang B, Tao D. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search. IEEE Trans Image Process 2016; 25:4514-4524. [PMID: 27448359] [DOI: 10.1109/tip.2016.2593344]
Abstract
Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degenerates the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, while the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost the search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods.
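
The bitwise part of the idea is that not all hash bits are equally reliable for a given query, so plain Hamming distance is replaced by a weighted sum of bit disagreements. Here is a toy sketch of query-adaptive weighted Hamming ranking; the weighting rule (random weights standing in for per-query bit quality) is an illustrative assumption, not the paper's formulation.

```python
import numpy as np

def weighted_hamming_rank(query_bits, db_bits, bit_weights):
    """Rank database items by weighted Hamming distance to the query.
    In a real system bit_weights would be derived per query from hash-function quality."""
    disagreements = (db_bits != query_bits).astype(float)   # (n_items, n_bits)
    distances = disagreements @ bit_weights                  # weighted count of differing bits
    return np.argsort(distances), distances

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 32))
q = rng.integers(0, 2, size=32)
weights = rng.uniform(0.5, 1.5, size=32)                     # stand-in for query-adaptive weights
order, dist = weighted_hamming_rank(q, db, weights)
print("top-5 items:", order[:5], "distances:", np.round(dist[order[:5]], 2))
```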

78. Zhang X, Dou H, Ju T, Xu J, Zhang S. Fusing Heterogeneous Features From Stacked Sparse Autoencoder for Histopathological Image Analysis. IEEE J Biomed Health Inform 2016; 20:1377-1383. [DOI: 10.1109/jbhi.2015.2461671]

79. Jiang M, Zhang S, Huang J, Yang L, Metaxas DN. Scalable histopathological image analysis via supervised hashing with multiple features. Med Image Anal 2016; 34:3-12. [PMID: 27521299] [DOI: 10.1016/j.media.2016.07.011]
Abstract
Histopathology is crucial to diagnosis of cancer, yet its interpretation is tedious and challenging. To facilitate this procedure, content-based image retrieval methods have been developed as case-based reasoning tools. Especially, with the rapid growth of digital histopathology, hashing-based retrieval approaches are gaining popularity due to their exceptional efficiency and scalability. Nevertheless, few hashing-based histopathological image analysis methods perform feature fusion, despite the fact that it is a common practice to improve image retrieval performance. In response, we exploit joint kernel-based supervised hashing (JKSH) to integrate complementary features in a hashing framework. Specifically, hashing functions are designed based on linearly combined kernel functions associated with individual features. Supervised information is incorporated to bridge the semantic gap between low-level features and high-level diagnosis. An alternating optimization method is utilized to learn the kernel combination and hashing functions. The obtained hashing functions compress multiple high-dimensional features into tens of binary bits, enabling fast retrieval from a large database. Our approach is extensively validated on 3121 breast-tissue histopathological images by distinguishing between actionable and benign cases. It achieves 88.1% retrieval precision and 91.3% classification accuracy within 16.5 ms query time, comparing favorably with traditional methods.
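
The efficiency claim in this abstract rests on the final retrieval step: once every image is a few tens of bits, search reduces to XOR plus a population count. Below is a generic sketch of that step only; it illustrates hashing-based retrieval in general, not the JKSH learning procedure, and the code size and data are arbitrary.

```python
import numpy as np

def pack_codes(bits: np.ndarray) -> np.ndarray:
    """Pack an (n, 64) array of 0/1 values into one uint64 code per item."""
    weights = np.left_shift(np.uint64(1), np.arange(64, dtype=np.uint64))
    return (bits.astype(np.uint64) * weights).sum(axis=1)

def hamming_search(query: np.uint64, db: np.ndarray, top_k: int = 5) -> np.ndarray:
    """XOR followed by a population count gives the Hamming distance to every code."""
    xor = np.bitwise_xor(db, query)
    dist = np.array([bin(int(v)).count("1") for v in xor])
    return np.argsort(dist)[:top_k]

rng = np.random.default_rng(0)
db_bits = rng.integers(0, 2, size=(20000, 64))
packed = pack_codes(db_bits)
print("nearest items to item 42:", hamming_search(packed[42], packed))
```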
Affiliation(s)
- Menglin Jiang, Dimitris N Metaxas: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Junzhou Huang: Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA
- Lin Yang: Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA

80. Zhang S, Metaxas D. Large-scale medical image analytics: Recent methodologies, applications and future directions. Med Image Anal 2016; 33:98-101. [PMID: 27503077] [DOI: 10.1016/j.media.2016.06.010]
Abstract
Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be significantly increased, to a point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion.
Affiliation(s)
- Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Dimitris Metaxas: Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA

81. Sparks R, Madabhushi A. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images. Sci Rep 2016; 6:27306. [PMID: 27264985] [PMCID: PMC4893667] [DOI: 10.1038/srep27306]
Abstract
Content-based image retrieval (CBIR) retrieves the database images most similar to the query image by (1) extracting quantitative image descriptors and (2) calculating similarity between database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low dimensional representation of the high dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, in the form of partial class labels, into an ML scheme such that the low dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which enables discrimination of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision-recall curve (AUPRC) of 0.53 ± 0.03, whereas CBIR with Principal Component Analysis (PCA) used to learn a low dimensional space yielded an AUPRC of 0.44 ± 0.01.
Affiliation(s)
- Rachel Sparks: Centre for Medical Image Computing, University College London, London, UK
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA

82. An Improved CAD System for Breast Cancer Diagnosis Based on Generalized Pseudo-Zernike Moment and Ada-DEWNN Classifier. J Med Syst 2016; 40:105. [PMID: 26892455] [DOI: 10.1007/s10916-016-0454-0]
Abstract
In this paper, a novel framework of computer-aided diagnosis (CAD) system has been presented for the classification of benign/malignant breast tissues. The properties of the generalized pseudo-Zernike moments (GPZM) and pseudo-Zernike moments (PZM) are utilized as suitable texture descriptors of the suspicious region in the mammogram. An improved classifier, the adaptive differential evolution wavelet neural network (Ada-DEWNN), is proposed to improve the classification accuracy of the CAD system. The efficiency of the proposed system is tested on mammograms from the Mammographic Image Analysis Society (mini-MIAS) database using leave-one-out cross validation, as well as on mammograms from the Digital Database for Screening Mammography (DDSM) database using 10-fold cross validation. The proposed method on the MIAS database attains a fair accuracy of 0.8938 and an AUC of 0.935 (95% CI = 0.8213-0.9831). The proposed method is also tested for in-plane rotation and found to be highly rotation invariant. In addition, the proposed classifier is tested and compared with some well-known existing methods using receiver operating characteristic (ROC) analysis on the DDSM database. It is concluded that the proposed classifier has a better area under the curve (AUC) of 0.9289 and is highly precise, with a 95% CI of 0.8216 to 0.9834 and a standard error of 0.0384.

83. Xu J, Luo X, Wang G, Gilmore H, Madabhushi A. A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomputing 2016; 191:214-223. [PMID: 28154470] [PMCID: PMC5283391] [DOI: 10.1016/j.neucom.2016.01.034]
Abstract
Epithelial (EP) and stromal (ST) regions are two types of tissue in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches are based on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), for classifying the two regions. Compared to handcrafted feature based approaches, which involve task-dependent representations, a DCNN is an end-to-end feature extractor that may be directly learned from the raw pixel intensity values of EP and ST tissues in a data driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two types of tissues. In this work we compare DCNN based models with three handcrafted feature extraction based approaches on two different datasets, which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistological (IHC) stained images of colorectal cancer, respectively. The DCNN based feature learning approach was shown to have an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained (NKI and VGH) and IHC stained data, respectively. Our DCNN based approach was shown to outperform three handcrafted feature extraction based approaches in terms of the classification of EP and ST regions.
Affiliation(s)
- Jun Xu, Xiaofei Luo, Guanhao Wang: Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Hannah Gilmore: Institute for Pathology, University Hospitals Case Medical Center, Case Western Reserve University, OH 44106-7207, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, OH 44106, USA

84. Xing F, Yang L. Robust Nucleus/Cell Detection and Segmentation in Digital Pathology and Microscopy Images: A Comprehensive Review. IEEE Rev Biomed Eng 2016; 9:234-263. [PMID: 26742143] [PMCID: PMC5233461] [DOI: 10.1109/rbme.2016.2515127]
Abstract
Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to interobserver variations. Computer-aided methods, which can significantly improve the objectivity and reproducibility, have attracted a great deal of interest in recent literature. Among the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role to describe the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopies. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation.

85. Zhang X, Xing F, Su H, Yang L, Zhang S. High-throughput histopathological image analysis via robust cell segmentation and hashing. Med Image Anal 2015; 26:306-315. [PMID: 26599156] [PMCID: PMC4679540] [DOI: 10.1016/j.media.2015.10.005]
Abstract
Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing such cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells.
Affiliation(s)
- Xiaofan Zhang, Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Fuyong Xing: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
- Hai Su: Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Lin Yang: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA

86.

87. Gao Y, Adeli-M E, Kim M, Giannakopoulos P, Haller S, Shen D. Medical Image Retrieval Using Multi-graph Learning for MCI Diagnostic Assistance. Med Image Comput Comput Assist Interv 2015; 9350:86-93. [PMID: 27054200] [DOI: 10.1007/978-3-319-24571-3_11]
Abstract
Alzheimer's disease (AD) is an irreversible neurodegenerative disorder that can lead to progressive memory loss and cognition impairment. Therefore, diagnosing AD during the risk stage, a.k.a. Mild Cognitive Impairment (MCI), has attracted ever-increasing interest. Besides the automated diagnosis of MCI, it is important to provide physicians with related MCI cases with visually similar imaging data for case-based reasoning or evidence-based medicine in clinical practices. To this end, we propose a multi-graph learning based medical image retrieval technique for MCI diagnostic assistance. Our method comprises two stages: query category prediction and ranking. In the first stage, the query is formulated into a multi-graph structure with a set of selected subjects in the database to learn the relevance between the query subject and the existing subject categories through learning the multi-graph combination weights. This predicts the category that the query belongs to, based on which a set of subjects in the database are selected as candidate retrieval results. In the second stage, the relationship between these candidates and the query is further learned with a new multi-graph, which is used to rank the candidates. The returned subjects can be presented to physicians as reference cases for MCI diagnosis. We evaluated the proposed method on a cohort of 60 consecutive MCI subjects and 350 normal controls with MRI data under three imaging parameters: T1 weighted imaging (T1), Diffusion Tensor Imaging (DTI) and Arterial Spin Labeling (ASL). The proposed method achieves an average of 3.45 relevant samples in the top 5 returned results, significantly outperforming the compared baseline methods.
Affiliation(s)
- Yue Gao, Ehsan Adeli-M, Minjeong Kim, Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Sven Haller: Department of Neuroradiology, University Hospitals of Geneva and Faculty of Medicine of the University of Geneva, Switzerland

88.
Abstract
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment, but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) model to identify discriminative characteristics between different medical images, using a Pruned Dictionary based on Latent Semantic Topic description. We refer to this as PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful, with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.
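
A rough way to picture the pruning step described above: learn latent topics over the visual-word counts, score each word by how strongly it is tied to some topic, and drop weakly associated words from the dictionary. The toy sketch below uses scikit-learn's LDA; the scoring rule (maximum normalised topic-word probability) is a simplification for illustration, not the PD-LST significance measure, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# Toy bag-of-visual-words counts: 200 images, 500-word dictionary.
counts = rng.poisson(1.0, size=(200, 500))

lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(counts)
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)  # normalised topics

# Score each visual word by its strongest topic association; prune the weakest words.
word_score = topic_word.max(axis=0)
keep = np.argsort(word_score)[-300:]          # keep the 300 most topic-specific words
pruned_counts = counts[:, keep]
print("pruned dictionary size:", pruned_counts.shape[1])
```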

89. Su H, Xing F, Kong X, Xie Y, Zhang S, Yang L. Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders. Med Image Comput Comput Assist Interv 2015; 9351:383-390. [PMID: 27796013] [PMCID: PMC5081214] [DOI: 10.1007/978-3-319-24574-4_46]
Abstract
Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered background. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and sDAE with structured labels for cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods.
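
The detection step can be pictured as sparse coding against a shape dictionary augmented with "trivial" identity templates that absorb the parts belonging to touching neighbours. The sketch below uses orthogonal matching pursuit as a stand-in sparse solver; the dictionary, patch, and sparsity level are synthetic assumptions and this is not the authors' detector.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
d, n_atoms = 100, 20                       # patch length and number of learned cell shapes
shape_dict = rng.normal(size=(d, n_atoms))
trivial = np.eye(d)                        # trivial templates soak up touching-cell pixels
augmented = np.hstack([shape_dict, trivial])

# Synthetic test patch: one dictionary shape plus a localised "touching" artefact.
patch = shape_dict[:, 3].copy()
patch[:10] += 2.0

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=12).fit(augmented, patch)
coeffs = omp.coef_
print("strongest shape atom:", int(np.abs(coeffs[:n_atoms]).argmax()))   # typically recovers atom 3
print("trivial-template energy:", round(float(np.abs(coeffs[n_atoms:]).sum()), 2))
```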
Affiliation(s)
- Hai Su, Xiangfei Kong, Yuanpu Xie: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, FL 32611
- Fuyong Xing: Department of Electrical and Computer Engineering, University of Florida, FL 32611
- Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, NC 28223
- Lin Yang: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, FL 32611; Department of Electrical and Computer Engineering, University of Florida, FL 32611

90. Zheng Y, Wei B, Liu H, Xiao R, Gee JC. Measuring sparse temporal-variation for accurate registration of dynamic contrast-enhanced breast MR images. Comput Med Imaging Graph 2015; 46 Pt 1:73-80. [PMID: 26183649] [DOI: 10.1016/j.compmedimag.2015.05.004]
Abstract
Accurate registration of dynamic contrast-enhanced (DCE) MR breast images is challenging due to the temporal variations of image intensity and the non-rigidity of breast motion. The former can cause the well-known tumor shrinking/expanding problem in the registration process, while the latter complicates the task by requiring an estimation of non-rigid deformation. In this paper, we treat the intensity's temporal variations as "corruptions" that are spatially distributed in a sparse pattern and model them with an L1 norm and a Lorentzian norm. We show that these new image similarity measurements characterize the non-Gaussian property of the difference between the pre-contrast and post-contrast images and help to resolve the shrinking/expanding problem by discounting large intensity variations. Furthermore, we propose an iteratively re-weighted least-squares (IRLS) based method and a linear-programming based technique for optimizing the objective functions obtained with these two norms. We show that these optimization techniques outperform the traditional gradient-descent approach. Experimental results with sequential DCE-MR images from 28 patients show the superior performance of our algorithms.
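For illustration, a generic IRLS sketch for the robust objectives described above is given below: it minimizes the L1 or Lorentzian penalty of the residual r = A x - b by re-weighted least squares. The linear parameterization of the deformation (the matrix A and target b) is application-specific and assumed here, and the linear-programming variant is omitted; the function name and default parameters are illustrative.

import numpy as np

def irls(A, b, penalty="lorentzian", sigma=0.1, n_iter=50, eps=1e-6):
    # Minimize sum_i rho(r_i) with r = A x - b, where rho(r) = |r| (L1) or
    # rho(r) = log(1 + r^2 / (2 sigma^2)) (Lorentzian), via re-weighted least squares.
    x = np.linalg.lstsq(A, b, rcond=None)[0]            # ordinary least-squares initialization
    for _ in range(n_iter):
        r = A @ x - b
        if penalty == "l1":
            w = 1.0 / np.maximum(np.abs(r), eps)        # IRLS weight rho'(r)/r for the L1 penalty
        else:
            w = 1.0 / (2.0 * sigma**2 + r**2)           # Lorentzian weight (up to a constant factor)
        AW = A * w[:, None]                             # row-weighted design matrix, W A
        x = np.linalg.solve(A.T @ AW, AW.T @ b)         # solve the normal equations (A^T W A) x = A^T W b
    return x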
Affiliation(s)
- Yuanjie Zheng: School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, China; Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Benzheng Wei: College of Science and Engineering, Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Hui Liu: Department of Electronic Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Rui Xiao: Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- James C Gee: Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
91
Xu J, Xiang L, Wang G, Ganesan S, Feldman M, Shih NN, Gilmore H, Madabhushi A. Sparse Non-negative Matrix Factorization (SNMF) based color unmixing for breast histopathological image analysis. Comput Med Imaging Graph 2015; 46 Pt 1:20-29. [PMID: 25958195 DOI: 10.1016/j.compmedimag.2015.04.002] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2015] [Revised: 04/06/2015] [Accepted: 04/12/2015] [Indexed: 12/14/2022]
Abstract
Color deconvolution has emerged as a popular method for color unmixing as a pre-processing step for image analysis of digital pathology images. One deficiency of this approach is that the stain matrix is pre-defined, which requires specific knowledge of the data. This paper presents an unsupervised Sparse Non-negative Matrix Factorization (SNMF) based approach for color unmixing, which we evaluate on breast pathology images. Compared to standard Non-negative Matrix Factorization (NMF), the sparseness constraint imposed on the coefficient matrix encourages a more meaningful representation of the color components when separating the stains. In this work, SNMF is leveraged for decomposing pure stain colors in both Immunohistochemistry (IHC) and Hematoxylin and Eosin (H&E) images. SNMF is compared with Principal Component Analysis (PCA), Independent Component Analysis (ICA), Color Deconvolution (CD), and NMF based approaches. SNMF demonstrated improved performance in decomposing the brown diaminobenzidine (DAB) component from 36 IHC images as well as in accurately segmenting about 1400 nuclei and 500 lymphocytes from H&E images.
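For illustration, a minimal SNMF-style unmixing sketch in optical-density space is shown below, using multiplicative updates with an L1 penalty on the concentration matrix to encourage sparse stain mixtures. The exact sparseness constraint, initialization, and update rules of the paper may differ; the function name and parameter values are illustrative assumptions.

import numpy as np

def snmf_unmix(rgb, n_stains=2, sparsity=0.1, n_iter=200, eps=1e-8):
    # rgb: (H, W, 3) uint8 image. Returns a stain color basis W (3, n_stains)
    # and per-pixel concentration maps (n_stains, H, W).
    od = -np.log((rgb.reshape(-1, 3).T.astype(float) + 1.0) / 256.0)   # optical density, shape (3, N)
    rng = np.random.default_rng(0)
    W = rng.random((3, n_stains)) + eps
    H = rng.random((n_stains, od.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ od) / (W.T @ W @ H + sparsity + eps)    # multiplicative update with L1 term on H
        W *= (od @ H.T) / (W @ H @ H.T + eps)               # standard NMF update for the stain basis
        W /= np.linalg.norm(W, axis=0, keepdims=True) + eps  # keep stain vectors unit length
    return W, H.reshape(n_stains, *rgb.shape[:2])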
Affiliation(s)
- Jun Xu: Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing 210044, China; CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Lei Xiang: Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing 210044, China; CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Guanhao Wang: Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing 210044, China; CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Michael Feldman: Department of Pathology, Hospital of the University of Pennsylvania, PA 19104, USA
- Natalie Nc Shih: Department of Pathology, Hospital of the University of Pennsylvania, PA 19104, USA
- Hannah Gilmore: Institute for Pathology, University Hospitals Case Medical Center, Case Western Reserve University, OH 44106-7207, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, OH 44106, USA
92
Group sparsity model for stain unmixing in brightfield multiplex immunohistochemistry images. Comput Med Imaging Graph 2015; 46 Pt 1:30-39. [PMID: 25920325 DOI: 10.1016/j.compmedimag.2015.04.001] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2014] [Revised: 04/02/2015] [Accepted: 04/03/2015] [Indexed: 11/23/2022]
Abstract
Multiplex immunohistochemistry (IHC) staining is a new, emerging technique for the detection of multiple biomarkers within a single tissue section. It has become popular due to its significant efficiency and the rich diagnostic information it provides. The initial key step in multiplex IHC image analysis in digital pathology, accurately unmixing the IHC image and differentiating each of the stains, is therefore of tremendous clinical importance. Unmixing an RGB image acquired by a three-channel CCD color camera into more than three colors is very challenging and, to the best of our knowledge, has hardly been studied in the academic literature. This paper presents a novel stain unmixing algorithm for brightfield multiplex IHC images based on a group sparsity model. The proposed framework achieves robust unmixing for more than three chromogenic dyes while preserving the biological constraints of the biomarkers. Typically, a number of biomarkers co-localize in the same cell compartments, and this is known a priori. With this biological information in mind, the number of stains at one pixel has a fixed upper bound, equal to the number of co-localized biomarkers. By leveraging the group sparsity model, the fractions of stain contributions from co-localized biomarkers are explicitly modeled as one group to yield the least-squares solution within the group. A sparse solution is obtained among the groups, since ideally only one group of biomarkers is present at each pixel. The algorithm is evaluated on both synthetic and clinical data sets, and demonstrates better unmixing results than existing strategies.
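For illustration, a per-pixel group-sparse unmixing sketch is given below: co-localized chromogens are grouped, and a group-lasso penalty keeps only a few groups active at each pixel. Proximal gradient descent (ISTA) with group soft-thresholding is used as an assumed solver, and the reference stain matrix S, the grouping, and the penalty weight lam must be supplied; none of these names come from the paper itself.

import numpy as np

def group_sparse_unmix(od_pixel, S, groups, lam=0.05, n_iter=200):
    # od_pixel: (3,) optical-density vector for one pixel; S: (3, m) reference stain vectors;
    # groups: list of index arrays, each one group of co-localized stains.
    m = S.shape[1]
    c = np.zeros(m)
    step = 1.0 / (np.linalg.norm(S, 2) ** 2 + 1e-8)      # step size from the Lipschitz constant of the data term
    for _ in range(n_iter):
        grad = S.T @ (S @ c - od_pixel)                   # gradient of the least-squares data term
        z = c - step * grad
        for g in groups:                                  # group soft-thresholding (prox of the group-lasso term)
            norm_g = np.linalg.norm(z[g])
            z[g] = 0.0 if norm_g == 0 else max(0.0, 1.0 - step * lam / norm_g) * z[g]
        c = np.clip(z, 0.0, None)                         # stain fractions are non-negative
    return c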
93
Jiang M, Zhang S, Li H, Metaxas DN. Computer-aided diagnosis of mammographic masses using scalable image retrieval. IEEE Trans Biomed Eng 2014; 62:783-92. [PMID: 25361497 DOI: 10.1109/tbme.2014.2365494] [Citation(s) in RCA: 86] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Computer-aided diagnosis of masses in mammograms is important for the prevention of breast cancer. Many approaches tackle this problem through content-based image retrieval techniques. However, most of them fall short of scalability in the retrieval stage, and their diagnostic accuracy is therefore restricted. To overcome this drawback, we propose a scalable method for the retrieval and diagnosis of mammographic masses. Specifically, for a query mammographic region of interest (ROI), scale-invariant feature transform (SIFT) features are extracted and searched in a vocabulary tree that stores all the quantized features of previously diagnosed mammographic ROIs. In addition, to fully exploit the discriminative power of the SIFT features, contextual information in the vocabulary tree is employed to refine the weights of the tree nodes. The retrieved ROIs are then used to determine whether the query ROI contains a mass. The presented method has excellent scalability due to the low spatial and temporal cost of the vocabulary tree. Extensive experiments are conducted on a large dataset of 11,553 ROIs extracted from the Digital Database for Screening Mammography (DDSM), which demonstrate the accuracy and scalability of our approach.
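For illustration, below is a minimal Python sketch of retrieval-based mass diagnosis with a flat visual vocabulary and TF-IDF scoring. A true vocabulary tree would use hierarchical k-means for faster quantization, and the contextual node re-weighting described above is omitted; the class name, vocabulary size, neighbor count, and simple majority vote are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

class ROIRetriever:
    def __init__(self, n_words=1000, seed=0):
        self.n_words = n_words
        self.kmeans = KMeans(n_clusters=n_words, random_state=seed, n_init=10)

    def _histogram(self, descriptors):
        # Quantize SIFT descriptors into visual words and count occurrences.
        words = self.kmeans.predict(descriptors)
        return np.bincount(words, minlength=self.n_words).astype(float)

    def fit(self, sift_per_roi, labels):
        # sift_per_roi: list of (n_i, 128) SIFT arrays; labels: 1 = mass, 0 = normal.
        self.kmeans.fit(np.vstack(sift_per_roi))
        tf = np.stack([self._histogram(d) for d in sift_per_roi])
        df = np.maximum((tf > 0).sum(axis=0), 1)
        self.idf = np.log(len(sift_per_roi) / df)              # inverse document frequency
        self.db = self._l2(tf * self.idf)                      # TF-IDF signatures of diagnosed ROIs
        self.labels = np.asarray(labels)
        return self

    def diagnose(self, sift_query, k=10):
        q = self._l2(self._histogram(sift_query) * self.idf)
        top = np.argsort(self.db @ q)[::-1][:k]                # most similar previously diagnosed ROIs
        return bool(self.labels[top].mean() > 0.5), top        # majority vote: mass or not

    @staticmethod
    def _l2(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)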