301
Xiang L, Qiao Y, Nie D, An L, Wang Q, Shen D. Deep Auto-context Convolutional Neural Networks for Standard-Dose PET Image Estimation from Low-Dose PET/MRI. Neurocomputing 2017; 267:406-416. [PMID: 29217875] [PMCID: PMC5714510] [DOI: 10.1016/j.neucom.2017.06.048]
Abstract
Positron emission tomography (PET) is an essential technique in many clinical applications such as tumor detection and brain disorder diagnosis. Obtaining high-quality PET images requires a standard-dose radioactive tracer, which inevitably carries a risk of radiation damage. To reduce the patient's radiation exposure while maintaining high image quality, we propose a deep learning architecture to estimate the high-quality standard-dose PET (SPET) image from the combination of the low-quality low-dose PET (LPET) image and the accompanying T1-weighted acquisition from magnetic resonance imaging (MRI). Specifically, we adapt a convolutional neural network (CNN) to accept the two-channel input of LPET and T1, and directly learn the end-to-end mapping between the inputs and the SPET output. We then integrate multiple CNN modules following the auto-context strategy, such that the tentative SPET estimate of an early CNN is iteratively refined by subsequent CNNs. Validation on real human brain PET/MRI data shows that our method provides estimation quality competitive with state-of-the-art methods. It is also highly efficient at test time: estimating an entire SPET image for a new subject takes ~2 seconds, versus ~16 minutes for the state-of-the-art method. These results demonstrate the potential of our method in real clinical applications.
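The auto-context cascade described in this abstract can be sketched as follows. Here `stage` is a hypothetical stand-in for one trained CNN module, reduced to a per-voxel weighted combination purely so the iteration structure is visible; the real modules are learned convolutional networks.

```python
import numpy as np

def stage(lpet, t1, prev, w):
    # Hypothetical stand-in for one trained CNN module: each stage sees the
    # LPET image, the T1 MRI, and the previous tentative SPET estimate.
    return w[0] * lpet + w[1] * t1 + w[2] * prev

def auto_context_estimate(lpet, t1, stage_weights):
    est = np.zeros_like(lpet)      # no context before the first CNN
    for w in stage_weights:        # each later stage refines the estimate
        est = stage(lpet, t1, est, w)
    return est
```

The key point of the strategy is only that each module receives the previous module's output as an extra input channel.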
Affiliation(s)
- Lei Xiang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yu Qiao
- Shenzhen key lab of Comp. Vis. & Pat. Rec., Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Le An
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Qian Wang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
302
Wang H, Zhou Z, Li Y, Chen Z, Lu P, Wang W, Liu W, Yu L. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images. EJNMMI Res 2017; 7:11. [PMID: 28130689] [PMCID: PMC5272853] [DOI: 10.1186/s13550-017-0260-9]
Abstract
BACKGROUND This study aimed to compare one state-of-the-art deep learning method and four classical machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer (NSCLC) from 18F-FDG PET/CT images. A second objective was to compare the discriminative power of the recently popular PET/CT texture features with widely used diagnostic features such as tumor size, CT value, SUV, image contrast, and intensity standard deviation. The four classical machine learning methods were random forests, support vector machines, adaptive boosting, and artificial neural networks; the deep learning method was a convolutional neural network (CNN). The five methods were evaluated using 1397 lymph nodes collected from PET/CT images of 168 patients, with the corresponding pathology results as the gold standard. The comparison used 10 times 10-fold cross-validation based on the criteria of sensitivity, specificity, accuracy (ACC), and area under the ROC curve (AUC). For each classical method, different input features were compared to select the optimal feature set. Based on the optimal feature sets, the classical methods were compared with the CNN, as well as with human doctors from our institute. RESULTS For the classical methods, the diagnostic features yielded 81~85% ACC and 0.87~0.92 AUC, significantly higher than the results with texture features. The CNN's sensitivity, specificity, ACC, and AUC were 84%, 88%, 86%, and 0.91, respectively; there was no significant difference between the CNN and the best classical method. The sensitivity, specificity, and ACC of the human doctors were 73%, 90%, and 82%, respectively. All five machine learning methods had higher sensitivities but lower specificities than the human doctors.
CONCLUSIONS The performance of the CNN is not significantly different from the best classical methods or from human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because the CNN needs neither tumor segmentation nor feature calculation, it is more convenient and more objective than the classical methods. However, the CNN does not make use of the important diagnostic features, which have proved more discriminative than the texture features for classifying small-sized lymph nodes. Incorporating the diagnostic features into the CNN is therefore a promising direction for future research.
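For reference, the comparison criteria used in this study (sensitivity, specificity, ACC) follow directly from confusion-matrix counts; a minimal sketch:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Sensitivity: fraction of truly metastatic nodes correctly flagged.
    sensitivity = tp / (tp + fn)
    # Specificity: fraction of truly benign nodes correctly cleared.
    specificity = tn / (tn + fp)
    # Overall accuracy (ACC).
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

The counts here are illustrative, not the study's actual confusion matrix.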
Affiliation(s)
- Hongkai Wang
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, No. 2 Linggong Street, Ganjingzi District, Dalian, Liaoning, 116024, China
- Zongwei Zhou
- Department of Biomedical Informatics and the College of Health Solutions, Arizona State University, 13212 East Shea Boulevard, Scottsdale, AZ, 85259, USA
- Yingci Li
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Zhonghua Chen
- Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, No. 2 Linggong Street, Ganjingzi District, Dalian, Liaoning, 116024, China
- Peiou Lu
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Wenzhi Wang
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
- Wanyu Liu
- HIT-INSA Sino French Research Centre for Biomedical Imaging, Harbin Institute of Technology, Harbin, Heilongjiang, 150001, China
- Lijuan Yu
- Center of PET/CT, The Affiliated Tumor Hospital of Harbin Medical University, 150 Haping Road, Nangang District, Harbin, Heilongjiang Province, 150081, China
303
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026] [DOI: 10.1016/j.media.2017.07.005]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands.
- Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
304
Chen H, Zhang Y, Kalra MK, Lin F, Chen Y, Liao P, Zhou J, Wang G. Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network. IEEE Trans Med Imaging 2017; 36:2524-2535. [PMID: 28622671] [PMCID: PMC5727581] [DOI: 10.1109/tmi.2017.2715284]
Abstract
Given the potential risk of X-ray radiation to the patient, low-dose CT has attracted considerable interest in the medical imaging field. Currently, the mainstream low-dose CT methods are vendor-specific sinogram-domain filtration and iterative reconstruction algorithms, but these need access to raw data, whose formats are not transparent to most users. Because the statistical characteristics of noise are difficult to model in the image domain, existing methods that directly process reconstructed images cannot suppress image noise well while preserving structural details. Inspired by deep learning, we combine an autoencoder, a deconvolution network, and shortcut connections into a residual encoder-decoder convolutional neural network (RED-CNN) for low-dose CT imaging. After patch-based training, the proposed RED-CNN achieves performance competitive with state-of-the-art methods in both simulated and clinical cases; in particular, it has been favorably evaluated in terms of noise suppression, structural preservation, and lesion detection.
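The shortcut idea behind RED-CNN can be illustrated in miniature: the layers predict a residual that is added back onto the block's input, rather than regenerating the clean image from scratch. The toy `encoder`/`decoder` functions below are hypothetical scalar stand-ins for the paper's convolution/deconvolution pairs, kept only to show where the shortcut enters.

```python
import numpy as np

def encoder(x, w):
    # Toy stand-in for stacked conv layers followed by ReLU.
    return np.maximum(w * x, 0.0)

def decoder(h, w):
    # Toy stand-in for the mirrored deconvolution layers.
    return w * h

def red_block(x, w_enc, w_dec):
    # Shortcut connection: the decoder output is added to the block input,
    # so the layers only have to model the residual.
    return decoder(encoder(x, w_enc), w_dec) + x
```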
305
Ahmad A, Asif A, Rajpoot N, Arif M, Minhas FUAA. Correlation Filters for Detection of Cellular Nuclei in Histopathology Images. J Med Syst 2017; 42:7. [DOI: 10.1007/s10916-017-0863-8]
306
Caliskan A, Yuksel ME. Classification of coronary artery disease data sets by using a deep neural network. EuroBiotech Journal 2017. [DOI: 10.24190/issn2564-615x/2017/04.03]
Abstract
In this study, a deep neural network classifier is proposed for the classification of coronary artery disease (CAD) medical data sets. The proposed classifier is tested on reference CAD data sets from the literature and compared with popular representative classification methods. Experimental results show that the deep neural network classifier offers considerably better accuracy, sensitivity, and specificity than the other methods. The proposed method presents itself as an easily accessible and cost-effective alternative to existing methods for the diagnosis of CAD, and it can be applied to check whether a subject under examination has at least one occluded coronary artery.
Affiliation(s)
- Abdullah Caliskan
- Department of Biomedical Engineering, Erciyes University, Kayseri, Turkey
- Mehmet Emin Yuksel
- Department of Biomedical Engineering, Erciyes University, Kayseri, Turkey
308
Cui J, Liu X, Wang Y, Liu H. Deep reconstruction model for dynamic PET images. PLoS One 2017; 12:e0184667. [PMID: 28934254] [PMCID: PMC5608245] [DOI: 10.1371/journal.pone.0184667]
Abstract
Accurate and robust tomographic reconstruction from dynamically acquired positron emission tomography (PET) data is a difficult problem. Conventional methods, such as the maximum likelihood expectation maximization (MLEM) algorithm, which reconstructs the activity distribution from individual frames, may yield inaccurate results because of the checkerboard effect and limited photon counts. In this paper, we propose a stacked sparse autoencoder based reconstruction framework for dynamic PET imaging. The dynamic reconstruction problem is formulated in a deep learning representation, where the encoding layers extract prototype features, such as edges, so that in the decoding layers the reconstructed results are obtained as a combination of those features. Qualitative and quantitative results on both Monte Carlo simulation data and real patient data demonstrate the effectiveness of our method.
Affiliation(s)
- Jianan Cui
- State Key Laboratory of Modern Optical Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou, China
- Xin Liu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yile Wang
- State Key Laboratory of Modern Optical Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou, China
- Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, Department of Optical Engineering, Zhejiang University, Hangzhou, China
309
Shi J, Wu J, Li Y, Zhang Q, Ying S. Histopathological Image Classification With Color Pattern Random Binary Hashing-Based PCANet and Matrix-Form Classifier. IEEE J Biomed Health Inform 2017; 21:1327-1337. [DOI: 10.1109/jbhi.2016.2602823]
310
Breast cancer cell nuclei classification in histopathology images using deep neural networks. Int J Comput Assist Radiol Surg 2017; 13:179-191. [DOI: 10.1007/s11548-017-1663-9]
311
Yan Y, Tan Z, Su N, Zhao C. Building Extraction Based on an Optimized Stacked Sparse Autoencoder of Structure and Training Samples Using LIDAR DSM and Optical Images. Sensors 2017; 17:1957. [PMID: 28837118] [PMCID: PMC5621110] [DOI: 10.3390/s17091957]
Abstract
In this paper, a building extraction method is proposed based on a stacked sparse autoencoder (SSAE) with an optimized structure and optimized training samples. Building extraction plays an important role in urban construction and planning, but effects such as limited resolution, poor correction, and terrain influence reduce extraction accuracy. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve extraction. Using a digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve extraction to a certain extent, but their feature extraction has shortcomings. Since an SSAE neural network can learn the essential characteristics of the data in depth, we employ an SSAE to extract buildings from the combined DSM data and optical imagery. We give a better strategy for setting the SSAE network structure, and present an approach to setting the number and proportion of training samples for better training. The optical data and DSM are combined as input to the optimized SSAE; after training on the optimized samples, the resulting network extracts buildings with high accuracy and good robustness.
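The "sparse" part of a stacked sparse autoencoder is typically enforced with a KL-divergence penalty that pushes each hidden unit's mean activation toward a small target rho. The sketch below shows that standard penalty; whether this paper uses exactly this formulation is an assumption.

```python
import numpy as np

def kl_sparsity_penalty(hidden_activations, rho=0.05):
    # Mean activation of each hidden unit across the batch (values in (0, 1)).
    rho_hat = hidden_activations.mean(axis=0)
    # KL(rho || rho_hat) summed over hidden units: zero when every unit's
    # mean activation equals the sparsity target rho, positive otherwise.
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))
```

During training this term is added (with a weight) to the reconstruction loss, which is what drives the hidden layers toward sparse, prototype-like features.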
Affiliation(s)
- Yiming Yan
- Department of Information Engineering, Harbin Engineering University, Harbin 150001, China
- Zhichao Tan
- Department of Information Engineering, Harbin Engineering University, Harbin 150001, China
- Nan Su
- Department of Information Engineering, Harbin Engineering University, Harbin 150001, China
- Chunhui Zhao
- Department of Information Engineering, Harbin Engineering University, Harbin 150001, China
312
Xu ZC, Wang P, Qiu WR, Xiao X. iSS-PC: Identifying Splicing Sites via Physical-Chemical Properties Using Deep Sparse Auto-Encoder. Sci Rep 2017; 7:8222. [PMID: 28811565] [PMCID: PMC5557945] [DOI: 10.1038/s41598-017-08523-8]
Abstract
Gene splicing is one of the most significant biological processes in eukaryotic gene expression: RNA splicing can cause a pre-mRNA to produce one or more mature messenger RNAs carrying coded information with multiple biological functions. Identifying splicing sites in DNA/RNA sequences therefore matters both for biomedical research and for the discovery of new drugs. However, doing so with experimental techniques alone is expensive and time consuming, so new computational methods are needed. To identify splice donor and splice acceptor sites accurately and quickly, we constructed a deep sparse auto-encoder model with two hidden layers, called iSS-PC, based on the minimum error law, in which twelve physical-chemical properties of the dinucleotides within DNA were incorporated into PseDNC to formulate sequence samples via a battery of cross-covariance and auto-covariance transformations. Five-fold cross-validation results on the same benchmark data sets indicated that the new predictor remarkably outperformed the existing prediction methods in this field. Furthermore, it is expected that many other related problems can be studied by this approach. For accurate and fast classification, an easy-to-use web-server for identifying splicing sites has been established for free access at: http://www.jci-bioinfo.cn/iSS-PC.
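The auto-covariance transformation mentioned in this abstract turns a variable-length series of physico-chemical property values (one per dinucleotide) into fixed-length features by correlating the series with a lagged copy of itself. A sketch, with illustrative property values:

```python
def auto_covariance(values, lag):
    # Covariance of a physico-chemical property with itself `lag`
    # dinucleotides downstream, averaged along the sequence.
    n = len(values)
    mean = sum(values) / n
    return sum((values[i] - mean) * (values[i + lag] - mean)
               for i in range(n - lag)) / (n - lag)
```

Computing this for several lags and several properties (and cross-covariances between property pairs) gives the fixed-length vector fed to the auto-encoder; the exact normalization used in iSS-PC is an assumption here.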
Affiliation(s)
- Zhao-Chun Xu
- Computer Department, Jing-De-Zhen Ceramic Institute, Jing-De-Zhen, 333403, China.
- Peng Wang
- Computer Department, Jing-De-Zhen Ceramic Institute, Jing-De-Zhen, 333403, China
- Wang-Ren Qiu
- Computer Department, Jing-De-Zhen Ceramic Institute, Jing-De-Zhen, 333403, China
- Department of Computer Science and Bond Life Science Center, University of Missouri, Columbia, MO, USA
- Xuan Xiao
- Computer Department, Jing-De-Zhen Ceramic Institute, Jing-De-Zhen, 333403, China
- Gordon Life Science Institute, Boston, Massachusetts, 02478, United States of America
313
Song Y, Li Q, Huang H, Feng D, Chen M, Cai W. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification. IEEE Trans Med Imaging 2017; 36:1636-1649. [PMID: 28358678] [DOI: 10.1109/tmi.2017.2687466]
Abstract
Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high-content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method that reduces the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets spanning different imaging types and applications: the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and over commonly used dimension reduction techniques.
314
Zhang J, Zhong Y, Wang X, Ni G, Du X, Liu J, Liu L, Liu Y. Computerized detection of leukocytes in microscopic leukorrhea images. Med Phys 2017; 44:4620-4629. [DOI: 10.1002/mp.12381]
Affiliation(s)
- Jing Zhang
- School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
- Ya Zhong
- School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
- Xiangzhou Wang
- School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
- Guangming Ni
- School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
- Xiaohui Du
- School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
- Juanxiu Liu
- School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
- Lin Liu
- School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yong Liu
- School of Optoelectronic Information, University of Electronic Science and Technology of China, Chengdu 610054, China
315
Abstract
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Affiliation(s)
- Dinggang Shen
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
316
An Advanced Deep Learning Approach for Ki-67 Stained Hotspot Detection and Proliferation Rate Scoring for Prognostic Evaluation of Breast Cancer. Sci Rep 2017; 7:3213. [PMID: 28607456] [PMCID: PMC5468356] [DOI: 10.1038/s41598-017-03405-5]
Abstract
Being a non-histone protein, Ki-67 is one of the essential biomarkers for the immunohistochemical assessment of proliferation rate in breast cancer screening and grading. The Ki-67 signature is always sensitive to radiotherapy and chemotherapy. Owing to random variations in the morphology, color, and intensity of cell nuclei (immunopositive and immunonegative), manual assessment of the Ki-67 score is error-prone and time-consuming. Several machine learning approaches have therefore been reported, but none has addressed deep learning based hotspot detection and proliferation scoring. In this article, we suggest an advanced deep learning model for computerized recognition of candidate hotspots and subsequent proliferation rate scoring by quantifying Ki-67 appearance in breast cancer immunohistochemical images. Unlike existing Ki-67 scoring techniques, our methodology uses a gamma mixture model (GMM) with expectation-maximization for seed point detection and patch selection, and a deep learning model comprising a decision layer for hotspot detection and proliferation scoring. Experimental results show a precision of 0.93, recall of 0.88, and F-score of 0.91. The model's performance has also been compared with pathologists' manual annotations and with recently published articles. The proposed deep learning framework should prove reliable and beneficial to junior and senior pathologists for fast and efficient Ki-67 scoring.
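The gamma-mixture-with-EM step mentioned above rests on computing, for each pixel value, the posterior responsibility of each gamma component (the E-step). A minimal sketch, assuming a shape/scale parameterization; the component parameters below are illustrative, not the paper's fitted values:

```python
import math

def gamma_pdf(x, shape, scale):
    # Density of the gamma distribution at x > 0.
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def responsibilities(x, components):
    # components: list of (weight, shape, scale) per gamma component.
    dens = [w * gamma_pdf(x, k, th) for w, k, th in components]
    total = sum(dens)
    return [d / total for d in dens]  # E-step posteriors, summing to 1
```

The M-step would then re-estimate each component's weight, shape, and scale from these posteriors; seed points are taken where the "nucleus" component dominates.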
317
Wan X, Zhao C. Local receptive field constrained stacked sparse autoencoder for classification of hyperspectral images. J Opt Soc Am A Opt Image Sci Vis 2017; 34:1011-1020. [PMID: 29036085] [DOI: 10.1364/josaa.34.001011]
Abstract
As a competitive machine learning algorithm, the stacked sparse autoencoder (SSA) has become highly popular for exploiting high-level features in the classification of hyperspectral images (HSIs). In the usual SSA architecture, the nodes between adjacent layers are fully connected and must be iteratively fine-tuned during pretraining; however, input nodes far from a given node of a subsequent layer are less likely to be densely correlated with it. Therefore, to reduce classification error and increase the learning rate, this paper proposes a general framework of locally connected SSA: a biologically inspired local receptive field (LRF) constrained SSA architecture that simultaneously characterizes the local correlations of spectral features and extracts high-level feature representations of hyperspectral data. The appropriate receptive field constraint is updated concurrently by measuring the spatial distances from neighboring nodes to the corresponding node. Finally, an efficient random forest classifier is cascaded to the last hidden layer of the SSA architecture as a benchmark classifier. Experimental results on two real HSI datasets demonstrate that the proposed hierarchical LRF constrained stacked sparse autoencoder and random forest (SSARF) yields encouraging results relative to contrastive methods, improving overall accuracy by 0.72%-10.87% on the Indian Pines dataset and 0.74%-7.90% on the Kennedy Space Center dataset, while requiring less running time than similar SSARF based methodology.
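In essence, a local receptive field constraint masks the fully connected weight matrix so each hidden node connects only to spectrally nearby inputs. A 1-D sketch of such a mask; the even node placement and fixed radius are illustrative assumptions, not the paper's distance-based update rule:

```python
import numpy as np

def lrf_mask(n_in, n_hidden, radius):
    # Spread hypothetical hidden-node "centers" evenly over the input band indices.
    centers = np.linspace(0, n_in - 1, n_hidden)
    idx = np.arange(n_in)
    # A connection is kept only when the input index lies within `radius`
    # of the hidden node's center; all other weights are pruned to zero.
    return (np.abs(idx[None, :] - centers[:, None]) <= radius).astype(float)

# Applied as: w_constrained = w_full * lrf_mask(n_in, n_hidden, radius)
```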
318
Choroid segmentation from Optical Coherence Tomography with graph-edge weights learned from deep convolutional neural networks. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.01.023]
319
Xia J, Ji X. [Value of computer-based deep learning and intelligent image diagnosis in the pathological diagnosis of well-differentiated gastric adenocarcinoma]. Shijie Huaren Xiaohua Zazhi 2017; 25:1043-1049. [DOI: 10.11569/wcjd.v25.i12.1043]
Abstract
With the development of computer technology, machine learning has been studied in depth and applied across many fields. Its application in medicine will transform current practice: using machine learning to process medicine's vast data can improve diagnostic accuracy, guide treatment, and help assess prognosis. Deep learning, a branch of machine learning, has been widely applied to intelligent pathological image diagnosis, with good results already achieved in mitosis detection, nuclear segmentation and detection, and tissue classification. Histopathologically, well-differentiated gastric adenocarcinoma is easily missed because its architectural and cytological atypia is subtle and biopsy specimens are often superficial. Existing intelligent image diagnosis systems for the pathology of early gastric cancer include no study of glandular lumen roundness. Roundness measurement can convert features such as irregular or dilated glandular lumens into concrete quantitative indices, so that diagnostic analysis can proceed from the numerical values, providing a reference for pathological diagnosis.
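The roundness index proposed here for glandular lumens is conventionally defined as 4*pi*A/P^2: exactly 1 for a perfect circle and smaller as the outline becomes irregular or elongated. A minimal sketch of that standard definition (the abstract does not specify its exact formula, so this is the usual choice):

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2 -> 1.0 for a perfect circle, < 1 as the lumen
    # outline becomes irregular or dilated.
    return 4.0 * math.pi * area / perimeter ** 2
```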
320
Alegro M, Theofilas P, Nguy A, Castruita PA, Seeley W, Heinsen H, Ushizima DM, Grinberg LT. Automating cell detection and classification in human brain fluorescent microscopy images using dictionary learning and sparse coding. J Neurosci Methods 2017; 282:20-33. [PMID: 28267565] [PMCID: PMC5600818] [DOI: 10.1016/j.jneumeth.2017.03.002]
Abstract
BACKGROUND Immunofluorescence (IF) plays a major role in quantifying protein expression in situ and understanding cell function, and is widely applied in assessing disease mechanisms and in drug discovery research. Automation of IF analysis can transform studies using experimental cell models. However, IF analysis of postmortem human tissue still relies mostly on manual interaction, which is low-throughput and error-prone, leading to poor inter- and intra-observer reproducibility. Human postmortem brain samples challenge neuroscientists because of the high level of autofluorescence caused by accumulation of lipofuscin pigment during aging, hindering systematic analyses. We propose a method for automating cell counting and classification in IF microscopy of human postmortem brains. Our algorithm speeds up the quantification task while improving reproducibility. NEW METHOD Dictionary learning and sparse coding are used to construct improved cell representations from IF images; these models feed the detection and segmentation methods, and classification is performed using color distances between cells and a learned set. RESULTS Our method successfully detected and classified cells in 49 human brain images. We evaluated our results with true positive, false positive, false negative, precision, recall, false positive rate, and F1 score metrics, and also measured user experience and the time saved relative to manual counting. COMPARISON WITH EXISTING METHODS We compared our results to four open-access IF-based cell-counting tools available in the literature; our method showed improved accuracy on all data samples. CONCLUSION The proposed method satisfactorily detects and classifies cells from human postmortem brain IF images, with potential to be generalized to other counting tasks.
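Sparse coding against a learned dictionary D solves min over a of ||x - Da||^2 / 2 + lam * ||a||_1. One standard solver step (ISTA: a gradient step on the data term followed by soft-thresholding) is sketched below; it is a common choice for this problem, not necessarily the solver used in the paper.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrinks toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_step(D, x, a, lam, step):
    # One iteration of ISTA for min_a ||x - D a||^2 / 2 + lam * ||a||_1.
    grad = D.T @ (D @ a - x)                  # gradient of the data term
    return soft_threshold(a - step * grad, lam * step)
```

Iterating this to convergence yields the sparse code a, the "improved cell representation" fed to the downstream detection and segmentation stages.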
Collapse
Affiliation(s)
- Maryana Alegro
- Memory and Aging Center, University of California San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA.
| | - Panagiotis Theofilas
- Memory and Aging Center, University of California San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA.
| | - Austin Nguy
- Memory and Aging Center, University of California San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA.
| | - Patricia A Castruita
- Memory and Aging Center, University of California San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA.
| | - William Seeley
- Memory and Aging Center, University of California San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA.
| | - Helmut Heinsen
- Medical School of the University of São Paulo, Av. Reboucas 381, São Paulo, SP 05401-000, Brazil.
| | - Daniela M Ushizima
- Computational Research Division, Lawrence Berkeley National Laboratory, 1 Cyclotron Rd, Berkeley, CA 94720, USA; Berkeley Institute for Data Science, University of California Berkeley, Berkeley, CA 94720, USA.
| | - Lea T Grinberg
- Memory and Aging Center, University of California San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA.
| |
Collapse
|
321
|
Chen JM, Li Y, Xu J, Gong L, Wang LW, Liu WL, Liu J. Computer-aided prognosis on breast cancer with hematoxylin and eosin histopathology images: A review. Tumour Biol 2017; 39:1010428317694550. [PMID: 28347240 DOI: 10.1177/1010428317694550] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
With the advance of digital pathology, image analysis has begun to show its advantages in extracting information from hematoxylin and eosin histopathology images. Generally, histological features in hematoxylin and eosin images are measured to evaluate tumor grade and prognosis for breast cancer. This review summarizes recent work in image analysis of hematoxylin and eosin histopathology images for breast cancer prognosis. First, prognostic factors for breast cancer based on hematoxylin and eosin histopathology images are summarized. Then, the usual procedures of image analysis for breast cancer prognosis are systematically reviewed, including image acquisition, image preprocessing, image detection and segmentation, and feature extraction. Finally, the prognostic value of image features and of image feature-based prognostic models is evaluated. We also discuss limitations of current analyses and directions for future research.
Collapse
Affiliation(s)
- Jia-Mei Chen
- Department of Oncology, Zhongnan Hospital of Wuhan University, Hubei Key Laboratory of Tumor Biological Behaviors & Hubei Cancer Clinical Study Center, Wuhan, China
| | - Yan Li
- Department of Oncology, Zhongnan Hospital of Wuhan University, Hubei Key Laboratory of Tumor Biological Behaviors & Hubei Cancer Clinical Study Center, Wuhan, China
- Department of Peritoneal Cancer Surgery, Beijing Shijitan Hospital of Capital Medical University, Beijing, China
| | - Jun Xu
- Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing, China
| | - Lei Gong
- Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing, China
| | - Lin-Wei Wang
- Department of Oncology, Zhongnan Hospital of Wuhan University, Hubei Key Laboratory of Tumor Biological Behaviors & Hubei Cancer Clinical Study Center, Wuhan, China
| | - Wen-Lou Liu
- Department of Oncology, Zhongnan Hospital of Wuhan University, Hubei Key Laboratory of Tumor Biological Behaviors & Hubei Cancer Clinical Study Center, Wuhan, China
| | - Juan Liu
- State Key Laboratory of Software Engineering, School of Computer, Wuhan University, Wuhan, China
| |
Collapse
|
322
|
Wang Q, Zheng Y, Yang G, Jin W, Chen X, Yin Y. Multiscale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification. IEEE J Biomed Health Inform 2017; 22:184-195. [PMID: 28333649 DOI: 10.1109/jbhi.2017.2685586] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
We propose a new multiscale rotation-invariant convolutional neural network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography. MRCNN employs Gabor-local binary pattern features, which introduce a property valuable in image analysis: invariance to image scale and rotation. In addition, we offer an approach to the class-imbalance problem that affects most existing work, accomplished by varying the overlap between adjacent patches. Experimental results on a public interstitial lung disease database show the superior performance of the proposed method over the state of the art.
Collapse
|
323
|
Pan X, Li L, Yang H, Liu Z, Yang J, Zhao L, Fan Y. Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.08.103] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
324
|
Wan T, Cao J, Chen J, Qin Z. Automated grading of breast cancer histopathology using cascaded ensemble with combination of multi-level image features. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.084] [Citation(s) in RCA: 72] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
325
|
Chen H, Zhang Y, Zhang W, Liao P, Li K, Zhou J, Wang G. Low-dose CT via convolutional neural network. BIOMEDICAL OPTICS EXPRESS 2017; 8:679-694. [PMID: 28270976 PMCID: PMC5330597 DOI: 10.1364/boe.8.000679] [Citation(s) in RCA: 332] [Impact Index Per Article: 41.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2016] [Revised: 12/26/2016] [Accepted: 12/27/2016] [Indexed: 05/11/2023]
Abstract
To reduce the potential radiation risk, low-dose CT has attracted increasing attention. However, simply lowering the radiation dose significantly degrades image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning without access to the original projection data. A deep convolutional neural network is used to map low-dose CT images to their corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate the great potential of the proposed method for artifact reduction and structure preservation. In terms of quantitative metrics, the proposed method shows substantial improvements in PSNR, RMSE, and SSIM over competing state-of-the-art methods. Furthermore, our method is one order of magnitude faster than iterative reconstruction and patch-based image denoising methods.
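The patch-by-patch mapping described above rests on a simple scaffold: extract overlapping patches, restore each with the trained model, and average the overlapping predictions back into a full image. A minimal NumPy sketch of that scaffold, with the CNN abstracted as an arbitrary `model` callable (names and default sizes are illustrative, not the paper's settings):

```python
import numpy as np

def restore_patchwise(img, model, patch=8, stride=4):
    # Apply a patch-level restoration model over overlapping patches and
    # average the overlapping predictions back into a full-size image.
    H, W = img.shape
    out = np.zeros((H, W))
    hits = np.zeros((H, W))   # how many patch predictions cover each pixel
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            out[i:i+patch, j:j+patch] += model(img[i:i+patch, j:j+patch])
            hits[i:i+patch, j:j+patch] += 1.0
    return out / np.maximum(hits, 1.0)
```

Overlap averaging (stride smaller than the patch size) suppresses blocking artifacts at patch borders, which is why patch-based restoration pipelines typically use it.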
Collapse
Affiliation(s)
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu 610065, China
- National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610065, China
| | - Yi Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Weihua Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Peixi Liao
- Department of Scientific Research and Education, The Sixth People’s Hospital of Chengdu, Chengdu 610065, China
| | - Ke Li
- College of Computer Science, Sichuan University, Chengdu 610065, China
- National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610065, China
| | - Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
| |
Collapse
|
326
|
Hassan TM, Elmogy M, Sallam ES. Diagnosis of Focal Liver Diseases Based on Deep Learning Technique for Ultrasound Images. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2017. [DOI: 10.1007/s13369-016-2387-9] [Citation(s) in RCA: 56] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
|
327
|
Di Cataldo S, Ficarra E. Mining textural knowledge in biological images: Applications, methods and trends. Comput Struct Biotechnol J 2016; 15:56-67. [PMID: 27994798 PMCID: PMC5155047 DOI: 10.1016/j.csbj.2016.11.002] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2016] [Revised: 11/14/2016] [Accepted: 11/15/2016] [Indexed: 12/18/2022] Open
Abstract
Texture analysis is a major task in many areas of computer vision and pattern recognition, including biological imaging. Indeed, visual textures can be exploited to distinguish specific tissues or cells in a biological sample, to highlight chemical reactions between molecules, as well as to detect subcellular patterns that can be evidence of certain pathologies. This makes automated texture analysis fundamental in many applications of biomedicine, such as the accurate detection and grading of multiple types of cancer, the differential diagnosis of autoimmune diseases, or the study of physiological processes. Due to their specific characteristics and challenges, the design of texture analysis systems for biological images has attracted ever-growing attention in the last few years. In this paper, we perform a critical review of this important topic. First, we provide a general definition of texture analysis and discuss its role in the context of bioimaging, with examples of applications from the recent literature. Then, we review the main approaches to automated texture analysis, with special attention to the methods of feature extraction and encoding that can be successfully applied to microscopy images of cells or tissues. Our aim is to provide an overview of the state of the art, as well as a glimpse into the latest and future trends of research in this area.
Collapse
Affiliation(s)
- Santa Di Cataldo
- Dept. of Computer and Control Engineering, Politecnico di Torino, Cso Duca degli Abruzzi 24, Torino 10129, Italy
| | | |
Collapse
|
328
|
Chen H, Qi X, Yu L, Dou Q, Qin J, Heng PA. DCAN: Deep contour-aware networks for object instance segmentation from histology images. Med Image Anal 2016; 36:135-146. [PMID: 27898306 DOI: 10.1016/j.media.2016.11.004] [Citation(s) in RCA: 201] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2016] [Revised: 11/09/2016] [Accepted: 11/10/2016] [Indexed: 12/15/2022]
Abstract
In histopathological image analysis, the morphology of histological structures, such as glands and nuclei, has been routinely adopted by pathologists to assess the malignancy degree of adenocarcinomas. Accurate detection and segmentation of these objects of interest from histology images is an essential prerequisite to obtain reliable morphological statistics for quantitative diagnosis. While manual annotation is error-prone, time-consuming and operator-dependent, automated detection and segmentation of objects of interest from histology images can be very challenging due to the large appearance variation, existence of strong mimics, and serious degeneration of histological structures. In order to meet these challenges, we propose a novel deep contour-aware network (DCAN) under a unified multi-task learning framework for more accurate detection and segmentation. In the proposed network, multi-level contextual features are explored based on an end-to-end fully convolutional network (FCN) to deal with the large appearance variation. We further propose to employ an auxiliary supervision mechanism to overcome the problem of vanishing gradients when training such a deep network. More importantly, our network can not only output accurate probability maps of histological objects, but also depict clear contours simultaneously for separating clustered object instances, which further boosts the segmentation performance. Our method ranked first in two histological object segmentation challenges, the 2015 MICCAI Gland Segmentation Challenge and the 2015 MICCAI Nuclei Segmentation Challenge. Extensive experiments on these two challenging datasets demonstrate the superior performance of our method, surpassing all the other methods by a significant margin.
Collapse
Affiliation(s)
- Hao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
| | - Xiaojuan Qi
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Lequan Yu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
329
|
Leo P, Lee G, Shih NNC, Elliott R, Feldman MD, Madabhushi A. Evaluating stability of histomorphometric features across scanner and staining variations: prostate cancer diagnosis from whole slide images. J Med Imaging (Bellingham) 2016; 3:047502. [PMID: 27803941 DOI: 10.1117/1.jmi.3.4.047502] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2016] [Accepted: 09/16/2016] [Indexed: 01/04/2023] Open
Abstract
Quantitative histomorphometry (QH) is the process of computerized feature extraction from digitized tissue slide images to predict disease presence, behavior, and outcome. Feature stability between sites may be compromised by laboratory-specific variables including dye batch, slice thickness, and the whole slide scanner used. We present two new measures, preparation-induced instability score and latent instability score, to quantify feature instability across and within datasets. In a use case involving prostate cancer, we examined QH features that may detect cancer on whole slide images. Using our method, we found that five feature families (graph, shape, co-occurring gland tensor, sub-graph, and texture) were different between datasets in 19.7% to 48.6% of comparisons, while the values expected without site variation were 4.2% to 4.6%. Color normalizing all images to a template did not reduce instability. Scanning the same 34 slides on three scanners demonstrated that Haralick features were most substantively affected by scanner variation, being unstable in 62% of comparisons. We found that unstable feature families performed significantly worse in inter- than intrasite classification. Our results suggest that QH features should be evaluated across sites to assess robustness, and that class discriminability alone should not be the benchmark for digital pathology feature selection.
Collapse
Affiliation(s)
- Patrick Leo
- Case Western Reserve University, Department of Biomedical Engineering, 2071 Martin Luther King Jr. Drive, Cleveland, Ohio 44106, United States
| | - George Lee
- Case Western Reserve University, Department of Biomedical Engineering, 2071 Martin Luther King Jr. Drive, Cleveland, Ohio 44106, United States
| | - Natalie N C Shih
- University of Pennsylvania, Department of Pathology, 3400 Spruce Street, Philadelphia, Pennsylvania 19104, United States
| | - Robin Elliott
- Case Western Reserve University, Department of Pathology, 11100 Euclid Avenue, Cleveland, Ohio 44106, United States
| | - Michael D Feldman
- University of Pennsylvania, Department of Pathology, 3400 Spruce Street, Philadelphia, Pennsylvania 19104, United States
| | - Anant Madabhushi
- Case Western Reserve University, Department of Biomedical Engineering, 2071 Martin Luther King Jr. Drive, Cleveland, Ohio 44106, United States
| |
Collapse
|
330
|
Lu C, Xu H, Xu J, Gilmore H, Mandal M, Madabhushi A. Multi-Pass Adaptive Voting for Nuclei Detection in Histopathological Images. Sci Rep 2016; 6:33985. [PMID: 27694950 PMCID: PMC5046183 DOI: 10.1038/srep33985] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2016] [Accepted: 09/02/2016] [Indexed: 12/15/2022] Open
Abstract
Nuclei detection is often a critical initial step in the development of computer-aided diagnosis and prognosis schemes for digital pathology images. While a number of nuclei detection methods have been proposed over the last few years, most of these approaches make idealistic assumptions about the staining quality of the tissue. In this paper, we present a new Multi-Pass Adaptive Voting (MPAV) scheme for nuclei detection which is specifically geared towards images with poor-quality staining and noise on account of tissue preparation artifacts. MPAV utilizes the symmetry of the nuclear boundary and adaptively selects gradients from edge fragments to vote for potential nucleus locations. MPAV was evaluated on three cohorts with different staining methods: Hematoxylin & Eosin, CD31 & Hematoxylin, and Ki-67, in which most of the nuclei were unevenly and imprecisely stained. Across a total of 47 images and nearly 17,700 manually labeled nuclei serving as the ground truth, MPAV achieved superior performance, with an area under the precision-recall curve (AUC) of 0.73. Additionally, MPAV outperformed three state-of-the-art nuclei detection methods: a single-pass voting method, a multi-pass voting method, and a deep learning based method.
Collapse
Affiliation(s)
- Cheng Lu
- College of Computer Science, Shaanxi Normal University, Xi’an, Shaanxi Province, 710119, China
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106-7207, USA
| | - Hongming Xu
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, T6G 2V4, Canada
| | - Jun Xu
- Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing, 210044, China
| | - Hannah Gilmore
- Department of Pathology-Anatomic, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, 44106-7207, USA
| | - Mrinal Mandal
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, T6G 2V4, Canada
| | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106-7207, USA
| |
Collapse
|
331
|
Janowczyk A, Madabhushi A. Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases. J Pathol Inform 2016; 7:29. [PMID: 27563488 PMCID: PMC4977982 DOI: 10.4103/2153-3539.186902] [Citation(s) in RCA: 564] [Impact Index Per Article: 62.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2015] [Accepted: 03/18/2016] [Indexed: 01/14/2023] Open
Abstract
BACKGROUND Deep learning (DL) is a representation learning approach ideally suited for image analysis challenges in digital pathology (DP). The variety of image analysis tasks in the context of DP includes detection and counting (e.g., mitotic events), segmentation (e.g., nuclei), and tissue classification (e.g., cancerous vs. non-cancerous). Unfortunately, issues with slide preparation, variations in staining and scanning across sites and vendor platforms, as well as biological variance, such as the presentation of different grades of disease, make these image analysis tasks particularly challenging. Traditional approaches, wherein domain-specific cues are manually identified and developed into task-specific "handcrafted" features, can require extensive tuning to accommodate these variances. However, DL takes a more domain-agnostic approach, combining feature discovery and implementation to maximally discriminate between the classes of interest. While DL approaches have performed well in a few DP-related image analysis tasks, such as detection and tissue classification, the currently available open source tools and tutorials do not provide guidance on challenges such as (a) selecting appropriate magnification, (b) managing errors in annotations in the training (or learning) dataset, and (c) identifying a suitable training set containing information-rich exemplars. These foundational concepts, which are needed to successfully translate the DL paradigm to DP tasks, are non-trivial for (i) DL experts with minimal digital histology experience, and (ii) DP and image processing experts with minimal DL experience, to derive on their own, thus meriting a dedicated tutorial. AIMS This paper investigates these concepts through seven unique DP tasks as use cases, elucidating techniques needed to produce results comparable, and in many cases superior, to those from state-of-the-art handcrafted-feature-based classification approaches.
RESULTS Specifically, in this tutorial on DL for DP image analysis, we show how an open source framework (Caffe), with a singular network architecture, can be used to address: (a) nuclei segmentation (F-score of 0.83 across 12,000 nuclei), (b) epithelium segmentation (F-score of 0.84 across 1735 regions), (c) tubule segmentation (F-score of 0.83 from 795 tubules), (d) lymphocyte detection (F-score of 0.90 across 3064 lymphocytes), (e) mitosis detection (F-score of 0.53 across 550 mitotic events), (f) invasive ductal carcinoma detection (F-score of 0.7648 on 50k testing patches), and (g) lymphoma classification (classification accuracy of 0.97 across 374 images). CONCLUSION This paper represents the largest comprehensive study of DL approaches in DP to date, with over 1200 DP images used during evaluation. The supplemental online material that accompanies this paper consists of step-by-step instructions for the usage of the supplied source code, trained models, and input data.
Collapse
Affiliation(s)
- Andrew Janowczyk
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
| | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
| |
Collapse
|
332
|
Kather JN, Weis CA, Bianconi F, Melchers SM, Schad LR, Gaiser T, Marx A, Zöllner FG. Multi-class texture analysis in colorectal cancer histology. Sci Rep 2016; 6:27988. [PMID: 27306927 PMCID: PMC4910082 DOI: 10.1038/srep27988] [Citation(s) in RCA: 168] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2016] [Accepted: 05/25/2016] [Indexed: 02/08/2023] Open
Abstract
Automatic recognition of different tissue types in histological images is an essential part of the digital pathology toolbox. Texture analysis is commonly used to address this problem, mainly in the context of estimating the tumour/stroma ratio on histological samples. However, although histological images typically contain more than two tissue types, only a few studies have addressed the multi-class problem. For colorectal cancer, one of the most prevalent tumour types, there are in fact no published results on multi-class texture separation. In this paper we present a new dataset of 5,000 histological images of human colorectal cancer including eight different types of tissue. We used this set to assess the classification performance of a wide range of texture descriptors and classifiers. As a result, we found an optimal classification strategy that markedly outperformed traditional methods, improving the state of the art for tumour-stroma separation from 96.9% to 98.6% accuracy and setting a new standard for multi-class tissue separation (87.4% accuracy for eight classes). We make our dataset of histological images publicly available under a Creative Commons license and encourage other researchers to use it as a benchmark for their studies.
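As a concrete example of the kind of texture descriptor benchmarked in such studies, here is a minimal 8-neighbour local binary pattern (LBP) histogram in NumPy. This is a simplified sketch of the descriptor family, not the exact configuration evaluated in the paper:

```python
import numpy as np

def lbp_histogram(gray):
    # 8-neighbour local binary pattern: threshold each pixel's neighbours
    # against the centre, pack the comparisons into an 8-bit code, and
    # return the normalised 256-bin code histogram as a texture descriptor.
    g = gray.astype(float)
    c = g[1:-1, 1:-1]                       # interior pixels (centres)
    codes = np.zeros_like(c, dtype=int)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (di, dj) in enumerate(shifts):
        nb = g[1 + di:g.shape[0] - 1 + di, 1 + dj:g.shape[1] - 1 + dj]
        codes |= (nb >= c).astype(int) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Descriptors like this, computed per image tile, would then be fed to an ordinary classifier (SVM, random forest, etc.) in a multi-class tissue separation setup.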
Collapse
Affiliation(s)
- Jakob Nikolas Kather
- Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Institute of Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Cleo-Aron Weis
- Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | | | - Susanne M. Melchers
- Department of Dermatology, Venereology and Allergology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | - Lothar R. Schad
- Institute of Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| | - Timo Gaiser
- Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | - Alexander Marx
- Institute of Pathology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
| | - Frank Gerrit Zöllner
- Institute of Computer Assisted Clinical Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
| |
Collapse
|
333
|
Stain Normalization using Sparse AutoEncoders (StaNoSA): Application to digital pathology. Comput Med Imaging Graph 2016; 57:50-61. [PMID: 27373749 DOI: 10.1016/j.compmedimag.2016.05.003] [Citation(s) in RCA: 117] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2015] [Revised: 04/04/2016] [Accepted: 05/13/2016] [Indexed: 12/17/2022]
Abstract
Digital histopathology slides have many sources of variance, and while pathologists typically do not struggle with them, computer aided diagnostic algorithms can perform erratically. This manuscript presents Stain Normalization using Sparse AutoEncoders (StaNoSA) for standardizing the color distributions of a test image to those of a single template image. We show how sparse autoencoders can be leveraged to partition images into tissue sub-types, so that color standardization for each can be performed independently. StaNoSA was validated in three experiments, compared against five other color standardization approaches, and shown to have comparable or superior results.
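StaNoSA's key idea is to standardize color statistics per tissue sub-type rather than globally. The standardization step itself can be sketched as per-channel statistics matching against the template. The sketch below is a deliberately global simplification (no autoencoder partitioning); the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def match_channel_stats(src, tmpl):
    # Shift and scale each colour channel of `src` so that its mean and
    # standard deviation match the corresponding channel of `tmpl`.
    out = np.empty(src.shape, dtype=float)
    for c in range(src.shape[-1]):
        s, t = src[..., c].astype(float), tmpl[..., c].astype(float)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * t.std() + t.mean()
    return out
```

In a StaNoSA-style pipeline this matching would be applied separately within each learned tissue partition, so that, e.g., nuclear and stromal pixels are normalized against their own template statistics.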
Collapse
|
334
|
Sirinukunwattana K, Ahmed Raza SE, Snead DRJ, Cree IA, Rajpoot NM. Locality Sensitive Deep Learning for Detection and Classification of Nuclei in Routine Colon Cancer Histology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1196-1206. [PMID: 26863654 DOI: 10.1109/tmi.2016.2525803] [Citation(s) in RCA: 505] [Impact Index Per Article: 56.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Detection and classification of cell nuclei in histopathology images of cancerous tissue stained with the standard hematoxylin and eosin stain is a challenging task due to cellular heterogeneity. Deep learning approaches have been shown to produce encouraging results on histopathology images in various studies. In this paper, we propose a Spatially Constrained Convolutional Neural Network (SC-CNN) to perform nucleus detection. SC-CNN regresses the likelihood of a pixel being the center of a nucleus, where high probability values are spatially constrained to lie in the vicinity of nucleus centers. For classification of nuclei, we propose a novel Neighboring Ensemble Predictor (NEP) coupled with CNN to more accurately predict the class label of detected cell nuclei. The proposed approaches for detection and classification do not require segmentation of nuclei. We have evaluated them on a large dataset of colorectal adenocarcinoma images, consisting of more than 20,000 annotated nuclei belonging to four different classes. Our results show that the joint detection and classification of the proposed SC-CNN and NEP produces the highest average F1 score as compared to other recently published approaches. Prospectively, the proposed methods could offer benefit to pathology practice in terms of quantitative analysis of tissue constituents in whole-slide images, and potentially lead to a better understanding of cancer.
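The detection step that follows such a spatially constrained probability regression can be sketched as thresholded local-maximum picking on the predicted map. This is a generic post-processing sketch, not the authors' exact procedure, and the parameter names are illustrative:

```python
import numpy as np

def detect_peaks(prob, thresh=0.5, radius=2):
    # Report pixels that exceed a detection threshold and are local maxima
    # of the probability map within a (2*radius+1)^2 neighbourhood.
    H, W = prob.shape
    peaks = []
    for i in range(H):
        for j in range(W):
            if prob[i, j] < thresh:
                continue
            window = prob[max(0, i - radius):i + radius + 1,
                          max(0, j - radius):j + radius + 1]
            if prob[i, j] >= window.max():
                peaks.append((i, j))
    return peaks
```

Each returned coordinate is then treated as a candidate nucleus centre, which a classifier (NEP-style or otherwise) can subsequently label.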
Collapse
|
335
|
Janowczyk A, Doyle S, Gilmore H, Madabhushi A. A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING. IMAGING & VISUALIZATION 2016; 6:270-276. [PMID: 29732269 PMCID: PMC5935259 DOI: 10.1080/21681163.2016.1141063] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Deep learning (DL) has recently been successfully applied to a number of image analysis problems. However, DL approaches tend to be inefficient for segmentation on large image data, such as high-resolution digital pathology slide images. For example, typical breast biopsy images scanned at 40× magnification contain billions of pixels, of which usually only a small percentage belong to the class of interest. For a typical naïve deep learning scheme, parsing through and interrogating all the image pixels would represent hundreds if not thousands of hours of compute time using high performance computing environments. In this paper, we present a resolution adaptive deep hierarchical (RADHicaL) learning scheme wherein DL networks at lower resolutions are leveraged to determine whether higher levels of magnification, and thus computation, are necessary to provide precise results. We evaluate our approach on a nuclear segmentation task with a cohort of 141 ER+ breast cancer images and show that we can reduce computation time on average by about 85%. Expert annotations of 12,000 nuclei across these 141 images were employed for quantitative evaluation of RADHicaL. A head-to-head comparison with a naïve DL approach, operating solely at the highest magnification, yielded the following performance metrics: 0.9407 vs. 0.9854 detection rate, 0.8218 vs. 0.8489 F-score, 0.8061 vs. 0.8364 true positive rate, and 0.8822 vs. 0.8932 positive predictive value. Our performance indices compare favourably with state of the art nuclear segmentation approaches for digital pathology images.
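The coarse-to-fine gating idea, where only ambiguous low-resolution predictions trigger computation at a higher magnification, can be sketched as follows. This is a schematic simplification: `refine` stands in for the expensive higher-magnification network, and all names and thresholds are illustrative:

```python
import numpy as np

def coarse_to_fine(prob_coarse, refine, lo=0.2, hi=0.8):
    # Accept confident coarse-scale predictions outright; re-evaluate only
    # ambiguous pixels with the costly fine-scale model `refine`, which maps
    # a boolean mask to replacement probabilities for the masked pixels.
    out = prob_coarse.astype(float).copy()
    ambiguous = (prob_coarse > lo) & (prob_coarse < hi)
    if ambiguous.any():
        out[ambiguous] = refine(ambiguous)
    return out, ambiguous.mean()
```

The returned fraction of ambiguous pixels is the share of the image that actually pays the fine-scale cost, which is where the reported compute savings come from.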
Collapse
Affiliation(s)
- Andrew Janowczyk
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| | - Scott Doyle
- Pathology & Anatomical Sciences, SUNY Buffalo, Buffalo, NY, USA
| | - Hannah Gilmore
- University Hospitals Case Medical Center, Surgical Pathology, Cleveland, OH, USA
| | - Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
| |
Collapse
|
336
|
Xu J, Luo X, Wang G, Gilmore H, Madabhushi A. A Deep Convolutional Neural Network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomputing 2016; 191:214-223. [PMID: 28154470 PMCID: PMC5283391 DOI: 10.1016/j.neucom.2016.01.034] [Citation(s) in RCA: 224] [Impact Index Per Article: 24.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Epithelial (EP) and stromal (ST) tissues are two tissue types found in histological images. Automated segmentation or classification of EP and ST tissues is important when developing computerized systems for analyzing the tumor microenvironment. In this paper, a Deep Convolutional Neural Network (DCNN) based feature learning approach is presented to automatically segment or classify EP and ST regions from digitized tumor tissue microarrays (TMAs). Current approaches rely on handcrafted feature representations, such as color, texture, and Local Binary Patterns (LBP), to classify the two regions. Compared to handcrafted-feature-based approaches, which involve task-dependent representations, a DCNN is an end-to-end feature extractor that can be learned directly from the raw pixel intensity values of EP and ST tissues in a data-driven fashion. These high-level features contribute to the construction of a supervised classifier for discriminating the two tissue types. In this work we compare DCNN-based models with three handcrafted-feature-based approaches on two different datasets, which consist of 157 Hematoxylin and Eosin (H&E) stained images of breast cancer and 1376 immunohistochemistry (IHC) stained images of colorectal cancer, respectively. The DCNN-based feature learning approach achieved an F1 classification score of 85%, 89%, and 100%, accuracy (ACC) of 84%, 88%, and 100%, and Matthews Correlation Coefficient (MCC) of 86%, 77%, and 100% on the two H&E stained datasets (NKI and VGH) and the IHC stained data, respectively. Our DCNN-based approach was shown to outperform the three handcrafted-feature-based approaches in the classification of EP and ST regions.
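The reported F1, accuracy (ACC), and Matthews Correlation Coefficient (MCC) are all derived from a binary confusion matrix. As a reference point, a minimal NumPy sketch of how the three metrics are computed for an EP-vs-ST labelling (the function name is ours, not the paper's):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """F1, accuracy, and MCC for binary labels (1 = epithelium, 0 = stroma)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    f1 = 2 * tp / (2 * tp + fp + fn)
    acc = (tp + tn) / len(y_true)
    denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return f1, acc, mcc
```

Unlike accuracy, MCC stays informative when one class dominates, which matters for tissue images where EP and ST areas can be highly imbalanced.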
Affiliation(s)
- Jun Xu
- Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Xiaofei Luo
- Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Guanhao Wang
- Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Hannah Gilmore
- Institute for Pathology, University Hospitals Case Medical Center, Case Western Reserve University, OH 44106-7207, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, OH 44106, USA
337
Segmenting Brain Tissues from Chinese Visible Human Dataset by Deep-Learned Features with Stacked Autoencoder. BIOMED RESEARCH INTERNATIONAL 2016; 2016:5284586. [PMID: 27057543 PMCID: PMC4807075 DOI: 10.1155/2016/5284586] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/02/2015] [Revised: 12/18/2015] [Accepted: 12/27/2015] [Indexed: 11/17/2022]
Abstract
Cryosection brain images in the Chinese Visible Human (CVH) dataset contain rich anatomical structure information because of their high resolution (e.g., 0.167 mm per pixel). Fast and accurate segmentation of these images into white matter, gray matter, and cerebrospinal fluid plays a critical role in analyzing and measuring the anatomical structures of the human brain. However, most existing automated segmentation methods are designed for computed tomography or magnetic resonance imaging data, and they may not be applicable to cryosection images because of the differences in imaging. In this paper, we propose a supervised learning-based CVH brain tissue segmentation method that uses stacked autoencoders (SAEs) to automatically learn deep feature representations. Specifically, our model comprises two successive parts: two three-layer SAEs take image patches as input to learn complex anatomical feature representations, and these features are then fed to a Softmax classifier to infer the labels. Experimental results validated the effectiveness of our method and showed that it outperformed four classical brain tissue detection strategies. Furthermore, we reconstructed three-dimensional surfaces of these tissues, demonstrating the method's potential for exploring the high-resolution anatomical structures of the human brain.
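The two-part pipeline described above (stacked encoder layers feeding a Softmax classifier over image patches) can be sketched as a forward pass. The class below is a minimal illustration with random placeholder weights and made-up layer sizes; the paper trains each autoencoder layer-wise and then fine-tunes with labels, which is omitted here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class StackedAE:
    """Two stacked encoder layers followed by a Softmax classifier,
    mirroring the two-part structure described in the abstract."""

    def __init__(self, n_in, h1, h2, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, h1)); self.b1 = np.zeros(h1)
        self.W2 = rng.normal(0, 0.1, (h1, h2));   self.b2 = np.zeros(h2)
        self.Ws = rng.normal(0, 0.1, (h2, n_classes))
        self.bs = np.zeros(n_classes)

    def predict_proba(self, patches):
        """patches: (n, n_in) flattened image patches -> class probabilities."""
        a1 = sigmoid(patches @ self.W1 + self.b1)  # first learned representation
        a2 = sigmoid(a1 @ self.W2 + self.b2)       # deeper representation
        return softmax(a2 @ self.Ws + self.bs)     # e.g. WM / GM / CSF scores
```

Classifying per-patch probabilities over a whole cryosection slice, then stacking the slices, is what would allow the three-dimensional tissue surfaces mentioned at the end to be reconstructed.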