251. Hernandez-Cabronero M, Sanchez V, Blanes I, Auli-Llinas F, Marcellin MW, Serra-Sagrista J. Mosaic-Based Color-Transform Optimization for Lossy and Lossy-to-Lossless Compression of Pathology Whole-Slide Images. IEEE Transactions on Medical Imaging 2019; 38:21-32. PMID: 29994394. DOI: 10.1109/tmi.2018.2852685.
Abstract
The use of whole-slide images (WSIs) in pathology entails stringent storage and transmission requirements because of their huge dimensions. Therefore, image compression is an essential tool to enable efficient access to these data. In particular, color transforms are needed to exploit the very high degree of inter-component correlation and obtain competitive compression performance. Even though the state-of-the-art color transforms remove some redundancy, they disregard important details of the compression algorithm applied after the transform. Therefore, their coding performance is not optimal. We propose an optimization method called mosaic optimization for designing irreversible and reversible color transforms simultaneously optimized for any given WSI and the subsequent compression algorithm. Mosaic optimization is designed to attain reasonable computational complexity and enable continuous scanner operation. Exhaustive experimental results indicate that, for JPEG 2000 at identical compression ratios, the optimized transforms yield images more similar to the original than the other state-of-the-art transforms. Specifically, irreversible optimized transforms outperform the Karhunen-Loève Transform in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to 3.8 dB), and the accuracy of computer-aided nuclei detection tasks (F1 score up to 0.04 higher). In addition, reversible optimized transforms achieve PSNR, HDR-VDP-2, and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB, and 0.025, respectively, when compared with the reversible color transform in lossy-to-lossless compression regimes.
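The Karhunen-Loève Transform (KLT) used as a baseline above can be illustrated in a few lines. This is a hedged numpy sketch on synthetic data, not the paper's mosaic optimization: it shows how a data-dependent color transform decorrelates correlated channels so a codec can concentrate energy in few components.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "pixels": three strongly correlated color channels, shape (n, 3).
base = rng.normal(size=(10000, 1))
rgb = np.hstack([base + 0.05 * rng.normal(size=(10000, 1)) for _ in range(3)])

# KLT: eigenvectors of the channel covariance give the decorrelating transform.
cov = np.cov(rgb, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
klt = (rgb - rgb.mean(axis=0)) @ eigvecs

# After the transform the channel covariance is (numerically) diagonal, and
# most of the signal energy lands in a single component.
cov_after = np.cov(klt, rowvar=False)
off_diag = np.max(np.abs(cov_after - np.diag(np.diag(cov_after))))
energy_share = eigvals.max() / eigvals.sum()
print(off_diag < 1e-8, round(float(energy_share), 3))
```

The paper's contribution is precisely that this covariance-only view ignores the downstream codec; mosaic optimization tunes the transform for the actual compressor instead.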
252. Kong Y, Gao J, Xu Y, Pan Y, Wang J, Liu J. Classification of autism spectrum disorder by combining brain connectivity and deep neural network classifier. Neurocomputing 2019. DOI: 10.1016/j.neucom.2018.04.080.
253. Hipp JD, Johann DJ, Chen Y, Madabhushi A, Monaco J, Cheng J, Rodriguez-Canales J, Stumpe MC, Riedlinger G, Rosenberg AZ, Hanson JC, Kunju LP, Emmert-Buck MR, Balis UJ, Tangrea MA. Computer-Aided Laser Dissection: A Microdissection Workflow Leveraging Image Analysis Tools. J Pathol Inform 2018; 9:45. PMID: 30622835. PMCID: PMC6298131. DOI: 10.4103/jpi.jpi_60_18.
Abstract
Introduction The development and application of new molecular diagnostic assays based on next-generation sequencing and proteomics require improved methodologies for procurement of target cells from histological sections. Laser microdissection can successfully isolate distinct cells from tissue specimens based on visual selection for many research and clinical applications. However, this can be a daunting task when a large number of cells are required for molecular analysis or when a sizeable number of specimens need to be evaluated. Materials and Methods To improve the efficiency of the cellular identification process, we describe a microdissection workflow that leverages recently developed, open-source image analysis algorithms, referred to as computer-aided laser dissection (CALD). CALD permits a computer algorithm to identify the cells of interest and drive the dissection process. Results We describe several "use cases" that demonstrate the integration of image analytic tools (probabilistic pairwise Markov model, ImageJ, spatially invariant vector quantization (SIVQ), and eSeg) onto the ThermoFisher Scientific ArcturusXT and Leica LMD7000 microdissection platforms. Conclusions The CALD methodology demonstrates the integration of image analysis tools with the microdissection workflow and shows the potential impact on clinical and life science applications.
Affiliation(s)
- Jason D Hipp: Laboratory of Pathology, National Cancer Institute, Bethesda, MD, USA; Google Inc., Mountain View, CA, USA
- Donald J Johann: Winthrop P. Rockefeller Cancer Institute, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Yun Chen: Laboratory of Pathology, National Cancer Institute, Bethesda, MD, USA; Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Anant Madabhushi: Department of Biomedical Engineering, Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, Cleveland, OH, USA
- Jerome Cheng: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Jaime Rodriguez-Canales: Laboratory of Pathology, National Cancer Institute, Bethesda, MD, USA; Medimmune, LLC, Gaithersburg, MD, USA
- Greg Riedlinger: Laboratory of Pathology, National Cancer Institute, Bethesda, MD, USA; Division of Translational Pathology, Rutgers Cancer Institute of New Jersey, New Brunswick, NJ, USA
- Avi Z Rosenberg: Laboratory of Pathology, National Cancer Institute, Bethesda, MD, USA; Department of Pathology, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Jeffrey C Hanson: Laboratory of Pathology, National Cancer Institute, Bethesda, MD, USA
- Lakshmi P Kunju: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Michael R Emmert-Buck: Laboratory of Pathology, National Cancer Institute, Bethesda, MD, USA; Avoneaux Medical Institute, LLC, Baltimore, MD, USA
- Ulysses J Balis: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Michael A Tangrea: Laboratory of Pathology, National Cancer Institute, Bethesda, MD, USA; Alvin and Lois Lapidus Cancer Institute, Sinai Hospital of Baltimore, LifeBridge Health, Baltimore, MD, USA
254. Fu B, Liu P, Lin J, Deng L, Hu K, Zheng H. Predicting Invasive Disease-Free Survival for Early-stage Breast Cancer Patients Using Follow-up Clinical Data. IEEE Trans Biomed Eng 2018; 66:2053-2064. PMID: 30475709. DOI: 10.1109/tbme.2018.2882867.
Abstract
OBJECTIVE Breast cancer threatens Chinese women with high morbidity and mortality, and the lack of robust prognosis models makes it difficult for doctors to prepare an appropriate treatment plan that may prolong patient survival time. We propose MP4Ei, a prognosis model framework that predicts Invasive Disease-Free Survival (iDFS) for early-stage breast cancer patients. MP4Ei performs well in predicting relapse or metastasis within 5 years for Chinese breast cancer patients. METHODS MP4Ei is built on statistical theory and a gradient boosting decision tree framework. A total of 5246 patients with early-stage (stage I-III) breast cancer, derived from the Clinical Research Center for Breast (CRCB) in West China Hospital of Sichuan University, were eligible for inclusion. Stratified feature selection, combining statistical and ensemble methods, is adopted to select 23 of the 89 patient features covering demographics, diagnosis, pathology, and therapy. The 23 selected features are then fed as input variables into the XGBoost algorithm, with Bayesian parameter tuning and cross-validation, to find the optimal simplified model for 5-year iDFS prediction. RESULTS On the eligible data, with 4196 patients (80%) for training and 1050 patients (20%) for testing, MP4Ei achieves an AUC of 0.8451, a significant advantage over competing models (p < 0.05). CONCLUSION This work demonstrates a complete iDFS prognosis model with very competitive performance. SIGNIFICANCE The proposed method could be used in clinical practice to predict patients' prognosis and future survival state, which may help doctors make treatment plans.
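The train/test protocol in the abstract (80/20 split, cross-validated gradient boosting, AUC evaluation) can be sketched as follows. This is a hedged illustration, not MP4Ei itself: scikit-learn's GradientBoostingClassifier stands in for XGBoost, and synthetic data stands in for the non-public CRCB cohort.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the cohort: 23 selected features, binary
# 5-year iDFS event label (all numbers illustrative).
X, y = make_classification(n_samples=1000, n_features=23,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Gradient boosting with cross-validation on the training split.
model = GradientBoostingClassifier(random_state=0)
cv_auc = cross_val_score(model, X_tr, y_tr, cv=5, scoring="roc_auc")

# Final fit and held-out AUC, as in the paper's 80/20 evaluation.
model.fit(X_tr, y_tr)
test_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(float(cv_auc.mean()), 3), round(float(test_auc), 3))
```

The paper additionally tunes hyperparameters with Bayesian optimization, which would replace the default settings used here.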
255. Zurowietz M, Langenkämper D, Hosking B, Ruhl HA, Nattkemper TW. MAIA: a machine learning assisted image annotation method for environmental monitoring and exploration. PLoS One 2018; 13:e0207498. PMID: 30444917. PMCID: PMC6239313. DOI: 10.1371/journal.pone.0207498.
Abstract
Digital imaging has become one of the most important techniques in environmental monitoring and exploration. In the case of the marine environment, mobile platforms such as autonomous underwater vehicles (AUVs) are now equipped with high-resolution cameras to capture huge collections of images from the seabed. However, the timely evaluation of all these images presents a bottleneck problem as tens of thousands or more images can be collected during a single dive. This makes computational support for marine image analysis essential. Computer-aided analysis of environmental images (and marine images in particular) with machine learning algorithms is promising, but challenging and different from other imaging domains because training data and class labels cannot be collected as efficiently and comprehensively as in other areas. In this paper, we present Machine learning Assisted Image Annotation (MAIA), a new image annotation method for environmental monitoring and exploration that overcomes the obstacle of missing training data. The method uses a combination of autoencoder networks and Mask Region-based Convolutional Neural Network (Mask R-CNN), which allows human observers to annotate large image collections much faster than before. We evaluated the method with three marine image datasets featuring different types of background, imaging equipment and object classes. Using MAIA, we were able to annotate objects of interest with an average recall of 84.1%, more than twice as fast as "traditional" annotation methods, which are purely based on software-supported direct visual inspection and manual annotation. The speed gain increases proportionally with the size of a dataset. The MAIA approach represents a substantial improvement on the path to greater efficiency in the annotation of large benthic image collections.
Affiliation(s)
- Martin Zurowietz: Biodata Mining Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Daniel Langenkämper: Biodata Mining Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Brett Hosking: National Oceanography Centre, University of Southampton Waterfront Campus, Southampton, United Kingdom
- Henry A. Ruhl: National Oceanography Centre, University of Southampton Waterfront Campus, Southampton, United Kingdom; Monterey Bay Aquarium Research Institute, Moss Landing, California, United States of America
- Tim W. Nattkemper: Biodata Mining Group, Faculty of Technology, Bielefeld University, Bielefeld, Germany
256. Automated soil prediction using bag-of-features and chaotic spider monkey optimization algorithm. Evolutionary Intelligence 2018. DOI: 10.1007/s12065-018-0186-9.
257. Song TH, Sanchez V, ElDaly H, Rajpoot NM. Simultaneous Cell Detection and Classification in Bone Marrow Histology Images. IEEE J Biomed Health Inform 2018; 23:1469-1476. PMID: 30387756. DOI: 10.1109/jbhi.2018.2878945.
Abstract
Recently, deep learning frameworks have been shown to be successful and efficient in processing digital histology images for various detection and classification tasks. Among these tasks, cell detection and classification are key steps in many computer-assisted diagnosis systems. Traditionally, cell detection and classification are performed as a sequence of two consecutive steps using two separate deep learning networks: one for detection and the other for classification. This strategy inevitably increases the computational complexity of the training stage. In this paper, we propose a synchronized deep autoencoder network for simultaneous detection and classification of cells in bone marrow histology images. The proposed network uses a single architecture to detect the positions of cells and classify the detected cells, in parallel. It uses a curve-support Gaussian model to compute probability maps that allow detecting irregularly shaped cells precisely. Moreover, the network includes a novel neighborhood selection mechanism to boost the classification accuracy. We show that the performance of the proposed network is superior to that of traditional deep learning detection methods and very competitive compared to traditional deep learning classification networks. A runtime comparison also shows that our network requires less training time.
258. Guo J, Yang K, Liu H, Yin C, Xiang J, Li H, Ji R, Gao Y. A Stacked Sparse Autoencoder-Based Detector for Automatic Identification of Neuromagnetic High Frequency Oscillations in Epilepsy. IEEE Transactions on Medical Imaging 2018; 37:2474-2482. PMID: 29994761. PMCID: PMC6299455. DOI: 10.1109/tmi.2018.2836965.
Abstract
High-frequency oscillations (HFOs) are spontaneous magnetoencephalography (MEG) patterns that have been acknowledged as a putative biomarker to identify epileptic foci. Correct detection of HFOs in MEG signals is crucial for accurate and timely clinical evaluation. Since visual examination of HFOs is time-consuming, error-prone, and suffers from poor inter-reviewer reliability, an automatic HFO detector is highly desirable in clinical practice. However, the existing approaches for HFO detection may not be applicable for MEG signals with noisy background activity. Therefore, we employ the stacked sparse autoencoder (SSAE) and propose an SSAE-based MEG HFOs (SMO) detector to facilitate the clinical detection of HFOs. To the best of our knowledge, this is the first attempt to conduct HFO detection in MEG using deep learning methods. After configuration optimization, our proposed SMO detector outperformed other classic peer models, achieving 89.9% accuracy, 88.2% sensitivity, and 91.6% specificity. Furthermore, we have tested the performance consistency of our model using various validation schemes. The distribution of performance metrics demonstrates that our model can achieve steady performance.
259. Xiao Y, Wu J, Lin Z, Zhao X. A semi-supervised deep learning method based on stacked sparse auto-encoder for cancer prediction using RNA-seq data. Computer Methods and Programs in Biomedicine 2018; 166:99-105. PMID: 30415723. DOI: 10.1016/j.cmpb.2018.10.004.
Abstract
BACKGROUND AND OBJECTIVE Cancer has become a complex health problem due to its high mortality. Over the past few decades, with the rapid development of high-throughput sequencing technology and the application of various machine learning methods, remarkable progress in cancer research has been made based on gene expression data. At the same time, a growing amount of high-dimensional data has been generated, such as RNA-seq data, which calls for superior machine learning methods able to handle massive data effectively in order to make accurate treatment decisions. METHODS In this paper, we present a semi-supervised deep learning strategy, stacked sparse auto-encoder (SSAE) based classification, for cancer prediction using RNA-seq data. The proposed SSAE based method employs greedy layer-wise pre-training and a sparsity penalty term to help capture and extract important information from the high-dimensional data and then classify the samples. RESULTS We tested the proposed SSAE model on three public RNA-seq data sets of three types of cancers and compared the prediction performance with several commonly-used classification methods. The results indicate that our approach outperforms the other methods for all three cancer data sets on various metrics. CONCLUSIONS The proposed SSAE based semi-supervised deep learning model shows a promising ability to process high-dimensional gene expression data and proves to be effective and accurate for cancer prediction.
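The greedy layer-wise pre-training that several of the SSAE entries in this list rely on can be sketched compactly. This is a hedged stand-in, not the authors' code: each layer is an autoencoder trained to reconstruct its input (scikit-learn's MLPRegressor, with an L2 penalty standing in for the sparsity term), and the breast-cancer dataset stands in for RNA-seq profiles.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def pretrain_layer(X, n_hidden, seed=0):
    """Greedily pre-train one autoencoder layer (learn X -> X) and
    return an encoder function built from its learned first layer."""
    ae = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="relu",
                      alpha=1e-3, max_iter=500, random_state=seed)
    ae.fit(X, X)
    W, b = ae.coefs_[0], ae.intercepts_[0]
    return lambda Z: np.maximum(0.0, Z @ W + b)  # ReLU hidden encoding

# Small public dataset as a stand-in for high-dimensional expression data.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stack two pre-trained encoding layers (30 -> 16 -> 8), then classify.
enc1 = pretrain_layer(X_tr, 16, seed=0)
enc2 = pretrain_layer(enc1(X_tr), 8, seed=1)
clf = LogisticRegression(max_iter=1000).fit(enc2(enc1(X_tr)), y_tr)
acc = clf.score(enc2(enc1(X_te)), y_te)
print(round(float(acc), 3))
```

A full SSAE would fine-tune all layers jointly after pre-training and add an explicit KL-divergence sparsity penalty; both are omitted here for brevity.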
Affiliation(s)
- Yawen Xiao: Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing of Ministry of Education, Shanghai 200240, China
- Jun Wu: The Center for Bioinformatics and Computational Biology, Shanghai Key Laboratory of Regulatory Biology, the Institute of Biomedical Sciences and School of Life Sciences, East China Normal University, Shanghai 200241, China
- Zongli Lin: Charles L. Brown Department of Electrical and Computer Engineering, University of Virginia, P.O. Box 400743, Charlottesville, VA 22904-4743, USA
- Xiaodong Zhao: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
260. Dekhil O, Hajjdiab H, Shalaby A, Ali MT, Ayinde B, Switala A, Elshamekh A, Ghazal M, Keynton R, Barnes G, El-Baz A. Using resting state functional MRI to build a personalized autism diagnosis system. PLoS One 2018; 13:e0206351. PMID: 30379950. PMCID: PMC6209234. DOI: 10.1371/journal.pone.0206351.
Abstract
Autism spectrum disorder (ASD) is a neuro-developmental disorder associated with social impairments, communication difficulties, and restricted and repetitive behaviors. Yet, no cause of ASD has been confirmed. Studying the functional connectivity of the brain is an emerging technique used in diagnosing and understanding ASD. In this study, we obtained the resting state functional MRI data of 283 subjects from the National Database of Autism Research (NDAR) and built an automated autism diagnosis system from these data. The proposed system is machine learning based: power spectral densities (PSDs) of the time courses corresponding to the spatial activation areas are used as input features, fed to a stacked autoencoder, and a classifier is then built using probabilistic support vector machines. On this dataset, our system achieved around 90% sensitivity, specificity, and accuracy. Moreover, the system's generalization ability was checked over two different prevalence values, one for the general population and the other for the high-risk population, and the system proved to generalize well, especially among the high-risk population. The proposed system generates a full personalized report for each subject that identifies the global differences between ASD and typically developed (TD) subjects, shows the impacted areas and the severity of implications, and supports the diagnosis of autism. From the clinical aspect, this report is considered very valuable as it helps in both predicting and understanding the behavior of autistic subjects. Moreover, it helps in designing a plan for personalized treatment for each individual subject. The proposed work is a step towards achieving personalized medicine in autism, which is the ultimate goal of our group's research efforts in this area.
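The feature pipeline described above (PSDs of time courses fed to a classifier with per-subject probabilities) can be sketched on synthetic signals. This is a hedged illustration: the sampling rate, frequencies, and group sizes are invented, scipy's Welch estimator computes the PSDs, and a probabilistic SVM replaces the autoencoder-plus-SVM stack for brevity.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs = 100.0  # Hz, illustrative sampling rate

def simulate(n, f0):
    """Synthetic 'time courses': a sinusoid at f0 Hz buried in noise."""
    t = np.arange(1024) / fs
    return np.sin(2 * np.pi * f0 * t) + 0.8 * rng.normal(size=(n, 1024))

# Two hypothetical groups whose dominant frequencies differ.
signals = np.vstack([simulate(60, 8.0), simulate(60, 12.0)])
y = np.array([0] * 60 + [1] * 60)

# Welch PSD of each time course is the per-subject feature vector.
_, psd = welch(signals, fs=fs, nperseg=256, axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(psd, y, stratify=y, random_state=0)
clf = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)  # per-subject class probabilities
acc = clf.score(X_te, y_te)
print(round(float(acc), 3))
```

The per-subject probabilities are what make the paper's personalized report possible: each subject gets a calibrated score rather than only a hard label.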
Affiliation(s)
- Omar Dekhil: Bioimaging Lab, Bioengineering Department, University of Louisville, Louisville, KY, United States of America
- Hassan Hajjdiab: Department of Electrical and Computer Engineering, Abu Dhabi University, Abu Dhabi, United Arab Emirates
- Ahmed Shalaby: Bioimaging Lab, Bioengineering Department, University of Louisville, Louisville, KY, United States of America
- Mohamed T. Ali: Bioimaging Lab, Bioengineering Department, University of Louisville, Louisville, KY, United States of America
- Babajide Ayinde: Bioimaging Lab, Bioengineering Department, University of Louisville, Louisville, KY, United States of America
- Andy Switala: Bioimaging Lab, Bioengineering Department, University of Louisville, Louisville, KY, United States of America
- Aliaa Elshamekh: Bioimaging Lab, Bioengineering Department, University of Louisville, Louisville, KY, United States of America
- Mohamed Ghazal: Bioimaging Lab, Bioengineering Department, University of Louisville, Louisville, KY, United States of America; Department of Electrical and Computer Engineering, Abu Dhabi University, Abu Dhabi, United Arab Emirates
- Robert Keynton: Bioengineering Department, University of Louisville, Louisville, KY, United States of America
- Gregory Barnes: Department of Neurology, University of Louisville, Louisville, KY, United States of America
- Ayman El-Baz: Bioimaging Lab, Bioengineering Department, University of Louisville, Louisville, KY, United States of America
261. Albayrak A, Bilgin G. Automatic cell segmentation in histopathological images via two-staged superpixel-based algorithms. Med Biol Eng Comput 2018; 57:653-665. PMID: 30327998. DOI: 10.1007/s11517-018-1906-0.
Abstract
The analysis of cell characteristics from high-resolution digital histopathological images is the standard clinical practice for the diagnosis and prognosis of cancer. Yet, it is a rather exhausting process for pathologists to examine the cellular structures manually in this way. Automating this tedious and time-consuming process is an emerging topic of the histopathological image-processing studies in the literature. This paper presents a two-stage segmentation method to obtain cellular structures in high-dimensional histopathological images of renal cell carcinoma. First, the image is segmented into superpixels with the simple linear iterative clustering (SLIC) method. Then, the obtained superpixels are clustered by state-of-the-art clustering-based segmentation algorithms to find similar superpixels that compose the cell nuclei. Furthermore, global clustering-based segmentation methods and local region-based superpixel segmentation algorithms are compared. The results show that using the superpixel segmentation algorithm as a pre-segmentation step improves the performance of cell segmentation compared to a single clustering-based segmentation algorithm. The true positive ratio (TPR), true negative ratio (TNR), F-measure, precision, and overlap ratio (OR) measures are used to evaluate segmentation performance. The computation times of the algorithms are also evaluated and presented in the study. Graphical Abstract: The visual flowchart of the proposed automatic cell segmentation in histopathological images via two-staged superpixel-based algorithms.
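The two-stage idea (oversegment first, then cluster the segments) can be sketched without histology data. This is a deliberately crude stand-in: fixed 8x8 grid blocks replace SLIC superpixels, each block is summarized by its mean intensity, and a second-stage KMeans groups the blocks into "nuclei" versus "background"; the image and all thresholds are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic grayscale "tissue": dark nucleus-like blobs on a bright background.
img = 0.8 + 0.05 * rng.normal(size=(64, 64))
img[10:20, 10:20] = 0.2
img[40:52, 30:42] = 0.25

# Stage 1 stand-in: carve the image into 64 blocks of 8x8 pixels and
# summarize each block by its mean intensity (one feature per "superpixel").
blocks = img.reshape(8, 8, 8, 8).swapaxes(1, 2).reshape(64, -1)
feats = blocks.mean(axis=1, keepdims=True)

# Stage 2: cluster the block features; the darker cluster is taken as nuclei.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(feats)
dark_cluster = int(np.argmin(km.cluster_centers_))
mask = (km.labels_ == dark_cluster).reshape(8, 8)
print(int(mask.sum()), "of 64 blocks labeled as nuclei")
```

Real SLIC additionally respects image boundaries when forming the segments, which is exactly why the paper finds the superpixel pre-segmentation stage helps over clustering pixels directly.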
Affiliation(s)
- Abdulkadir Albayrak: Department of Computer Engineering, Yildiz Technical University (YTU), 34220, Istanbul, Turkey; Signal and Image Processing Lab (SIMPLAB), YTU, 34220, Istanbul, Turkey
- Gokhan Bilgin: Department of Computer Engineering, Yildiz Technical University (YTU), 34220, Istanbul, Turkey; Signal and Image Processing Lab (SIMPLAB), YTU, 34220, Istanbul, Turkey
262. Xing F, Xie Y, Su H, Liu F, Yang L. Deep Learning in Microscopy Image Analysis: A Survey. IEEE Transactions on Neural Networks and Learning Systems 2018; 29:4550-4568. PMID: 29989994. DOI: 10.1109/tnnls.2017.2766168.
Abstract
Computerized microscopy image analysis plays an important role in computer aided diagnosis and prognosis. Machine learning techniques have powered many aspects of medical investigation and clinical practice. Recently, deep learning is emerging as a leading machine learning tool in computer vision and has attracted considerable attention in biomedical image analysis. In this paper, we provide a snapshot of this fast-growing field, specifically for microscopy image analysis. We briefly introduce the popular deep neural networks and summarize current deep learning achievements in various tasks, such as detection, segmentation, and classification in microscopy image analysis. In particular, we explain the architectures and the principles of convolutional neural networks, fully convolutional networks, recurrent neural networks, stacked autoencoders, and deep belief networks, and interpret their formulations or modelings for specific tasks on various microscopy images. In addition, we discuss the open challenges and the potential trends of future research in microscopy image analysis using deep learning.
263. Höfener H, Homeyer A, Weiss N, Molin J, Lundström CF, Hahn HK. Deep learning nuclei detection: A simple approach can deliver state-of-the-art results. Comput Med Imaging Graph 2018; 70:43-52. PMID: 30286333. DOI: 10.1016/j.compmedimag.2018.08.010.
Abstract
BACKGROUND Deep convolutional neural networks have become a widespread tool for the detection of nuclei in histopathology images. Many implementations share a basic approach that includes generation of an intermediate map indicating the presence of a nucleus center, which we refer to as PMap. Nevertheless, these implementations often still differ in several parameters, resulting in different detection qualities. METHODS We identified several essential parameters and configured the basic PMap approach using combinations of them. We thoroughly evaluated and compared various configurations on multiple datasets with respect to detection quality, efficiency and training effort. RESULTS Post-processing of the PMap was found to have the largest impact on detection quality. Also, two different network architectures were identified that improve either detection quality or runtime performance. The best-performing configuration yields f1-measures of 0.816 on H&E stained images of colorectal adenocarcinomas and 0.819 on Ki-67 stained images of breast tumor tissue. On average, it was fully trained in less than 15,000 iterations and processed 4.15 megapixels per second at prediction time. CONCLUSIONS The basic PMap approach is greatly affected by certain parameters. Our evaluation provides guidance on their impact and best settings. When configured properly, this simple and efficient approach can yield detection quality equal to that of more complex and time-consuming state-of-the-art approaches.
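The PMap post-processing that the study finds most influential typically amounts to peak picking on the probability map. A hedged scipy sketch (synthetic map, illustrative window size and threshold, not the paper's tuned settings): threshold the map, then keep points that are local maxima of their neighborhood.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

# Synthetic probability map with two "nucleus center" blobs.
pmap = np.zeros((64, 64))
pmap[20, 20] = pmap[45, 40] = 1.0
pmap = gaussian_filter(pmap, sigma=3)

# Post-processing: keep points that both exceed a threshold and equal the
# maximum of their 9x9 neighborhood (i.e., are local peaks).
threshold = 0.5 * pmap.max()
peaks = (pmap == maximum_filter(pmap, size=9)) & (pmap > threshold)
centers = np.argwhere(peaks)
print(centers)
```

In practice the window size and threshold are exactly the kind of parameters whose settings the paper's evaluation is meant to guide; non-maximum suppression radius in particular trades merged detections against duplicates.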
Affiliation(s)
- André Homeyer: Fraunhofer MEVIS, Am Fallturm 1, 28359, Bremen, Germany
- Nick Weiss: Fraunhofer MEVIS, Am Fallturm 1, 28359, Bremen, Germany
- Jesper Molin: Sectra AB, Teknikringen 20, 58330, Linköping, Sweden
- Claes F Lundström: Sectra AB, Teknikringen 20, 58330, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, 58183, Linköping, Sweden
- Horst K Hahn: Fraunhofer MEVIS, Am Fallturm 1, 28359, Bremen, Germany; Jacobs University, Campus Ring 1, 28759, Bremen, Germany
264. Classification of lung adenocarcinoma transcriptome subtypes from pathological images using deep convolutional networks. Int J Comput Assist Radiol Surg 2018; 13:1905-1913. PMID: 30159833. PMCID: PMC6223755. DOI: 10.1007/s11548-018-1835-2.
Abstract
Purpose Convolutional neural networks have rapidly become popular for image recognition and image analysis because of their powerful potential. In this paper, we developed a method for classifying subtypes of lung adenocarcinoma from pathological images using neural networks that evaluate phenotypic features over a wider area, so as to take cellular distributions into account. Methods To recognize tumor types, we need not only detailed features of cells but also the statistical distribution of the different cell types. Variants of autoencoders are implemented as building blocks for pre-training the convolutional layers of the networks. A sparse deep autoencoder that minimizes local information entropy on the encoding layer is then proposed and applied to images of size 2048×2048. We applied this model for feature extraction from pathological images of lung adenocarcinoma, which comprises three transcriptome subtypes previously defined by The Cancer Genome Atlas network. Since tumor tissue is composed of heterogeneous cell populations, recognition of tumor transcriptome subtypes requires more information than local cell patterns. The parameters extracted using this approach are then used in multiple reduction stages to perform classification on larger images. Results We demonstrated that these networks successfully recognize morphological features of lung adenocarcinoma. We also performed classification and reconstruction experiments to compare the outputs of the variants. The results showed that a larger input image covering a sufficient area of the tissue is required to recognize transcriptome subtypes. The sparse autoencoder network with 2048×2048 input provides a 98.9% classification accuracy. Conclusion This study shows the potential of autoencoders as a feature extraction paradigm and paves the way for a whole-slide image analysis tool to predict molecular subtypes of tumors from pathological features.
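The paper's entropy-minimizing sparse autoencoder is not reproduced here, but the generic sparse-autoencoder objective it builds on can be sketched with NumPy. All layer sizes and weights below are illustrative assumptions, and a standard KL-divergence sparsity penalty stands in for the paper's local-entropy term:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_loss(x, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Reconstruction error plus a KL sparsity penalty on mean hidden activations."""
    h = sigmoid(x @ W1 + b1)        # encoder
    x_hat = sigmoid(h @ W2 + b2)    # decoder
    recon = np.mean((x - x_hat) ** 2)
    rho_hat = h.mean(axis=0)        # average activation of each hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl

# Tiny stand-in dimensions (the paper uses 2048x2048 image inputs)
n_in, n_hidden = 64, 16
x = rng.random((32, n_in))
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, n_in)); b2 = np.zeros(n_in)
loss = sparse_ae_loss(x, W1, b1, W2, b2)
```

Minimizing this loss by gradient descent pushes hidden activations toward the target sparsity `rho` while preserving reconstruction quality; the paper's variant instead penalizes local information entropy on the encoding layer.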
|
265
|
Abraham B, Nair MS. Computer-aided classification of prostate cancer grade groups from MRI images using texture features and stacked sparse autoencoder. Comput Med Imaging Graph 2018; 69:60-68. [PMID: 30205334 DOI: 10.1016/j.compmedimag.2018.08.006] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2017] [Revised: 06/06/2018] [Accepted: 08/22/2018] [Indexed: 12/26/2022]
Abstract
A novel method to determine the Grade Group (GG) in prostate cancer (PCa) using multi-parametric magnetic resonance imaging (mpMRI) biomarkers is investigated in this paper. In this method, high-level features are extracted from hand-crafted texture features using a deep network of stacked sparse autoencoders (SSAE) and classified using a softmax classifier (SMC). Transaxial T2-Weighted (T2W), Apparent Diffusion Coefficient (ADC), and high B-Value Diffusion-Weighted (BVAL) images from the PROSTATEx-2 2017 challenge dataset are used in this technique. The method was evaluated on the challenge dataset, composed of a training set of 112 lesions and a test set of 70 lesions. It achieved a quadratic-weighted kappa score of 0.2772 on the test dataset of the challenge and reached a Positive Predictive Value (PPV) of 80% in predicting PCa with GG > 1. The method achieved first place in the challenge, winning over 43 methods submitted by 21 groups. A 3-fold cross-validation using the training data of the challenge was further performed, in which the method achieved a quadratic-weighted kappa score of 0.2326 and a PPV of 80.26% in predicting PCa with GG > 1. Even though the training dataset is highly imbalanced, the method was able to achieve a fair kappa score. As one of the pioneering methods attempting to classify prostate cancer into five grade groups from MRI images, it could serve as a base method for further investigation and improvement.
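The stacked-encoder-plus-softmax pipeline described above can be sketched as follows. Dimensions and random weights are illustrative assumptions; in the real method each encoder layer is pre-trained on the hand-crafted texture features before the softmax classifier is attached:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n_feat, h1, h2, n_classes = 40, 24, 12, 5   # 5 prostate-cancer grade groups
X = rng.random((8, n_feat))                 # stand-in for texture feature vectors
W1 = rng.normal(0.0, 0.1, (n_feat, h1))     # first (pre-trained) encoder
W2 = rng.normal(0.0, 0.1, (h1, h2))         # second (pre-trained) encoder
Wc = rng.normal(0.0, 0.1, (h2, n_classes))  # softmax classifier weights

H = sigmoid(sigmoid(X @ W1) @ W2)  # high-level features from the stacked encoders
P = softmax(H @ Wc)                # grade-group probabilities per lesion
pred = P.argmax(axis=1)
```

Each row of `P` is a probability distribution over the five grade groups, and the predicted group is its argmax.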
Affiliation(s)
- Bejoy Abraham
- Department of Computer Science, University of Kerala, Kariavattom, Thiruvananthapuram 695581, Kerala, India.
- Madhu S Nair
- Department of Computer Science, Cochin University of Science and Technology, Kochi 682022, Kerala, India
|
266
|
Hu C, Wu XJ, Shu ZQ. Discriminative Feature Learning via Sparse Autoencoders with Label Consistency Constraints. Neural Process Lett 2018. [DOI: 10.1007/s11063-018-9898-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
267
|
Deep Learning and Medical Diagnosis: A Review of Literature. MULTIMODAL TECHNOLOGIES AND INTERACTION 2018. [DOI: 10.3390/mti2030047] [Citation(s) in RCA: 166] [Impact Index Per Article: 23.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
In this review, the application of deep learning for medical diagnosis is addressed. A thorough analysis of various scientific articles on the application of deep neural networks in the medical field has been conducted. More than 300 research articles were obtained, and after several selection steps, 46 articles were presented in more detail. The results indicate that convolutional neural networks (CNN) are the most widely represented architecture in deep learning for medical image analysis. Furthermore, based on the findings of this article, it can be noted that the application of deep learning technology is widespread, but the majority of applications are focused on bioinformatics, medical diagnosis, and other similar fields.
|
268
|
Cerveri P, Belfatto A, Baroni G, Manzotti A. Stacked sparse autoencoder networks and statistical shape models for automatic staging of distal femur trochlear dysplasia. Int J Med Robot 2018; 14:e1947. [PMID: 30073759 DOI: 10.1002/rcs.1947] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Revised: 06/13/2018] [Accepted: 07/10/2018] [Indexed: 01/17/2023]
Abstract
BACKGROUND The quantitative morphological analysis of the trochlear region in the distal femur and the precise staging of a potential dysplastic condition are key to personalized treatment options for the patellofemoral joint. In this paper, we integrated statistical shape models (SSM), which represent the individual morphology of the trochlea by means of a set of parameters, with stacked sparse autoencoder (SSPA) networks, which exploit those parameters to discriminate among different levels of abnormality. METHODS Two datasets of distal femur reconstructions were obtained from CT scans, including pathologic and physiologic shapes. Both were processed to compute SSM of healthy and dysplastic trochlear regions. The parameters obtained by 3D-3D reconstruction of a femur shape were fed into a trained SSPA classifier to automatically assign membership to one of three clinical conditions: healthy, mild dysplasia, or severe dysplasia of the trochlea. Validation was performed on a subset of the shapes not used in the construction of the SSM, by verifying the occurrence of a correct classification. RESULTS A major finding of the work is that SSM are able to represent anomalies of the trochlear geometry by means of specific eigenmodes of variation and to model the interplay between morphologic features related to dysplasia. Exploiting the patient-specific morphing parameters of the SSM, computed by means of a 3D-3D reconstruction, the SSPA is demonstrated to outperform traditional discriminant analysis in classifying healthy, mild, and severe trochlear dysplasia, providing 99%, 97%, and 98% accuracy for the three classes, respectively (discriminant analysis accuracy: 85%, 89%, and 77%).
CONCLUSIONS From a clinical point of view, this paper supports the increasing role of SSM, integrated with deep learning techniques, in diagnostics and therapy definition as quantitative and advanced visualization tools.
Affiliation(s)
- Pietro Cerveri
- Department of Electronics, Information and Bioengineering, Politecnico di Milano University, Milan, Italy
- Antonella Belfatto
- Department of Electronics, Information and Bioengineering, Politecnico di Milano University, Milan, Italy
- Guido Baroni
- Department of Electronics, Information and Bioengineering, Politecnico di Milano University, Milan, Italy
- Alfonso Manzotti
- Orthopaedic and Trauma Department, "Luigi Sacco" Hospital, ASST FBF-Sacco, Milan, Italy
|
269
|
Guo Y, Jiao L, Wang S, Wang S, Liu F. Fuzzy Sparse Autoencoder Framework for Single Image Per Person Face Recognition. IEEE TRANSACTIONS ON CYBERNETICS 2018; 48:2402-2415. [PMID: 28858822 DOI: 10.1109/tcyb.2017.2739338] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
The issue of single sample per person (SSPP) face recognition has attracted increasing attention in recent years. Patch/local-based algorithms are among the most popular approaches to the problem, as patch/local features are robust to face image variations. However, patch/local-based algorithms ignore global discriminative information, which is crucial for recognizing the nondiscriminative regions of face images. To exploit the advantages of both local and global information, a novel two-layer local-to-global feature learning framework is proposed for SSPP face recognition. In the first layer, objective-oriented local features are learned by a patch-based fuzzy rough set feature selection strategy. The obtained local features are not only robust to image variations but also preserve the discrimination ability of the original patches. Global structural information is extracted from the local features by a sparse autoencoder in the second layer, which reduces the negative effect of nondiscriminative regions. Besides, the proposed framework is a shallow network, which avoids the over-fitting caused by using a multilayer network for the SSPP problem. The experimental results show that the proposed local-to-global feature learning framework achieves superior performance compared with other state-of-the-art feature learning algorithms for SSPP face recognition.
|
270
|
Hu B, Tang Y, Chang EIC, Fan Y, Lai M, Xu Y. Unsupervised Learning for Cell-Level Visual Representation in Histopathology Images With Generative Adversarial Networks. IEEE J Biomed Health Inform 2018; 23:1316-1328. [PMID: 29994411 DOI: 10.1109/jbhi.2018.2852639] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The visual attributes of cells, such as nuclear morphology and chromatin openness, are critical for histopathology image analysis. By learning cell-level visual representations, we can obtain a rich mix of features that are highly reusable for various tasks, such as cell-level classification, nuclei segmentation, and cell counting. In this paper, we propose a unified generative adversarial network architecture with a new formulation of the loss to perform robust cell-level visual representation learning in an unsupervised setting. Our model is not only label-free and easily trained but also capable of cell-level unsupervised classification with interpretable visualization, which achieves promising results in the unsupervised classification of bone marrow cellular components. Based on the proposed cell-level visual representation learning, we further develop a pipeline that exploits the varieties of cellular elements to perform histopathology image classification, the advantages of which are demonstrated on bone marrow datasets.
|
271
|
Xiang L, Wang Q, Nie D, Zhang L, Jin X, Qiao Y, Shen D. Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image. Med Image Anal 2018; 47:31-44. [PMID: 29674235 PMCID: PMC6410565 DOI: 10.1016/j.media.2018.03.011] [Citation(s) in RCA: 112] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2017] [Revised: 03/17/2018] [Accepted: 03/26/2018] [Indexed: 02/01/2023]
Abstract
Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of computed tomography (CT) images from T1-weighted magnetic resonance (MR) images is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. We can further compute a tentative CT synthesis from the midway of the flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation yields better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we eventually synthesize a final CT image at the end of the DECNN. We validated the proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance in terms of both the perceptual quality of the synthesized CT image and the run-time cost of synthesizing a CT image.
Affiliation(s)
- Lei Xiang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Qian Wang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China.
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Lichi Zhang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Xiyao Jin
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Yu Qiao
- Shenzhen Key Lab of Computer Vision & Pattern Recognition, Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea.
|
272
|
Bhowmik MK, Gogoi UR, Majumdar G, Bhattacharjee D, Datta D, Ghosh AK. Designing of Ground-Truth-Annotated DBT-TU-JU Breast Thermogram Database Toward Early Abnormality Prediction. IEEE J Biomed Health Inform 2018; 22:1238-1249. [DOI: 10.1109/jbhi.2017.2740500] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
273
|
Deep learning in mammography and breast histology, an overview and future trends. Med Image Anal 2018; 47:45-67. [DOI: 10.1016/j.media.2018.03.006] [Citation(s) in RCA: 160] [Impact Index Per Article: 22.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Revised: 01/03/2018] [Accepted: 03/14/2018] [Indexed: 12/20/2022]
|
274
|
Kim J, Hong J, Park H. Prospects of deep learning for medical imaging. PRECISION AND FUTURE MEDICINE 2018; 2:37-52. [DOI: 10.23838/pfm.2018.00030] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2018] [Accepted: 04/14/2018] [Indexed: 08/29/2023] Open
|
275
|
Yang Y, Wu Z, Xu Q, Yan F. Deep Learning Technique-Based Steering of Autonomous Car. INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE AND APPLICATIONS 2018. [DOI: 10.1142/s1469026818500062] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Deep neural networks (DNNs) have many advantages, and autonomous driving has become a popular topic. In this paper, an improved stacked autoencoder based on deep learning techniques is proposed to learn the driving characteristics of an autonomous car. These techniques handle input-data adjustment and mitigate the gradient diffusion (vanishing gradient) problem. A Raspberry Pi and a camera module are mounted on top of the car; the camera module provides the images needed for training the DNN. Training proceeds in two stages. In the pre-training stage, an improved autoencoder is trained by an unsupervised learning mechanism, and a characterization of the track is extracted. In the fine-tuning stage, the whole network is trained on labeled data, so that the model learns the driving characteristics better from the samples. In the experimental stage, the trained model predicts the car's actions in autonomous mode. The experiments exhibit the effectiveness of the proposed model. Compared with a traditional neural network, the improved stacked autoencoder has better generalization ability and faster convergence.
Affiliation(s)
- Yiqin Yang
- School of Mechanical, Electrical & Information Engineering, Shandong University, 180 Wenhua Xilu, Weihai, Shandong 264209, China
- Zhe Wu
- School of Mechanical, Electrical & Information Engineering, Shandong University, 180 Wenhua Xilu, Weihai, Shandong 264209, China
- Qingyang Xu
- School of Mechanical, Electrical & Information Engineering, Shandong University, 180 Wenhua Xilu, Weihai, Shandong 264209, China
- Fabao Yan
- School of Mechanical, Electrical & Information Engineering, Shandong University, 180 Wenhua Xilu, Weihai, Shandong 264209, China
|
276
|
Salvi M, Molinari F. Multi-tissue and multi-scale approach for nuclei segmentation in H&E stained images. Biomed Eng Online 2018; 17:89. [PMID: 29925379 PMCID: PMC6011253 DOI: 10.1186/s12938-018-0518-0] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Accepted: 06/12/2018] [Indexed: 02/04/2023] Open
Abstract
Background Accurate nuclei detection and segmentation in histological images is essential for many clinical purposes. While manual annotations are time-consuming and operator-dependent, fully automated segmentation remains a challenging task due to the high variability of cell intensity, size, and morphology. Most proposed algorithms for the automated segmentation of nuclei were designed for a specific organ or tissue. Results The aim of this study was to develop and validate a fully automated multiscale method, named MANA (Multiscale Adaptive Nuclei Analysis), for nuclei segmentation in different tissues and at different magnifications. MANA was tested on a dataset of H&E stained tissue images with more than 59,000 annotated nuclei, taken from six organs (colon, liver, bone, prostate, adrenal gland, and thyroid) and three magnifications (10×, 20×, 40×). Automatic results were compared with manual segmentations and with three open-source software packages designed for nuclei detection. For each organ, MANA always obtained an F1-score higher than 0.91, with an average F1 of 0.9305 ± 0.0161. The average computational time was about 20 s, independent of the number of nuclei to be detected (in any case, more than 1000), indicating the efficiency of the proposed technique. Conclusion To the best of our knowledge, MANA is the first fully automated multi-scale and multi-tissue algorithm for nuclei detection. Overall, the robustness and versatility of MANA allowed it to achieve, on different organs and magnifications, performance in line with or better than that of state-of-the-art algorithms optimized for single tissues.
Affiliation(s)
- Massimo Salvi
- Biolab, Department of Electronics and Telecomunications, Politecnico di Torino, 10129, Turin, Italy.
- Filippo Molinari
- Biolab, Department of Electronics and Telecomunications, Politecnico di Torino, 10129, Turin, Italy
|
277
|
Histopathological image classification with bilinear convolutional neural networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2017:4050-4053. [PMID: 29060786 DOI: 10.1109/embc.2017.8037745] [Citation(s) in RCA: 49] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Computer-aided quantitative analysis of histopathological images has attracted considerable attention. Stain decomposition of histopathological images is usually recommended to address the co-localization or aliasing of tissue substances. Although the convolutional neural network (CNN) is a popular deep learning algorithm for various histopathological image analysis tasks, it is usually applied directly to histopathological images without stain decomposition. The bilinear CNN (BCNN) is a CNN model for fine-grained classification. A BCNN consists of two CNNs whose convolutional-layer outputs are multiplied via an outer product at each spatial location. In this work, we propose a novel BCNN-based method for the classification of histopathological images, which first decomposes the images into hematoxylin and eosin stain components, and then applies the BCNN to the decomposed images to fuse and improve the feature representation. Experimental results on an eight-class colorectal cancer histopathological image dataset indicate that the proposed BCNN-based algorithm is superior to a traditional CNN.
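The outer-product fusion at the core of the BCNN can be sketched as follows. Random feature maps stand in for the outputs of the two CNN streams (e.g., one per stain component), and the signed square-root plus L2 normalization is the usual BCNN descriptor post-processing:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, C1, C2 = 7, 7, 16, 16
fa = rng.random((H, W, C1))   # stream-A feature map (e.g., hematoxylin image)
fb = rng.random((H, W, C2))   # stream-B feature map (e.g., eosin image)

# Outer product of the two feature vectors at each location, summed over locations
bilinear = np.einsum('ijc,ijd->cd', fa, fb)

# Flatten, then apply signed square-root and L2 normalization
descriptor = bilinear.flatten()
descriptor = np.sign(descriptor) * np.sqrt(np.abs(descriptor))
descriptor = descriptor / np.linalg.norm(descriptor)
```

The resulting C1×C2 descriptor captures pairwise channel interactions between the two streams and is what gets fed to the final classifier.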
|
278
|
Deep Convolutional Autoencoders vs PCA in a Highly-Unbalanced Parkinson’s Disease Dataset: A DaTSCAN Study. ACTA ACUST UNITED AC 2018. [DOI: 10.1007/978-3-319-94120-2_5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
|
279
|
Mahmud M, Kaiser MS, Hussain A, Vassanelli S. Applications of Deep Learning and Reinforcement Learning to Biological Data. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:2063-2079. [PMID: 29771663 DOI: 10.1109/tnnls.2018.2790388] [Citation(s) in RCA: 240] [Impact Index Per Article: 34.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
|
280
|
Chen H, Zhang Y, Chen Y, Zhang J, Zhang W, Sun H, Lv Y, Liao P, Zhou J, Wang G. LEARN: Learned Experts' Assessment-Based Reconstruction Network for Sparse-Data CT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1333-1347. [PMID: 29870363 PMCID: PMC6019143 DOI: 10.1109/tmi.2018.2805692] [Citation(s) in RCA: 184] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Compressive sensing (CS) has proved effective for tomographic reconstruction from sparsely collected data or under-sampled measurements, which is practically important for few-view computed tomography (CT), tomosynthesis, interior tomography, and so on. To perform sparse-data CT, iterative reconstruction commonly uses regularizers in the CS framework. Currently, how to choose the regularization parameters adaptively is a major open problem. In this paper, inspired by machine learning and especially deep learning, we unfold the state-of-the-art "fields of experts"-based iterative reconstruction scheme over a number of iterations for data-driven training, construct a learned experts' assessment-based reconstruction network (LEARN) for sparse-data CT, and demonstrate the feasibility and merits of the LEARN network. The experimental results show that the proposed LEARN network produces superior performance on the well-known Mayo Clinic low-dose challenge dataset relative to several state-of-the-art methods, in terms of artifact reduction, feature preservation, and computational speed. This is consistent with our insight that, because all the regularization terms and parameters used in the iterative reconstruction are learned from the training data, the LEARN network utilizes application-oriented knowledge more effectively and recovers underlying images more favorably than competing algorithms. Also, the number of layers in the LEARN network is only 50, reducing the computational complexity of typical iterative algorithms by orders of magnitude.
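The unrolling idea behind LEARN can be sketched for a plain least-squares data term: a fixed number of gradient steps on ||Ax − y||² becomes the network's "layers". This is a simplified assumption-laden stand-in; the regularizer is omitted, and in the real LEARN network the per-layer step sizes and the "fields of experts" regularization terms are what get trained:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, n_layers = 20, 10, 50          # 50 layers, matching the paper's depth
A = rng.normal(size=(m, n))          # stand-in for the (sparse-view) system matrix
x_true = rng.normal(size=n)
y = A @ x_true                       # simulated measurements

x = np.zeros(n)
steps = np.full(n_layers, 0.01)      # per-layer step sizes (learned in LEARN)
for t in range(n_layers):
    # One unrolled iteration: gradient step on the data-fidelity term
    x = x - steps[t] * A.T @ (A @ x - y)
```

Because every layer is just a differentiable update rule, the step sizes (and, in LEARN, the regularization filters and penalty functions) can be trained end-to-end by backpropagation through the unrolled iterations.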
Affiliation(s)
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Yi Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Junfeng Zhang
- School of Computer and Information Engineering, Henan University of Economics and Law, Zhengzhou 450046, China
- Weihua Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Huaiqiang Sun
- Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, China
- Yang Lv
- Shanghai United Imaging Healthcare Co., Ltd, Shanghai 210807, China.
- Peixi Liao
- Department of Scientific Research and Education, The Sixth People's Hospital of Chengdu, Chengdu 610065, China
- Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu 610065, China
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180 USA
|
281
|
Wang YB, You ZH, Li LP, Huang DS, Zhou FF, Yang S. Improving Prediction of Self-interacting Proteins Using Stacked Sparse Auto-Encoder with PSSM profiles. Int J Biol Sci 2018; 14:983-991. [PMID: 29989064 PMCID: PMC6036743 DOI: 10.7150/ijbs.23817] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2017] [Accepted: 03/29/2018] [Indexed: 12/05/2022] Open
Abstract
Self-interacting proteins (SIPs) play a significant role in the execution of most important molecular processes in cells, such as signal transduction, gene expression regulation, immune response, and enzyme activation. Although traditional experimental methods can generate SIPs data, they are expensive and time-consuming when based only on biological techniques. Therefore, it is important and urgent to develop an efficient computational method for SIPs detection. In this study, we present a novel SIPs identification method based on machine learning, combining the Zernike Moments (ZMs) descriptor on the Position Specific Scoring Matrix (PSSM) with Probabilistic Classification Vector Machines (PCVM) and a Stacked Sparse Auto-Encoder (SSAE). More specifically, the efficient ZMs feature extraction technique is first utilized to generate feature vectors from the PSSM; then, a deep neural network is employed to reduce the feature dimensionality and noise; finally, the Probabilistic Classification Vector Machine performs the classification. The prediction performance of the proposed method is evaluated on S. cerevisiae and human SIPs datasets via cross-validation. The experimental results indicate that the proposed method achieves good accuracies of 92.55% and 97.47%, respectively. To further evaluate the advantage of our scheme for SIPs prediction, we also compared the PCVM classifier with the Support Vector Machine (SVM) and other existing techniques on the same datasets. The comparison reveals that the proposed strategy outperforms other methods and could be a useful tool for identifying SIPs.
Affiliation(s)
- Yan-Bin Wang
- University of Chinese Academy of Sciences, Beijing 100049, China
- Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Science, Urumqi 830011, China
- Zhu-Hong You
- Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Science, Urumqi 830011, China
- Li-Ping Li
- Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Science, Urumqi 830011, China
- De-Shuang Huang
- Institute of Machine Learning and Systems Biology, School of Electronics and Information Engineering, Tongji University, Caoan Road 4800, Shanghai 201804, China
- Feng-Feng Zhou
- College of Computer Science and Technology, and Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin 130012, China
- Shan Yang
- Xinjiang Technical Institute of Physics and Chemistry, Chinese Academy of Science, Urumqi 830011, China
|
282
|
Saha M, Chakraborty C. Her2Net: A Deep Framework for Semantic Segmentation and Classification of Cell Membranes and Nuclei in Breast Cancer Evaluation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:2189-2200. [PMID: 29432100 DOI: 10.1109/tip.2018.2795742] [Citation(s) in RCA: 64] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
We present an efficient deep learning framework for identifying, segmenting, and classifying cell membranes and nuclei from human epidermal growth factor receptor-2 (HER2)-stained breast cancer images with minimal user intervention. This is a long-standing issue for pathologists because the manual quantification of HER2 is error-prone, costly, and time-consuming. Hence, we propose a deep learning-based HER2 deep neural network (Her2Net) to solve this issue. The convolutional and deconvolutional parts of the proposed Her2Net framework consisted mainly of multiple convolution layers, max-pooling layers, spatial pyramid pooling layers, deconvolution layers, up-sampling layers, and trapezoidal long short-term memory (TLSTM). A fully connected layer and a softmax layer were also used for classification and error estimation. Finally, HER2 scores were calculated based on the classification results. The main contribution of our proposed Her2Net framework includes the implementation of TLSTM and a deep learning framework for cell membrane and nucleus detection, segmentation, and classification and HER2 scoring. Our proposed Her2Net achieved 96.64% precision, 96.79% recall, 96.71% F-score, 93.08% negative predictive value, 98.33% accuracy, and a 6.84% false-positive rate. Our results demonstrate the high accuracy and wide applicability of the proposed Her2Net in the context of HER2 scoring for breast cancer evaluation.
|
283
|
López-Linares K, Aranjuelo N, Kabongo L, Maclair G, Lete N, Ceresa M, García-Familiar A, Macía I, González Ballester MA. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks. Med Image Anal 2018; 46:202-214. [DOI: 10.1016/j.media.2018.03.010] [Citation(s) in RCA: 76] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Revised: 03/19/2018] [Accepted: 03/21/2018] [Indexed: 12/15/2022]
|
284
|
Lv J, Chen K, Yang M, Zhang J, Wang X. Reconstruction of undersampled radial free-breathing 3D abdominal MRI using stacked convolutional auto-encoders. Med Phys 2018; 45:2023-2032. [PMID: 29574939 DOI: 10.1002/mp.12870] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2017] [Revised: 02/21/2018] [Accepted: 03/06/2018] [Indexed: 01/22/2023] Open
Abstract
PURPOSE: Free-breathing three-dimensional (3D) abdominal imaging is a challenging task for MRI, as respiratory motion severely degrades image quality. One of the most promising self-navigation techniques is the 3D golden-angle radial stack-of-stars (SOS) sequence, which has advantages in terms of speed, resolution, and allowing free breathing. However, streaking artifacts are still clearly visible in reconstructed images when undersampling is applied. This work presents a novel reconstruction approach based on a stacked convolutional auto-encoder (SCAE) network to solve this problem. METHODS: Thirty healthy volunteers participated in our experiment. To build the dataset, reference and artifact-affected images were reconstructed using 451 golden-angle spokes and the first 20, 40, or 90 golden-angle spokes, corresponding to acceleration rates of 31.4, 15.7, and 6.98, respectively. In the training step, we trained the SCAE by feeding it patches from artifact-affected images, with the corresponding reference-image patches as targets. In the testing step, we applied the trained SCAE to map each artifact-affected input patch to the corresponding reference-image patch. RESULTS: The SCAE-based reconstructions at acceleration rates of 6.98 and 15.7 show nearly the same quality as the reference images. Additionally, the calculation time is below 1 s. Moreover, the proposed approach preserves important features, such as lesions not present in the training set. CONCLUSION: The preliminary results demonstrate the feasibility of the proposed SCAE-based strategy for correcting the streaking artifacts of undersampled free-breathing 3D abdominal MRI with negligible reconstruction time.
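The patch-wise pipeline described in this abstract (cut the image into patches, map each patch through a trained network, reassemble) can be sketched generically. This is our own illustrative code, not the authors' implementation: the SCAE is replaced by an identity mapping, and all names and parameters are hypothetical.

```python
import numpy as np

def extract_patches(img, patch, stride):
    """Slide a window over a 2-D image and collect (patch, position) pairs."""
    patches, positions = [], []
    h, w = img.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
            positions.append((y, x))
    return np.stack(patches), positions

def assemble_patches(patches, positions, shape, patch):
    """Reassemble processed patches, averaging overlapping regions."""
    out = np.zeros(shape)
    weight = np.zeros(shape)
    for p, (y, x) in zip(patches, positions):
        out[y:y + patch, x:x + patch] += p
        weight[y:y + patch, x:x + patch] += 1.0
    return out / np.maximum(weight, 1.0)

# With an identity "network" standing in for the trained SCAE,
# the round trip exactly reproduces the covered region.
img = np.arange(64.0).reshape(8, 8)
patches, pos = extract_patches(img, patch=4, stride=2)
recon = assemble_patches(patches, pos, img.shape, patch=4)
```

In the actual method, each element of `patches` would be passed through the trained network before reassembly; averaging the overlaps suppresses patch-boundary seams.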
Affiliation(s)
- Jun Lv
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
- Kun Chen
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China
- Ming Yang
- Vusion Tech Ltd. Co, Hefei, 230031, China
- Jue Zhang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China; College of Engineering, Peking University, Beijing, 100871, China
- Xiaoying Wang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China; Department of Radiology, Peking University First Hospital, Beijing, 100034, China
|
285
|
Wang YB, You ZH, Li X, Jiang TH, Chen X, Zhou X, Wang L. Predicting protein-protein interactions from protein sequences by a stacked sparse autoencoder deep neural network. MOLECULAR BIOSYSTEMS 2018; 13:1336-1344. [PMID: 28604872 DOI: 10.1039/c7mb00188f] [Citation(s) in RCA: 68] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Protein-protein interactions (PPIs) play an important role in most biological processes, so correctly and efficiently detecting protein interactions is a problem worth studying. Although high-throughput technologies make it possible to detect large-scale PPIs, they cannot capture all PPIs and may generate unreliable data. To solve this problem, in this study, a novel computational method was proposed to effectively predict PPIs using only protein-sequence information. The present method adopts Zernike moments to extract protein-sequence features from a position-specific scoring matrix (PSSM). These extracted features were then reconstructed using a stacked autoencoder. Finally, a novel probabilistic classification vector machine (PCVM) classifier was employed to predict the protein-protein interactions. When applied to the Yeast and H. pylori PPI datasets, the proposed method achieved average accuracies of 96.60% and 91.19%, respectively. This promising result shows that the proposed method has a better ability to detect PPIs than other detection methods. The proposed method was also applied to predict PPIs in other species, and promising results were obtained. To evaluate the ability of our method, we compared it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The results obtained via multiple experiments prove that our method is powerful, efficient, and feasible, and makes a contribution to proteomics research.
Affiliation(s)
- Yan-Bin Wang
- Xinjiang Technical Institutes of Physics and Chemistry, Chinese Academy of Science, Urumqi 830011, China.
|
286
|
Işil Ç, Yorulmaz M, Solmaz B, Turhan AB, Yurdakul C, Ünlü S, Ozbay E, Koç A. Resolution enhancement of wide-field interferometric microscopy by coupled deep autoencoders. APPLIED OPTICS 2018; 57:2545-2552. [PMID: 29714238 DOI: 10.1364/ao.57.002545] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2018] [Accepted: 03/01/2018] [Indexed: 06/08/2023]
Abstract
Wide-field interferometric microscopy is a highly sensitive, label-free, and low-cost biosensing imaging technique capable of visualizing individual biological nanoparticles such as viral pathogens and exosomes. However, further resolution enhancement is necessary to increase the detection and classification accuracy of subdiffraction-limited nanoparticles. In this study, we propose a deep-learning approach, based on coupled deep autoencoders, to improve the resolution of images of L-shaped nanostructures. During training, our method uses microscope image patches and their corresponding manually generated truth image patches to learn the transformation between them. Following training, the designed network reconstructs denoised and resolution-enhanced image patches for unseen input.
|
287
|
Robertson S, Azizpour H, Smith K, Hartman J. Digital image analysis in breast pathology-from image processing techniques to artificial intelligence. Transl Res 2018; 194:19-35. [PMID: 29175265 DOI: 10.1016/j.trsl.2017.10.010] [Citation(s) in RCA: 128] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 10/28/2017] [Accepted: 10/30/2017] [Indexed: 01/04/2023]
Abstract
Breast cancer is the most common malignant disease in women worldwide. In recent decades, earlier diagnosis and better adjuvant therapy have substantially improved patient outcome. Diagnosis by histopathology has proven to be instrumental to guide breast cancer treatment, but new challenges have emerged as our increasing understanding of cancer over the years has revealed its complex nature. As patient demand for personalized breast cancer therapy grows, we face an urgent need for more precise biomarker assessment and more accurate histopathologic breast cancer diagnosis to make better therapy decisions. The digitization of pathology data has opened the door to faster, more reproducible, and more precise diagnoses through computerized image analysis. Software to assist diagnostic breast pathology through image processing techniques has been around for years, but recent breakthroughs in artificial intelligence (AI) promise to fundamentally change the way we detect and treat breast cancer in the near future. Machine learning, a subfield of AI that applies statistical methods to learn from data, has seen an explosion of interest in recent years because of its ability to recognize patterns in data with less need for human instruction. One technique in particular, known as deep learning, has produced groundbreaking results in many important problems including image classification and speech recognition. In this review, we cover the use of AI and deep learning in diagnostic breast pathology, along with other recent developments in digital image analysis.
Affiliation(s)
- Stephanie Robertson
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Pathology and Cytology, Karolinska University Laboratory, Stockholm, Sweden
- Hossein Azizpour
- School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden; Science for Life Laboratory, Stockholm, Sweden
- Kevin Smith
- School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden; Science for Life Laboratory, Stockholm, Sweden
- Johan Hartman
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Pathology and Cytology, Karolinska University Laboratory, Stockholm, Sweden; Stockholm South General Hospital, Stockholm, Sweden.
|
288
|
|
289
|
Sornapudi S, Stanley RJ, Stoecker WV, Almubarak H, Long R, Antani S, Thoma G, Zuna R, Frazier SR. Deep Learning Nuclei Detection in Digitized Histology Images by Superpixels. J Pathol Inform 2018; 9:5. [PMID: 29619277 PMCID: PMC5869967 DOI: 10.4103/jpi.jpi_74_17] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Accepted: 01/17/2018] [Indexed: 01/08/2023] Open
Abstract
Background: Advances in image analysis and computational techniques have facilitated automatic detection of critical features in histopathology images. Detection of nuclei is critical for classifying squamous epithelium into cervical intraepithelial neoplasia (CIN) grades: normal, CIN1, CIN2, and CIN3. Methods: In this study, a deep learning (DL)-based nuclei segmentation approach is investigated that gathers localized information by generating superpixels with a simple linear iterative clustering algorithm and training a convolutional neural network. Results: The proposed approach was evaluated on a dataset of 133 digitized histology images and achieved an overall nuclei detection (object-based) accuracy of 95.97%, with demonstrated improvement over imaging-based and clustering-based benchmark techniques. Conclusions: The proposed DL-based nuclei segmentation method with superpixel analysis has shown improved segmentation results in comparison to state-of-the-art methods.
Affiliation(s)
- Sudhir Sornapudi
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, USA
- Ronald Joe Stanley
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, USA
- Haidar Almubarak
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, USA
- Rodney Long
- DHHS, Lister Hill National Center for Biomedical Communications for National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- DHHS, Lister Hill National Center for Biomedical Communications for National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- George Thoma
- DHHS, Lister Hill National Center for Biomedical Communications for National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Rosemary Zuna
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Shelliane R Frazier
- Department of Surgical Pathology, University of Missouri Hospitals and Clinics, Columbia, USA
|
290
|
Saha M, Chakraborty C, Racoceanu D. Efficient deep learning model for mitosis detection using breast histopathology images. Comput Med Imaging Graph 2018; 64:29-40. [PMID: 29409716 DOI: 10.1016/j.compmedimag.2017.12.001] [Citation(s) in RCA: 75] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2017] [Revised: 06/28/2017] [Accepted: 12/07/2017] [Indexed: 01/18/2023]
Abstract
Mitosis detection is one of the critical factors in cancer prognosis, carrying significant diagnostic information required for breast cancer grading. It provides vital clues for estimating the aggressiveness and proliferation rate of the tumour. Manual mitosis quantification from whole-slide images (WSIs) is a very labor-intensive and challenging task. The aim of this study is to propose a supervised model to detect mitosis signatures in breast histopathology WSIs. The model has been designed as a deep learning architecture combined with handcrafted features. We used handcrafted features drawn from the previous medical challenges MITOS @ ICPR 2012 and AMIDA-13 and from project (MICO ANR TecSan) expertise. The deep learning architecture mainly consists of five convolution layers, four max-pooling layers, four rectified linear units (ReLU), and two fully connected layers. ReLU is used after each convolution layer as an activation function. A dropout layer is included after the first fully connected layer to avoid overfitting. The handcrafted features mainly consist of morphological, textural, and intensity features. The proposed architecture achieves 92% precision, 88% recall, and a 90% F-score. Prospectively, the proposed model will be very beneficial in routine exams, providing pathologists with an efficient and effective second opinion for breast cancer grading from whole-slide images. Last but not least, this model could lead junior and senior pathologists, as well as medical researchers, to a superior understanding and evaluation of breast cancer stage and genesis.
Affiliation(s)
- Monjoy Saha
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, West Bengal, India
- Chandan Chakraborty
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, West Bengal, India
- Daniel Racoceanu
- Sorbonne University, Paris, France; Pontifical Catholic University of Peru, Lima, Peru.
|
291
|
Liu C, Huang Y, Ozolek JA, Hanna MG, Singh R, Rohde GK. SetSVM: An Approach to Set Classification in Nuclei-Based Cancer Detection. IEEE J Biomed Health Inform 2018; 23:351-361. [PMID: 29994380 DOI: 10.1109/jbhi.2018.2803793] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Due to the importance of nuclear structure in cancer diagnosis, several predictive models have been described for diagnosing a wide variety of cancers based on nuclear morphology. In many computer-aided diagnosis (CAD) systems, cancer detection tasks can generally be formulated as set classification problems, which cannot be solved directly by classifying single instances. In this paper, we propose a novel set classification approach, SetSVM, to build a predictive model that considers a nuclei set as a whole, without specific assumptions. SetSVM is highly discriminative in cancer detection tasks in the sense that it not only optimizes the classifier decision boundary but also transfers discriminative information to set representation learning. During model training, these two processes are unified in the support vector machine (SVM) maximum-margin separation problem. Experimental results show that SetSVM provides significant improvements over five commonly used approaches in cancer detection tasks involving 260 patients in total across three different cancer types, namely thyroid cancer, liver cancer, and melanoma. In addition, we show that SetSVM enables visual interpretation of the discriminative nuclear characteristics representing the nuclei set. These features make SetSVM a potentially practical tool for building accurate and interpretable CAD systems for cancer detection.
|
292
|
Xie Y, Xing F, Shi X, Kong X, Su H, Yang L. Efficient and robust cell detection: A structured regression approach. Med Image Anal 2018; 44:245-254. [PMID: 28797548 PMCID: PMC6051760 DOI: 10.1016/j.media.2017.07.003] [Citation(s) in RCA: 61] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2016] [Revised: 02/22/2017] [Accepted: 07/21/2017] [Indexed: 10/19/2022]
Abstract
Efficient and robust cell detection is a critical prerequisite for many subsequent biomedical image analysis methods and computer-aided diagnosis (CAD). It remains a challenging task due to touching cells, inhomogeneous background noise, and large variations in cell sizes and shapes. In addition, the ever-increasing amount of available data and the high resolution of whole-slide scanned images pose a further demand for efficient processing algorithms. In this paper, we present a novel structured regression model based on a proposed fully residual convolutional neural network for efficient cell detection. For each testing image, our model learns to produce a dense proximity map that exhibits higher responses at locations near cell centers. Our method requires only a few training images with weak annotations (a single dot indicating each cell centroid). We have extensively evaluated our method on four different datasets, covering different microscopy staining methods (e.g., H&E or Ki-67 staining) and image acquisition techniques (e.g., bright-field or phase-contrast imaging). Experimental results demonstrate the superiority of our method over existing state-of-the-art methods in terms of both detection accuracy and running time.
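The dense proximity target built from dot annotations can be made concrete with a short sketch. This is our own illustrative construction, with an assumed cutoff radius and a linear decay rather than the paper's exact formula:

```python
import numpy as np

def proximity_map(shape, centroids, d_max=5.0):
    """Dense regression target from dot annotations: the response peaks at
    each annotated cell centroid, decays linearly with distance to the
    nearest centroid, and is zero beyond d_max pixels (one plausible
    choice of decay; the paper's formula may differ)."""
    ys, xs = np.indices(shape)
    dist = np.full(shape, np.inf)
    for cy, cx in centroids:
        dist = np.minimum(dist, np.hypot(ys - cy, xs - cx))
    return np.where(dist <= d_max, 1.0 - dist / d_max, 0.0)

# Two dot annotations on a 32x32 image produce two local peaks.
target = proximity_map((32, 32), [(8, 8), (20, 25)])
```

A network trained to regress such a map per pixel yields detections by locating local maxima in its output, which is the essence of the structured regression formulation.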
Affiliation(s)
- Yuanpu Xie
- Department of Biomedical Engineering, University of Florida, FL 32611, USA.
- Fuyong Xing
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32611, USA
- Xiaoshuang Shi
- Department of Biomedical Engineering, University of Florida, FL 32611, USA
- Xiangfei Kong
- School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Drive 637553, Singapore
- Hai Su
- Department of Biomedical Engineering, University of Florida, FL 32611, USA
- Lin Yang
- Department of Biomedical Engineering, University of Florida, FL 32611, USA; Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, 32611, USA.
|
293
|
A Computer-Aided Decision Support System for Detection and Localization of Cutaneous Vasculature in Dermoscopy Images Via Deep Feature Learning. J Med Syst 2018; 42:33. [DOI: 10.1007/s10916-017-0885-2] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2017] [Accepted: 12/18/2017] [Indexed: 01/03/2023]
|
294
|
|
295
|
Assessment of Breast Cancer Histology Using Densely Connected Convolutional Networks. LECTURE NOTES IN COMPUTER SCIENCE 2018. [DOI: 10.1007/978-3-319-93000-8_103] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
296
|
Wu Q, Boueiz A, Bozkurt A, Masoomi A, Wang A, DeMeo DL, Weiss ST, Qiu W. Deep Learning Methods for Predicting Disease Status Using Genomic Data. JOURNAL OF BIOMETRICS & BIOSTATISTICS 2018; 9:417. [PMID: 31131151 PMCID: PMC6530791] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Predicting disease status for a complex human disease from genomic data is an important, yet challenging, step in personalized medicine. Among many challenges, the so-called curse of dimensionality leads to unsatisfactory performance from many state-of-the-art machine learning algorithms. A major recent advance in machine learning is the rapid development of deep learning algorithms, which can efficiently extract meaningful features from high-dimensional, complex datasets through a stacked, hierarchical learning process. Deep learning has shown breakthrough performance in several areas, including image recognition, natural language processing, and speech recognition. However, the performance of deep learning in predicting disease status from genomic datasets is still not well studied. In this article, we review the four relevant articles identified through a thorough literature search. All four articles first used autoencoders to project high-dimensional genomic data into a low-dimensional space and then applied state-of-the-art machine learning algorithms to predict disease status from the low-dimensional representations. These deep learning approaches outperformed existing prediction methods, such as prediction based on transcript-wise screening or on principal component analysis. The limitations of the current deep learning approaches and possible improvements are also discussed.
Affiliation(s)
- Qianfan Wu
- Questrom School of Business, Boston University, 595 Commonwealth Avenue, Boston, MA, 02215, USA
- Adel Boueiz
- Channing Division of Network Medicine, Brigham and Women’s Hospital/Harvard Medical School, 181 Longwood Avenue, Boston MA 02115, USA; Department of Medicine, Pulmonary and Critical Care Division, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Alican Bozkurt
- Department of Computer Science, Northeastern University, Boston, MA, USA
- Arya Masoomi
- Department of Computer Science, Northeastern University, Boston, MA, USA
- Dawn L DeMeo
- Channing Division of Network Medicine, Brigham and Women’s Hospital/Harvard Medical School, 181 Longwood Avenue, Boston MA 02115, USA
- Scott T Weiss
- Channing Division of Network Medicine, Brigham and Women’s Hospital/Harvard Medical School, 181 Longwood Avenue, Boston MA 02115, USA
- Weiliang Qiu
- Channing Division of Network Medicine, Brigham and Women’s Hospital/Harvard Medical School, 181 Longwood Avenue, Boston MA 02115, USA. Corresponding author. Tel: 6177325500.
|
297
|
Lai Z, Deng H. Multiscale High-Level Feature Fusion for Histopathological Image Classification. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2017; 2017:7521846. [PMID: 29463986 PMCID: PMC5804108 DOI: 10.1155/2017/7521846] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Accepted: 12/06/2017] [Indexed: 11/21/2022]
Abstract
Histopathological image classification is one of the most important steps in disease diagnosis. We propose a method for multiclass histopathological image classification based on a deep convolutional neural network, referred to as the coding network. Fusing multiscale high-level features yields a better representation of the histopathological image than the coding network alone. The main process is to train a deep convolutional neural network to extract high-level features and to fuse the high-level features of two convolutional layers into a multiscale high-level feature. To gain better performance and higher efficiency, we employ a sparse autoencoder (SAE) and principal component analysis (PCA) to reduce the dimensionality of the multiscale high-level feature. We evaluate the proposed method on a real histopathological image dataset. Our results suggest that the proposed method is effective and outperforms the coding network alone.
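The PCA step used here to compress the fused multiscale feature can be sketched in a few lines. This is an illustrative NumPy version with random stand-in vectors in place of real CNN activations; the function name and dimensions are our own:

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project feature vectors onto their top principal components.

    features: (n_samples, n_dims) array of high-level feature vectors.
    Returns the (n_samples, n_components) low-dimensional representation.
    """
    centered = features - features.mean(axis=0)
    # SVD of the centered data: rows of vt are the principal axes,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 128))   # stand-in for fused multiscale features
reduced = pca_reduce(feats, 16)
```

The SAE stage mentioned in the abstract would play a similar compression role but learn a nonlinear projection instead of this linear one.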
Affiliation(s)
- ZhiFei Lai
- Department of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- HuiFang Deng
- Department of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
|
298
|
Alex V, Vaidhya K, Thirunavukkarasu S, Kesavadas C, Krishnamurthi G. Semisupervised learning using denoising autoencoders for brain lesion detection and segmentation. J Med Imaging (Bellingham) 2017; 4:041311. [PMID: 29285516 DOI: 10.1117/1.jmi.4.4.041311] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Accepted: 11/16/2017] [Indexed: 12/13/2022] Open
Abstract
This work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients (20, 40, or 65). The results show negligible loss in performance even when the SDAE was fine-tuned using only 20 labeled patients. Low-grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high-grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also explores the use of a single-layer DAE, referred to as a novelty detector (ND). The ND was trained to accurately reconstruct non-lesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign distinct error distributions to the various constituents of the glioma, enabling localization. The ND learns the non-lesion brain accurately, as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.
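To make the DAE idea concrete, here is a deliberately minimal, linear, tied-weight denoising autoencoder in NumPy. It is our own toy sketch, not the authors' architecture: 1-D feature vectors stand in for image patches, and all sizes and learning rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "patches": points near a 2-D subspace of R^8, plus noise.
basis = 0.3 * rng.normal(size=(2, 8))
clean = rng.normal(size=(400, 2)) @ basis
noisy = clean + 0.1 * rng.normal(size=clean.shape)

# One-layer denoising autoencoder with tied weights: reconstruct the CLEAN
# patch from the NOISY input, trained by plain gradient descent on MSE.
W = 0.1 * rng.normal(size=(8, 4))   # encoder weights (decoder is W.T)
b, c = np.zeros(4), np.zeros(8)

def forward(x):
    h = x @ W + b                   # linear encoder (nonlinearity omitted)
    return h, h @ W.T + c           # decoder with tied weights

def mse():
    return float(np.mean((forward(noisy)[1] - clean) ** 2))

loss_before = mse()
lr = 0.05
for _ in range(500):
    h, out = forward(noisy)
    err = (out - clean) / len(noisy)
    gW = err.T @ h + noisy.T @ (err @ W)   # decoder + encoder gradient terms
    b -= lr * (err @ W).sum(axis=0)
    c -= lr * err.sum(axis=0)
    W -= lr * gW
loss_after = mse()
```

A novelty detector in the abstract's sense would train the same machinery only on non-lesion patches and flag inputs whose reconstruction error is unusually large.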
Affiliation(s)
- Varghese Alex
- Indian Institute of Technology Madras, Department of Engineering Design, Chennai, India
- Kiran Vaidhya
- Indian Institute of Technology Madras, Department of Engineering Design, Chennai, India
- Chandrasekharan Kesavadas
- Sree Chitra Tirunal Institute for Medical Sciences and Technology, Department of Radiology, Trivandrum, India
|
299
|
Ye F. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data. PLoS One 2017; 12:e0188746. [PMID: 29236718 PMCID: PMC5728507 DOI: 10.1371/journal.pone.0188746] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2017] [Accepted: 10/02/2017] [Indexed: 01/02/2023] Open
Abstract
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks, using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are encoded as real-valued m-dimensional vectors that serve as the individuals of the PSO algorithm. During the search procedure, the PSO algorithm searches for optimal network configurations via particles moving in a finite search space, and the steepest gradient descent algorithm trains the DNN classifier for a few epochs (to find a locally optimal solution) during the population evaluation of PSO. After the optimization, the steepest gradient descent algorithm is run for more epochs with the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capability of the PSO algorithm are exploited to determine a solution close to the global optimum. We conducted several experiments on handwritten-character and biological-activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, used to construct an ensemble model and individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks.
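The particle update loop described above (inertia weight w, cognitive term c1, social term c2, pbest and gbest memories) can be sketched as follows. In the paper each particle would encode a network configuration and the objective would be a short DNN training run; a cheap quadratic stands in for that here, and all parameter values are illustrative defaults:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-5.0, 5.0), seed=0):
    """Bare-bones particle swarm optimization with a global-best topology."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + attraction toward personal best + attraction toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep the search space finite
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

best, best_val = pso_minimize(lambda z: float((z ** 2).sum()), dim=4)
```

Swapping the stand-in objective for "decode the vector into a network configuration, train briefly, return validation loss" recovers the paper's scheme, with pbest and gbest then used for the longer final training runs.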
Affiliation(s)
- Fei Ye
- School of Information Science and Technology, Southwest Jiaotong University, Chengdu, China
|
300
|
Abstract
In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.
|