101
Deep learning in medical image analysis: A third eye for doctors. Journal of Stomatology, Oral and Maxillofacial Surgery 2019;120:279-288. [DOI: 10.1016/j.jormas.2019.06.002] [Citation(s) in RCA: 90]
102
Cunefare D, Huckenpahler AL, Patterson EJ, Dubra A, Carroll J, Farsiu S. RAC-CNN: multimodal deep learning based automatic detection and classification of rod and cone photoreceptors in adaptive optics scanning light ophthalmoscope images. Biomedical Optics Express 2019;10:3815-3832. [PMID: 31452977] [PMCID: PMC6701534] [DOI: 10.1364/boe.10.003815] [Citation(s) in RCA: 23]
Abstract
Quantification of the human rod and cone photoreceptor mosaic in adaptive optics scanning light ophthalmoscope (AOSLO) images is useful for the study of various retinal pathologies. Subjective and time-consuming manual grading has remained the gold standard for evaluating these images, as no well-validated automatic method for detecting individual rods has been developed. We present a novel deep learning based automatic method, called the rod and cone CNN (RAC-CNN), for detecting and classifying rods and cones in multimodal AOSLO images. We test our method on images from healthy subjects as well as subjects with achromatopsia over a range of retinal eccentricities. We show that our method is on par with human grading for detecting rods and cones.
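The published RAC-CNN architecture is not reproduced in this listing; as a minimal sketch of the underlying idea (co-registered confocal and split-detector AOSLO channels stacked as input channels of one CNN that classifies each candidate patch as rod, cone, or background), the following Keras model may help. All layer sizes and the patch size are illustrative assumptions, not the published design.

```python
# Minimal sketch of a multimodal patch classifier (NOT the published RAC-CNN):
# two co-registered AOSLO modalities are stacked as input channels and each
# patch is classified as rod, cone, or background.
import numpy as np
import tensorflow as tf

def build_multimodal_patch_classifier(patch_size=32):
    inputs = tf.keras.Input(shape=(patch_size, patch_size, 2))  # confocal + split detector
    x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # rod / cone / background
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Smoke test on random arrays standing in for labeled AOSLO patches.
model = build_multimodal_patch_classifier()
x = np.random.rand(8, 32, 32, 2).astype("float32")
y = np.random.randint(0, 3, size=(8,))
model.fit(x, y, epochs=1, verbose=0)
```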
Affiliation(s)
- David Cunefare: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Alison L. Huckenpahler: Department of Cell Biology, Neurobiology, & Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Emily J. Patterson: Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Alfredo Dubra: Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Joseph Carroll: Department of Cell Biology, Neurobiology, & Anatomy, and Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
103
Deep learning based computer-aided diagnosis systems for diabetic retinopathy: A survey. Artif Intell Med 2019;99:101701. [DOI: 10.1016/j.artmed.2019.07.009] [Citation(s) in RCA: 95]
104
Shahid AH, Singh M. Computational intelligence techniques for medical diagnosis and prognosis: Problems and current developments. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.05.010] [Citation(s) in RCA: 15]
105
Wang R, Chen B, Meng D, Wang L. Weakly Supervised Lesion Detection From Fundus Images. IEEE Transactions on Medical Imaging 2019;38:1501-1512. [PMID: 30530359] [DOI: 10.1109/tmi.2018.2885376] [Citation(s) in RCA: 6]
Abstract
Early diagnosis and continuous monitoring of patients suffering from eye diseases have been major concerns in computer-aided detection. Detecting one or several specific types of retinal lesions has made significant breakthroughs in computer-aided screening in the past few decades. However, due to the variety of retinal lesions and complex normal anatomical structures, automatically detecting lesions of unknown and diverse types in a retina remains a challenging task. In this paper, a weakly supervised method, requiring only a series of normal and abnormal retinal images without the need to annotate lesion locations and types, is proposed for this task. Specifically, a fundus image is understood as a superposition of background, blood vessels, and background noise (with lesions included for abnormal images). The background is formulated as a low-rank structure after a series of simple preprocessing steps, including spatial alignment, color normalization, and blood-vessel removal. Background noise is regarded as a stochastic variable and modeled as a Gaussian for normal images and as a mixture of Gaussians for abnormal images. The proposed method encodes both the background knowledge of fundus images and the background noise into one unified model, and jointly optimizes the model using normal and abnormal images, which fully depicts the low-rank subspace of the background and distinguishes lesions from background noise in abnormal fundus images. Experimental results demonstrate that the proposed method achieves high detection accuracy and outperforms previous related methods.
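The authors' optimization is not reproduced here; as a minimal numpy sketch of the intuition (a shared low-rank background for a stack of aligned fundus images, with lesions surfacing in the residual), the following approximates the background by a truncated SVD and flags large residuals as lesion candidates. The rank and the 3-sigma threshold are arbitrary assumptions.

```python
import numpy as np

def lowrank_residuals(images, rank=3):
    """images: (n, h, w) stack of aligned, normalized fundus images.
    Returns per-image residual maps after removing a shared low-rank background."""
    n, h, w = images.shape
    X = images.reshape(n, h * w)               # one row per image
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0                             # keep only the top-`rank` components
    background = (U * s) @ Vt                  # low-rank approximation of the stack
    return (X - background).reshape(n, h, w)   # residual = noise + possible lesions

rng = np.random.default_rng(0)
stack = rng.random((10, 64, 64))               # stand-in for preprocessed images
residuals = lowrank_residuals(stack)
candidates = np.abs(residuals) > 3 * residuals.std()   # crude lesion-candidate mask
print(candidates.sum(), "candidate pixels flagged")
```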
106
Eftekhari N, Pourreza HR, Masoudi M, Ghiasi-Shirazi K, Saeedi E. Microaneurysm detection in fundus images using a two-step convolutional neural network. Biomed Eng Online 2019;18:67. [PMID: 31142335] [PMCID: PMC6542103] [DOI: 10.1186/s12938-019-0675-9] [Citation(s) in RCA: 46]
Abstract
Background and objectives: Diabetic retinopathy (DR) is the leading cause of blindness worldwide, and its early detection is therefore important for reducing disease-related eye injuries. DR is diagnosed by inspecting fundus images. Since microaneurysms (MA) are one of the main symptoms of the disease, distinguishing this complication within fundus images facilitates early DR detection. In this paper, an automatic analysis of retinal images using a convolutional neural network (CNN) is presented.
Methods: Our method incorporates a novel two-stage process trained on two online datasets, which yields accurate detection while addressing the imbalanced-data problem and decreasing training time in comparison with previous studies. We implemented our proposed CNNs using the Keras library.
Results: To evaluate our proposed method, an experiment was conducted on two standard publicly available datasets, the Retinopathy Online Challenge dataset and the E-Ophtha-MA dataset. Our results demonstrated a promising sensitivity of about 0.8 at an average of >6 false positives per image, which is competitive with state-of-the-art approaches.
Conclusion: Our method yields a significant improvement in MA detection from retinal fundus images for monitoring diabetic retinopathy.
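The paper's exact architectures are not reproduced in this listing; the Keras sketch below only illustrates the two-stage pattern the abstract describes: a small, fast CNN screens every candidate patch, and only patches scoring above a loose threshold are passed to a deeper CNN for the final MA decision. All layer sizes and the 0.5/0.9 thresholds are assumptions.

```python
import numpy as np
import tensorflow as tf

def small_cnn(ps=25):
    # Stage 1: cheap screening network run on every candidate patch.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(ps, ps, 3)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def large_cnn(ps=25):
    # Stage 2: deeper network applied only to surviving candidates.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(ps, ps, 3)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def detect_mas(patches, stage1, stage2, t1=0.5, t2=0.9):
    p1 = stage1.predict(patches, verbose=0).ravel()
    keep = p1 > t1                               # loose screening threshold
    final = np.zeros_like(p1)
    if keep.any():
        final[keep] = stage2.predict(patches[keep], verbose=0).ravel()
    return final > t2                            # strict final decision

patches = np.random.rand(20, 25, 25, 3).astype("float32")
print(detect_mas(patches, small_cnn(), large_cnn()).sum(), "patches flagged as MA")
```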
Affiliation(s)
- Noushin Eftekhari, Hamid-Reza Pourreza, Mojtaba Masoudi, Kamaledin Ghiasi-Shirazi, and Ehsan Saeedi: Machine Vision Lab, Computer Engineering Department, Faculty of Engineering, Ferdowsi University of Mashhad (FUM), Azadi Sqr., Mashhad, Iran
107
Noninvasive Evaluation of the Pathologic Grade of Hepatocellular Carcinoma Using MCF-3DCNN: A Pilot Study. BioMed Research International 2019;2019:9783106. [PMID: 31183380] [PMCID: PMC6512077] [DOI: 10.1155/2019/9783106] [Citation(s) in RCA: 8]
Abstract
Purpose: To evaluate the diagnostic performance of deep learning with a multichannel fusion three-dimensional convolutional neural network (MCF-3DCNN) in differentiating the pathologic grades of hepatocellular carcinoma (HCC) based on dynamic contrast-enhanced magnetic resonance (DCE-MR) images.
Methods and materials: Fifty-one histologically proven HCCs from 42 consecutive patients from January 2015 to September 2017 were included in this retrospective study. Pathologic examination revealed nine well-differentiated (WD), 35 moderately differentiated (MD), and seven poorly differentiated (PD) HCCs. DCE-MR images with five phases were collected using a 3.0 Tesla MR scanner. A 4D-tensor representation was employed to organize the collected data in one temporal and three spatial dimensions, corresponding to the phases and 3D scanning slices of the DCE-MR images. A deep learning diagnosis model with MCF-3DCNN was proposed, and its structure was designed to approximate clinical diagnostic experience by taking into account the significance of the spatial and temporal information in DCE-MR images. MCF-3DCNN was then trained on HCC lesion samples from real patient cases labeled by experienced radiologists. The accuracy in differentiating the pathologic grades of HCC was calculated, and the performance of MCF-3DCNN in lesion diagnosis was assessed. Additionally, the areas under the receiver operating characteristic curves (AUC) for distinguishing WD, MD, and PD HCCs were calculated.
Results: MCF-3DCNN achieved an average accuracy of 0.7396±0.0104 in differentiating all pathologic grades of HCC. MCF-3DCNN also achieved the highest diagnostic performance for discriminating WD HCCs from the others, with an average AUC, accuracy, sensitivity, and specificity of 0.96, 91.00%, 96.88%, and 89.62%, respectively.
Conclusions: This study indicates that MCF-3DCNN can be a promising technology for evaluating the pathologic grade of HCC based on DCE-MR images.
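MCF-3DCNN's exact multichannel fusion structure is not given here; a minimal sketch of the data handling is to treat the five DCE-MR phases as channels of one 3D volume and apply Conv3D layers so that spatial and temporal information are learned jointly. All shapes and layer choices below are assumptions.

```python
import numpy as np
import tensorflow as tf

# Sketch: five DCE-MR phases treated as channels of one 3D volume (depth, h, w, phases).
inputs = tf.keras.Input(shape=(16, 32, 32, 5))
x = tf.keras.layers.Conv3D(8, 3, activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling3D()(x)
x = tf.keras.layers.Conv3D(16, 3, activation="relu", padding="same")(x)
x = tf.keras.layers.GlobalAveragePooling3D()(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # WD / MD / PD grade
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Smoke test on random arrays standing in for cropped lesion volumes.
lesions = np.random.rand(4, 16, 32, 32, 5).astype("float32")
grades = np.random.randint(0, 3, size=(4,))
model.fit(lesions, grades, epochs=1, verbose=0)
```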
108
Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence. J Clin Med 2019;8:462. [PMID: 30959798] [PMCID: PMC6518303] [DOI: 10.3390/jcm8040462] [Citation(s) in RCA: 34]
Abstract
Medical-image-based diagnosis is a tedious task, and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. This problem can be mitigated by exploring similar cases in a previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. A medical doctor now commonly refers to several imaging modalities together, such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound of various organs, for the diagnosis and treatment of a specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for a CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance on massive multimodal databases. Although a few previous studies use deep features for classification, the number of classes considered is very small. To solve this problem, we propose a classification-based retrieval system for multimodal medical images from various imaging modalities, based on an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1-score of our method are 81.51% and 82.42%, respectively, which are higher than those of the previous CBMIR method (accuracy of 69.71% and F1-score of 69.63%).
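The enhanced ResNet of the paper is not reproduced; the sketch below only shows the generic retrieval pattern the abstract describes: embed every image with a CNN backbone, then return the database entries with the highest cosine similarity to the query embedding. Using an off-the-shelf ResNet50 (here with random weights so the sketch stays self-contained) is an assumption for illustration.

```python
import numpy as np
import tensorflow as tf

# Backbone CNN as a fixed feature extractor (global-average-pooled embeddings).
# weights=None keeps the sketch download-free; use pretrained weights in practice.
backbone = tf.keras.applications.ResNet50(weights=None, include_top=False, pooling="avg")

def embed(images):
    x = tf.keras.applications.resnet50.preprocess_input(images * 255.0)
    return backbone.predict(x, verbose=0)

def retrieve(query, database_embeddings, k=5):
    q = embed(query[None])[0]
    q = q / (np.linalg.norm(q) + 1e-8)
    db = database_embeddings / (np.linalg.norm(database_embeddings,
                                               axis=1, keepdims=True) + 1e-8)
    sims = db @ q                      # cosine similarity to every stored image
    return np.argsort(sims)[::-1][:k]  # indices of the k most similar images

database = np.random.rand(20, 224, 224, 3).astype("float32")
db_emb = embed(database)
print("top matches:", retrieve(database[0], db_emb))
```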
109
The present and future of deep learning in radiology. Eur J Radiol 2019;114:14-24. [PMID: 31005165] [DOI: 10.1016/j.ejrad.2019.02.038] [Citation(s) in RCA: 181]
Abstract
The advent of deep learning (DL) is poised to dramatically change the delivery of healthcare in the near future. DL has not only profoundly affected the healthcare industry, it has also influenced global business. Within the span of a few years, advances such as self-driving cars, robots performing jobs that are hazardous to humans, and chatbots talking with human operators have shown that DL has already made a large impact on our lives. The open-source nature of DL and decreasing prices of computer hardware will further propel such changes. In healthcare, the potential is immense due to the need to automate processes and develop error-free paradigms. The sheer quantity of DL publications in healthcare has surpassed other domains, growing at a very fast pace, particularly in radiology. It is therefore imperative for radiologists to learn about DL and how it differs from other approaches to artificial intelligence (AI). The next generation of radiology will see a significant role for DL, which will likely serve as the basis for augmented radiology (AR). Better clinical judgement with AR will help improve quality of life, support life-saving decisions, and lower healthcare costs. A comprehensive review of DL and its implications for healthcare is presented here. We analysed 150 articles on DL in the healthcare domain from PubMed, Google Scholar, and IEEE Xplore, focused on medical imaging only. We further examined the ethical, moral, and legal issues surrounding the use of DL in medical imaging.
110
Girard F, Kavalec C, Cheriet F. Joint segmentation and classification of retinal arteries/veins from fundus images. Artif Intell Med 2019;94:96-109. [DOI: 10.1016/j.artmed.2019.02.004] [Citation(s) in RCA: 48]
111
Gómez-Valverde JJ, Antón A, Fatti G, Liefers B, Herranz A, Santos A, Sánchez CI, Ledesma-Carbayo MJ. Automatic glaucoma classification using color fundus images based on convolutional neural networks and transfer learning. Biomedical Optics Express 2019;10:892-913. [PMID: 30800522] [PMCID: PMC6377910] [DOI: 10.1364/boe.10.000892] [Citation(s) in RCA: 63]
Abstract
Glaucoma detection in color fundus images is a challenging task that requires expertise and years of practice. In this study we applied different convolutional neural network (CNN) schemes to show the influence on performance of relevant factors such as dataset size, architecture, and the use of transfer learning versus newly defined architectures. We also compared the performance of the CNN-based system with that of human evaluators and explored the influence of integrating images with data collected from the patients' clinical history. We achieved the best performance using a transfer learning scheme with VGG19, reaching an AUC of 0.94 with sensitivity and specificity ratios similar to those of the expert evaluators in the study. The experimental results, using three different datasets with 2313 images, indicate that this solution can be a valuable option for the design of a computer-aided system for the detection of glaucoma in large-scale screening programs.
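A minimal sketch of the winning configuration the abstract reports (transfer learning from VGG19 with a new classification head) follows; the freezing depth, head size, and data handling are assumptions, not the authors' exact training protocol.

```python
import numpy as np
import tensorflow as tf

# weights=None keeps the sketch self-contained; the paper transfers pretrained weights.
base = tf.keras.applications.VGG19(weights=None, include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # keep backbone features fixed; fine-tune later if needed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # glaucoma vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

fundus = np.random.rand(4, 224, 224, 3).astype("float32")  # stand-in images
labels = np.random.randint(0, 2, size=(4,))
model.fit(fundus, labels, epochs=1, verbose=0)
```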
Affiliation(s)
- Juan J Gómez-Valverde: Biomedical Image Technologies Laboratory (BIT), ETSI Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain; Biomedical Research Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Spain
- Alfonso Antón: Parc de Salut Mar, Barcelona, Spain; Universitat Internacional de Catalunya, Barcelona, Spain; Institut Catala de Retina, Barcelona, Spain
- Bart Liefers: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Andrés Santos: Biomedical Image Technologies Laboratory (BIT), ETSI Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain; Biomedical Research Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Spain
- Clara I Sánchez: Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- María J Ledesma-Carbayo: Biomedical Image Technologies Laboratory (BIT), ETSI Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain; Biomedical Research Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Spain
112
Artificial intelligence-assisted interpretation of bone age radiographs improves accuracy and decreases variability. Skeletal Radiol 2019;48:275-283. [PMID: 30069585] [DOI: 10.1007/s00256-018-3033-2] [Citation(s) in RCA: 56]
Abstract
Objective: Radiographic bone age assessment (BAA) is used in the evaluation of pediatric endocrine and metabolic disorders. We previously developed an automated artificial intelligence (AI) deep learning algorithm to perform BAA using convolutional neural networks. Here we compared the BAA performance of a cohort of pediatric radiologists with and without AI assistance.
Materials and methods: Six board-certified, subspecialty-trained pediatric radiologists interpreted 280 age- and gender-matched bone age radiographs ranging from 5 to 18 years. Three of those radiologists then performed BAA with AI assistance. Bone age accuracy and root mean squared error (RMSE) were used as measures of accuracy, and the intraclass correlation coefficient (ICC) evaluated inter-rater variation.
Results: AI BAA accuracy was 68.2% overall and 98.6% within 1 year; mean six-reader cohort accuracy was 63.6% overall and 97.4% within 1 year. AI RMSE was 0.601 years, while mean single-reader RMSE was 0.661 years. Pooled RMSE decreased from 0.661 to 0.508 years with AI assistance, decreasing for every individual reader. ICC without AI was 0.9914 and with AI was 0.9951.
Conclusions: AI improves radiologists' bone age assessment by increasing accuracy and decreasing variability and RMSE. Radiologists using AI perform better than AI alone, a radiologist alone, or a pooled cohort of experts, suggesting that AI may optimally be utilized as an adjunct to radiologist interpretation of imaging studies.
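As a small worked example of the two accuracy measures used in this abstract, the snippet below computes RMSE and "accuracy within 1 year" for a handful of invented bone-age estimates; all numbers are made up for illustration.

```python
import numpy as np

truth = np.array([10.0, 12.5, 7.0, 15.0, 9.5])   # reference bone ages (years)
pred = np.array([10.4, 12.0, 7.9, 14.6, 9.4])    # reader or AI estimates

rmse = np.sqrt(np.mean((pred - truth) ** 2))
within_1yr = np.mean(np.abs(pred - truth) <= 1.0)
print(f"RMSE = {rmse:.3f} years, accuracy within 1 year = {within_1yr:.1%}")
```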
113
Versatile Framework for Medical Image Processing and Analysis with Application to Automatic Bone Age Assessment. Journal of Electrical and Computer Engineering 2018. [DOI: 10.1155/2018/2187247] [Citation(s) in RCA: 11]
Abstract
Deep learning has made a tremendous impact on medical image processing and analysis. Typically, medical image processing and analysis via deep learning includes image segmentation, image enhancement, and classification or regression. A frequently mentioned challenge for supervised deep learning is the lack of annotated training data. In this paper, we aim to address the problem of training transferred deep neural networks with a limited amount of annotated data. We propose a versatile framework for medical image processing and analysis via deep active learning. The framework (1) applies deep active learning to segment specific regions of interest (RoIs) from raw medical images using as little annotated data as possible; (2) employs a generative adversarial network to enhance the contrast, sharpness, and brightness of the segmented RoIs; and (3) uses a paced transfer learning (PTL) strategy, fine-tuning layers in the deep neural network from top to bottom step by step, to perform medical image classification or regression tasks. In addition, to explain the necessity of deep-learning-based medical image processing tasks and provide clues for clinical usage, class activation maps (CAM) are employed in our framework to visualize the feature maps. To illustrate the effectiveness of the proposed framework, we apply it to the bone age assessment (BAA) task using the RSNA dataset and achieve state-of-the-art performance. Experimental results indicate that the proposed framework can be effectively applied to medical image analysis tasks.
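The paced transfer learning (PTL) idea, fine-tuning a pretrained network from the top layers downward in steps, can be sketched as follows. The backbone choice (MobileNetV2), the unfreezing schedule, and the random stand-in data are assumptions for illustration, not the authors' setup.

```python
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights=None, include_top=False,
                                         input_shape=(96, 96, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(1)])  # bone-age regression head

x = np.random.rand(8, 96, 96, 3).astype("float32")  # stand-in radiograph crops
y = np.random.rand(8).astype("float32")             # stand-in bone ages

# Paced schedule: start with everything frozen except the head, then unfreeze
# progressively deeper parts of the backbone between training rounds.
for n_unfrozen in [0, 20, 50, len(base.layers)]:
    base.trainable = True
    for layer in base.layers[:len(base.layers) - n_unfrozen]:
        layer.trainable = False                      # keep the lower layers fixed
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    model.fit(x, y, epochs=1, verbose=0)
```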
114
Abstract
Automated medical image analysis is an emerging field of research that identifies disease with the help of imaging technology. Diabetic retinopathy (DR) is a retinal disease diagnosed in diabetic patients. Deep neural networks (DNN) are widely used to classify diabetic retinopathy from fundus images collected from suspected persons. The proposed DR classification system achieves a symmetrically optimized solution through the combination of a Gaussian mixture model (GMM), the visual geometry group network (VGGNet), singular value decomposition (SVD) and principal component analysis (PCA), and softmax, for region segmentation, high-dimensional feature extraction, feature selection, and fundus image classification, respectively. The experiments were performed using a standard Kaggle dataset containing 35,126 images. The proposed VGG-19 DNN based DR model outperformed AlexNet and the spatial invariant feature transform (SIFT) in terms of classification accuracy and computational time. Utilization of PCA and SVD feature selection with fully connected (FC) layers demonstrated classification accuracies of 92.21%, 98.34%, 97.96%, and 98.13% for FC7-PCA, FC7-SVD, FC8-PCA, and FC8-SVD, respectively.
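A minimal sketch of the feature-selection stage described above, reducing high-dimensional fully connected (FC) layer features with PCA before a softmax-style classifier, is given below with random stand-in features; the dimensions and the logistic classifier are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fc7 = rng.normal(size=(200, 4096))        # stand-in for VGG FC7 features
labels = rng.integers(0, 2, size=200)     # DR vs. no-DR

reduced = PCA(n_components=50).fit_transform(fc7)  # keep 50 principal components
clf = LogisticRegression(max_iter=1000).fit(reduced, labels)  # softmax/logistic head
print("training accuracy:", clf.score(reduced, labels))
```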
115
Napel S, Mu W, Jardim-Perassi BV, Aerts HJWL, Gillies RJ. Quantitative imaging of cancer in the postgenomic era: Radio(geno)mics, deep learning, and habitats. Cancer 2018;124:4633-4649. [PMID: 30383900] [PMCID: PMC6482447] [DOI: 10.1002/cncr.31630] [Citation(s) in RCA: 138]
Abstract
Although cancer often is referred to as "a disease of the genes," it is indisputable that the (epi)genetic properties of individual cancer cells are highly variable, even within the same tumor. Hence, preexisting resistant clones will emerge and proliferate after therapeutic selection that targets sensitive clones. Herein, the authors propose that quantitative image analytics, known as "radiomics," can be used to quantify and characterize this heterogeneity. Virtually every patient with cancer is imaged radiologically. Radiomics is predicated on the beliefs that these images reflect underlying pathophysiologies, and that they can be converted into mineable data for improved diagnosis, prognosis, prediction, and therapy monitoring. In the last decade, the radiomics of cancer has grown from a few laboratories to a worldwide enterprise. During this growth, radiomics has established a convention, wherein a large set of annotated image features (1-2000 features) are extracted from segmented regions of interest and used to build classifier models to separate individual patients into their appropriate class (eg, indolent vs aggressive disease). An extension of this conventional radiomics is the application of "deep learning," wherein convolutional neural networks can be used to detect the most informative regions and features without human intervention. A further extension of radiomics involves automatically segmenting informative subregions ("habitats") within tumors, which can be linked to underlying tumor pathophysiology. The goal of the radiomics enterprise is to provide informed decision support for the practice of precision oncology.
Affiliation(s)
- Sandy Napel: Department of Radiology, Stanford University, Stanford, California
- Wei Mu: Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
- Hugo J. W. L. Aerts: Dana-Farber Cancer Institute and Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Robert J. Gillies: Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
116
Abstract
The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform data nonlinearly, thus revealing hierarchical relationships and structures. In this review, we survey deep learning application papers that use structured data, signals, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use.
117
Xiang D, Tian H, Yang X, Shi F, Zhu W, Chen H, Chen X. Automatic Segmentation of Retinal Layer in OCT Images With Choroidal Neovascularization. IEEE Transactions on Image Processing 2018;27:5880-5891. [PMID: 30059302] [DOI: 10.1109/tip.2018.2860255] [Citation(s) in RCA: 39]
Abstract
Age-related macular degeneration is one of the main causes of blindness. However, the internal structures of the retina are complex and difficult to recognize when neovascularization occurs, and traditional surface detection methods may fail in layer segmentation. In this paper, a supervised method is reported for simultaneously segmenting layers and neovascularization. Three spatial features, seven gray-level-based features, and 14 layer-like features are extracted for a neural network classifier, from which coarse surfaces in different optical coherence tomography (OCT) images can be found. To describe and enhance retinal layers with different thicknesses and abnormalities, multi-scale bright and dark layer detection filters are introduced. A constrained graph search algorithm is also proposed to accurately detect retinal surfaces, with the weights of nodes in the graph computed from these layer-like responses. The proposed method was evaluated on 42 spectral-domain OCT images with age-related macular degeneration. The experimental results show that the proposed method outperforms state-of-the-art methods.
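The constrained graph search itself is not reproduced here; the following dynamic-programming sketch captures the core idea of surface detection, choosing one row per image column so that the total layer-filter response is maximized under a smoothness constraint. The random cost image and the one-pixel jump constraint are assumptions.

```python
import numpy as np

def detect_surface(response, max_jump=1):
    """response: (rows, cols) layer-likelihood image. Returns one row index per
    column, maximizing total response with |row[c+1] - row[c]| <= max_jump."""
    rows, cols = response.shape
    score = response[:, 0].copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        new = np.full(rows, -np.inf)
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            best = lo + int(np.argmax(score[lo:hi]))  # best compatible predecessor
            new[r] = score[best] + response[r, c]
            back[r, c] = best
        score = new
    surface = np.zeros(cols, dtype=int)
    surface[-1] = int(np.argmax(score))
    for c in range(cols - 1, 0, -1):                  # trace the optimal path back
        surface[c - 1] = back[surface[c], c]
    return surface

resp = np.random.rand(40, 60)                         # stand-in layer-filter response
print(detect_surface(resp)[:10])
```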
118
Amin J, Sharif M, Rehman A, Raza M, Mufti MR. Diabetic retinopathy detection and classification using hybrid feature set. Microsc Res Tech 2018;81:990-996. [DOI: 10.1002/jemt.23063] [Citation(s) in RCA: 24]
Affiliation(s)
- Javeria Amin: Department of Computer Science, University of Wah, Pakistan
- Muhammad Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Amjad Rehman: College of Computer and Information Systems, Al-Yamamah University, Riyadh 11512, Saudi Arabia
- Mudassar Raza: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Rafiq Mufti: Department of Computer Science, COMSATS Institute of Information Technology, Vehari, Pakistan
119
Khojasteh P, Aliahmad B, Kumar DK. Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms. BMC Ophthalmol 2018;18:288. [PMID: 30400869] [PMCID: PMC6219077] [DOI: 10.1186/s12886-018-0954-4] [Citation(s) in RCA: 67]
Abstract
Background: Convolutional neural networks have been considered for automatic analysis of fundus images to detect signs of diabetic retinopathy, but suffer from low sensitivity.
Methods: This study proposes an alternate method using the probabilistic output of a convolutional neural network to automatically and simultaneously detect exudates, hemorrhages, and microaneurysms. The method was evaluated using two approaches, patch-based and image-based analysis of the fundus images, on two public databases: DIARETDB1 and e-Ophtha. The novelty of the proposed method is that the images were analyzed using probability maps generated by the score values of the softmax layer instead of the binary output.
Results: The sensitivity of the proposed approach was 0.96, 0.84, and 0.85 for detection of exudates, hemorrhages, and microaneurysms, respectively, with patch-based analysis. Overall accuracy was 97.3% for DIARETDB1 and 86.6% for e-Ophtha. The error rate for image-based analysis was also significantly reduced compared with other works.
Conclusion: The proposed method provides a framework for convolutional neural network-based analysis of fundus images to identify exudates, hemorrhages, and microaneurysms. It obtained accuracy and sensitivity significantly better than the reported studies, making it suitable for automatic detection of diabetic retinopathy signs.
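The key trick here, keeping the softmax probability map rather than a thresholded binary output, can be sketched as follows; the toy network and the 0.6 acceptance level are assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(25, 25, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),  # normal/exudate/hemorrhage/MA
])

patches = np.random.rand(100, 25, 25, 3).astype("float32")
probs = model.predict(patches, verbose=0)            # per-patch class probabilities

# Instead of argmax (a hard binary decision), keep the probability of each lesion
# class and accept a patch only where that probability exceeds a chosen level.
lesion_probs = probs[:, 1:]                          # drop the "normal" column
detections = lesion_probs > 0.6
print("patches with at least one lesion:", detections.any(axis=1).sum())
```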
Affiliation(s)
- Parham Khojasteh, Behzad Aliahmad, and Dinesh K Kumar: Biosignal Lab, School of Engineering, RMIT University, Melbourne, Australia
120
Biyani R, Patre B. Algorithms for red lesion detection in Diabetic Retinopathy: A review. Biomed Pharmacother 2018;107:681-688. [DOI: 10.1016/j.biopha.2018.07.175] [Citation(s) in RCA: 26]
121
Iakovidis DK, Georgakopoulos SV, Vasilakakis M, Koulaouzidis A, Plagianakos VP. Detecting and Locating Gastrointestinal Anomalies Using Deep Learning and Iterative Cluster Unification. IEEE Transactions on Medical Imaging 2018;37:2196-2210. [PMID: 29994763] [DOI: 10.1109/tmi.2018.2837002] [Citation(s) in RCA: 81]
Abstract
This paper proposes a novel methodology for automatic detection and localization of gastrointestinal (GI) anomalies in endoscopic video frame sequences. Training is performed with weakly annotated images, using only image-level semantic labels instead of detailed pixel-level annotations, which makes it a cost-effective approach for the analysis of large video-endoscopy repositories. Other advantages of the proposed methodology include its capability to suggest possible locations of GI anomalies within the video frames, and its generality, in the sense that abnormal frame detection is based on automatically derived image features. It is implemented in three phases: 1) the video frames are classified as abnormal or normal using a weakly supervised convolutional neural network (WCNN) architecture; 2) salient points are detected from deeper WCNN layers using a deep saliency detection algorithm; and 3) GI anomalies are localized using an iterative cluster unification (ICU) algorithm. ICU is based on a pointwise cross-feature-map (PCFM) descriptor extracted locally from the detected salient points using information derived from the WCNN. Results from extensive experimentation on publicly available collections of gastrointestinal endoscopy video frames, covering a variety of GI anomalies, are presented. Both the anomaly detection and localization performance, in terms of the area under the receiver operating characteristic curve (AUC), exceeded 80%. The highest AUC for anomaly detection, 96%, was obtained on conventional gastroscopy images, and the highest AUC for anomaly localization, 88%, on wireless capsule endoscopy images.
122
Xing F, Xie Y, Su H, Liu F, Yang L. Deep Learning in Microscopy Image Analysis: A Survey. IEEE Transactions on Neural Networks and Learning Systems 2018;29:4550-4568. [PMID: 29989994] [DOI: 10.1109/tnnls.2017.2766168] [Citation(s) in RCA: 168]
Abstract
Computerized microscopy image analysis plays an important role in computer-aided diagnosis and prognosis. Machine learning techniques have powered many aspects of medical investigation and clinical practice. Recently, deep learning has emerged as a leading machine learning tool in computer vision and has attracted considerable attention in biomedical image analysis. In this paper, we provide a snapshot of this fast-growing field, specifically for microscopy image analysis. We briefly introduce the popular deep neural networks and summarize current deep learning achievements in various tasks, such as detection, segmentation, and classification in microscopy image analysis. In particular, we explain the architectures and principles of convolutional neural networks, fully convolutional networks, recurrent neural networks, stacked autoencoders, and deep belief networks, and interpret how they are formulated or modeled for specific tasks on various microscopy images. In addition, we discuss the open challenges and potential trends of future research in microscopy image analysis using deep learning.
123
[Screening and management of retinal diseases using digital medicine]. Ophthalmologe 2018;115:728-736. [DOI: 10.1007/s00347-018-0752-7] [Citation(s) in RCA: 0]
124
Fan S, Xu L, Fan Y, Wei K, Li L. Computer-aided detection of small intestinal ulcer and erosion in wireless capsule endoscopy images. Phys Med Biol 2018;63:165001. [PMID: 30033931] [DOI: 10.1088/1361-6560/aad51c] [Citation(s) in RCA: 85]
Abstract
A novel computer-aided detection method based on a deep learning framework is proposed to detect small intestinal ulcers and erosions in wireless capsule endoscopy (WCE) images. To the best of our knowledge, this is the first time a deep learning framework has been exploited for automated ulcer and erosion detection in WCE images. Compared with traditional detection methods, a deep learning framework produces image features directly from the data and increases recognition accuracy as well as efficiency, especially for big data. The developed method included image cropping and image compression. The AlexNet convolutional neural network was trained on a database of tens of thousands of WCE images to differentiate lesions from normal tissue. Ulcer and erosion detection reached high accuracies of 95.16% and 95.34%, sensitivities of 96.80% and 93.67%, and specificities of 94.79% and 95.98%, respectively. The area under the receiver operating characteristic curve was over 0.98 for both networks. These promising results indicate that the proposed method has the potential to work in tandem with doctors to efficiently detect intestinal ulcers and erosions.
Affiliation(s)
- Shanhui Fan: College of Life Information Science and Instrument Engineering, Hangzhou Dianzi University, Hangzhou 310018, People's Republic of China
125
Cunefare D, Langlo CS, Patterson EJ, Blau S, Dubra A, Carroll J, Farsiu S. Deep learning based detection of cone photoreceptors with multimodal adaptive optics scanning light ophthalmoscope images of achromatopsia. Biomedical Optics Express 2018;9:3740-3756. [PMID: 30338152] [PMCID: PMC6191607] [DOI: 10.1364/boe.9.003740] [Citation(s) in RCA: 27]
Abstract
Fast and reliable quantification of cone photoreceptors is a bottleneck in the clinical utilization of adaptive optics scanning light ophthalmoscope (AOSLO) systems for the study, diagnosis, and prognosis of retinal diseases. To date, manual grading has been the sole reliable source of AOSLO quantification, as no automatic method has been reliably utilized for cone detection in real-world, low-quality images of diseased retina. We present a novel deep learning based approach that combines information from both the confocal and non-confocal split detector AOSLO modalities to detect cones in subjects with achromatopsia. Our dual-mode deep learning based approach outperforms the state-of-the-art automated techniques and is on a par with human grading.
Affiliation(s)
- David Cunefare: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Christopher S. Langlo: Department of Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Emily J. Patterson: Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Sarah Blau: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Alfredo Dubra: Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Joseph Carroll: Department of Cell Biology, Neurobiology, and Anatomy, and Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
126
Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunović H. Artificial intelligence in retina. Prog Retin Eye Res 2018;67:1-29. [PMID: 30076935] [DOI: 10.1016/j.preteyeres.2018.07.004] [Citation(s) in RCA: 410]
Abstract
Major advances in diagnostic technologies are offering unprecedented insight into the condition of the retina and beyond ocular disease. Digital images providing millions of morphological datasets can be analyzed rapidly and non-invasively in a comprehensive manner using artificial intelligence (AI). Methods based on machine learning (ML), and particularly deep learning (DL), are able to identify, localize, and quantify pathological features in almost every macular and retinal disease. Convolutional neural networks mimic the path of the human brain to object recognition, either by learning pathological features from training sets (supervised ML) or by extrapolating from patterns recognized independently (unsupervised ML). The methods of AI-based retinal analysis are diverse and differ widely in their applicability, interpretability, and reliability across datasets and diseases. Fully automated AI-based systems have recently been approved for screening of diabetic retinopathy (DR). The overall potential of ML/DL includes screening and diagnostic grading, as well as guidance of therapy, with automated detection of disease activity and recurrences, quantification of therapeutic effects, and identification of relevant targets for novel therapeutic approaches. Prediction and prognostic conclusions further expand the potential benefit of AI in retina, which will enable personalized health care as well as large-scale management, and will empower ophthalmologists to provide high-quality diagnosis and therapy and to successfully deal with the complexity of 21st-century ophthalmology.
Affiliation(s)
- Ursula Schmidt-Erfurth, Amir Sadeghipour, Bianca S Gerendas, Sebastian M Waldstein, and Hrvoje Bogunović: Christian Doppler Laboratory for Ophthalmic Image Analysis, Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
127
Lam C, Yu C, Huang L, Rubin D. Retinal Lesion Detection With Deep Learning Using Image Patches. Invest Ophthalmol Vis Sci 2018;59:590-596. [PMID: 29372258] [PMCID: PMC5788045] [DOI: 10.1167/iovs.17-22721] [Citation(s) in RCA: 71]
Abstract
Purpose: To develop an automated method of localizing and discerning multiple types of findings in retinal images using a limited set of training data, without hard-coded feature extraction, as a step toward generalizing these methods to rare disease detection where only limited training data are available.
Methods: Two ophthalmologists verified 243 retinal images from the Kaggle dataset, labeling important subsections of each image to generate 1324 image patches containing either hemorrhages, microaneurysms, exudates, retinal neovascularization, or normal-appearing structures. These image patches were used to train a single standard convolutional neural network to predict the presence of these five classes. A sliding-window method was used to generate probability maps across the entire image.
Results: The method was validated on the e-Ophtha dataset of 148 whole retinal images for microaneurysms and 47 for exudates. Pixel-wise classification achieved areas under the receiver operating characteristic curve of 0.94 and 0.95, and lesion-wise areas under the precision-recall curve of 0.86 and 0.64, for microaneurysms and exudates, respectively.
Conclusions: Regionally trained convolutional neural networks can generate lesion-specific probability maps able to detect and distinguish between subtle pathologic lesions with only a few hundred training examples per lesion.
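A minimal sketch of the sliding-window step, scoring a patch at each location and averaging the scores into a whole-image probability map, follows; the patch size, stride, and stand-in scorer are assumptions.

```python
import numpy as np

def sliding_window_map(image, score_patch, ps=64, stride=16):
    """Build a lesion-probability map by scoring ps x ps patches at each stride."""
    h, w = image.shape[:2]
    prob = np.zeros((h, w))
    count = np.zeros((h, w))
    for r in range(0, h - ps + 1, stride):
        for c in range(0, w - ps + 1, stride):
            p = score_patch(image[r:r + ps, c:c + ps])
            prob[r:r + ps, c:c + ps] += p       # accumulate overlapping scores
            count[r:r + ps, c:c + ps] += 1
    return prob / np.maximum(count, 1)          # average where windows overlap

toy_scorer = lambda patch: float(patch.mean())  # stand-in for a trained CNN
fundus = np.random.rand(256, 256, 3)
pmap = sliding_window_map(fundus[..., 1], toy_scorer)  # green channel only
print(pmap.shape, pmap.max())
```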
Affiliation(s)
- Carson Lam: Department of Biomedical Data Science, Stanford University, Stanford, California, United States; Department of Ophthalmology, Santa Clara Valley Medical Center, San Jose, California, United States
- Caroline Yu: Stanford University School of Medicine, Stanford, California, United States
- Laura Huang: Department of Ophthalmology, Stanford University School of Medicine, Stanford, California, United States
- Daniel Rubin: Department of Biomedical Data Science, Stanford University, Stanford, California, United States; Department of Ophthalmology, Stanford University School of Medicine, Stanford, California, United States; Department of Radiology, Stanford University School of Medicine, Stanford, California, United States
128
Khojasteh P, Aliahmad B, Arjunan SP, Kumar DK. Introducing a Novel Layer in Convolutional Neural Network for Automatic Identification of Diabetic Retinopathy. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2018;2018:5938-5941. [PMID: 30441688] [DOI: 10.1109/embc.2018.8513606] [Citation(s) in RCA: 4]
Abstract
Convolutional neural networks have been widely used for identifying diabetic retinopathy in color fundus images. For this application, we propose a novel convolutional neural network architecture that embeds a preprocessing layer ahead of the first convolutional layer to increase the performance of the classifier. Two image enhancement techniques, contrast enhancement and contrast-limited adaptive histogram equalization (CLAHE), were separately embedded in the proposed layer and the results were compared. For identification of exudates, hemorrhages, and microaneurysms, the proposed framework achieved total accuracies of 87.6% and 83.9% with the contrast enhancement and CLAHE layers, respectively, whereas the total accuracy of the convolutional neural network alone, without the preprocessing layer, was 81.4%. The new architecture with the proposed preprocessing layer therefore improved the performance of the convolutional neural network.
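The CLAHE operation that the proposed preprocessing layer embeds can be sketched with OpenCV as below; applying it to the green channel and the clip/tile settings are common defaults assumed here, not values taken from the paper.

```python
import numpy as np
import cv2

def clahe_preprocess(fundus_bgr, clip=2.0, tiles=(8, 8)):
    """Enhance fundus contrast before feeding patches to the CNN."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    green = fundus_bgr[:, :, 1]        # green channel carries most lesion contrast
    return clahe.apply(green)

img = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)  # stand-in fundus image
enhanced = clahe_preprocess(img)
print(enhanced.shape, enhanced.dtype)
```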
129
130
Grewal PS, Oloumi F, Rubin U, Tennant MTS. Deep learning in ophthalmology: a review. Can J Ophthalmol 2018;53:309-313. [PMID: 30119782] [DOI: 10.1016/j.jcjo.2018.04.019] [Citation(s) in RCA: 54]
Abstract
Deep learning is an emerging technology with numerous potential applications in ophthalmology. Deep learning tools have been applied to different diagnostic modalities, including digital photographs, optical coherence tomography, and visual fields, and have demonstrated utility in assessing various disease processes, including cataracts, glaucoma, age-related macular degeneration, and diabetic retinopathy. Deep learning techniques are evolving rapidly and will become more integrated into ophthalmic care. This article reviews the current evidence for deep learning in ophthalmology and discusses future applications as well as potential drawbacks.
Affiliation(s)
- Parampal S Grewal: Department of Ophthalmology and Visual Sciences, University of Alberta, Edmonton, Alta.
- Uriel Rubin: Department of Ophthalmology and Visual Sciences, University of Alberta, Edmonton, Alta.
- Matthew T S Tennant: Department of Ophthalmology and Visual Sciences, University of Alberta, Edmonton, Alta.
131
Khomri B, Christodoulidis A, Djerou L, Babahenini MC, Cheriet F. Particle swarm optimization method for small retinal vessels detection on multiresolution fundus images. Journal of Biomedical Optics 2018;23:1-13. [PMID: 29749141] [DOI: 10.1117/1.jbo.23.5.056004] [Citation(s) in RCA: 4]
Abstract
Retinal vessel segmentation plays an important role in the diagnosis of eye diseases and is considered one of the most challenging tasks in computer-aided diagnosis (CAD) systems. The main goal of this study was to propose a method for blood-vessel segmentation that can deal with detecting vessels of varying diameters in high- and low-resolution fundus images. We propose using the particle swarm optimization (PSO) algorithm to improve the multiscale line detection (MSLD) method: PSO is applied to find the best arrangement of scales in the MSLD method and to handle the problem of multiscale response recombination. The performance of the proposed method was evaluated on two low-resolution (DRIVE and STARE) and one high-resolution (HRF) fundus image datasets, including healthy (H) and diabetic retinopathy (DR) cases. The proposed approach improved the sensitivity rate over the MSLD by 4.7% for the DRIVE dataset and by 1.8% for the STARE dataset. For the high-resolution dataset, the proposed approach achieved an 87.09% sensitivity rate, whereas the MSLD method achieved 82.58% at the same specificity level. When only the smallest vessels were considered, the proposed approach improved the sensitivity rate by 11.02% for the healthy cases and by 4.42% for the diabetic cases. Integrating the proposed method in a comprehensive CAD system for DR screening would reduce false positives due to missed small vessels being misclassified as red lesions.
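A minimal particle swarm optimization loop of the kind used to arrange and recombine the multiscale line-detector responses is sketched below; because the real objective is defined over labeled fundus data, a stand-in quadratic objective is used here, and all PSO hyperparameters are assumptions.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))      # candidate scale weights
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()    # best solution found so far
    return gbest

# Stand-in objective: in the paper's setting this would score vessel segmentations
# produced by a weighted combination of multiscale line-detector responses.
best = pso(lambda wts: float(np.sum((wts - 0.3) ** 2)), dim=4)
print("best weights:", np.round(best, 3))
```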
Affiliation(s)
- Bilal Khomri: Univ. de Biskra, Algeria; Ecole Polytechnique de Montréal, Canada
132
He JY, Wu X, Jiang YG, Peng Q, Jain R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE Transactions on Image Processing 2018;27:2379-2392. [PMID: 29470172] [DOI: 10.1109/tip.2018.2801119] [Citation(s) in RCA: 74]
Abstract
As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity and seriously threatens human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection; unfortunately, it remains a challenging task. In recent years, deep convolutional neural networks (CNN) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models the visual appearance and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNNs, an edge extraction network and a hookworm classification network, are seamlessly integrated in the proposed framework, which avoids edge-feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network with the feature maps from the hookworm classification network, leading to enhanced feature maps emphasizing the tubular regions. Experiments conducted on one of the largest WCE datasets demonstrate the effectiveness of the proposed hookworm detection framework, which significantly outperforms the state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application.
133
Humpire-Mamani GE, Setio AAA, van Ginneken B, Jacobs C. Efficient organ localization using multi-label convolutional neural networks in thorax-abdomen CT scans. Phys Med Biol 2018;63:085003. [PMID: 29512516] [DOI: 10.1088/1361-6560/aab4b3] [Citation(s) in RCA: 18]
Abstract
Automatic localization of organs and other structures in medical images is an important preprocessing step that can improve and speed up other algorithms such as organ segmentation, lesion detection, and registration. This work presents an efficient method for simultaneous localization of multiple structures in 3D thorax-abdomen CT scans. Our approach predicts the location of multiple structures using a single multi-label convolutional neural network for each orthogonal view. Each network takes extra slices around the current slice as input to provide extra context. A sigmoid layer is used to perform multi-label classification. The output of the three networks is subsequently combined to compute a 3D bounding box for each structure. We used our approach to locate 11 structures of interest. The neural network was trained and evaluated on a large set of 1884 thorax-abdomen CT scans from patients undergoing oncological workup. Reference bounding boxes were annotated by human observers. The performance of our method was evaluated by computing the wall distance to the reference bounding boxes. The bounding boxes annotated by the first human observer were used as the reference standard for the test set. Using the best configuration, we obtained an average wall distance of [Formula: see text] mm in the test set. The second human observer achieved [Formula: see text] mm. For all structures, the results were better than those reported in previously published studies. In conclusion, we proposed an efficient method for the accurate localization of multiple organs. Our method uses multiple slices as input to provide more context around the slice under analysis, and we have shown that this improves performance. This method can easily be adapted to handle more organs.
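A minimal sketch of the per-view building block, a CNN with one sigmoid output per structure so that a single 2D slice can be positive for several organs at once, together with the reduction of per-slice outputs to a slice range along one axis, follows; the sizes, the 0.5 threshold, and the data are assumptions.

```python
import numpy as np
import tensorflow as tf

N_STRUCTURES = 11
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),             # slice plus neighbors for context
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(N_STRUCTURES, activation="sigmoid"),  # multi-label output
])
model.compile(optimizer="adam", loss="binary_crossentropy")

def extent_from_slices(per_slice_probs, t=0.5):
    """Convert per-slice presence probabilities for one organ into a slice range."""
    present = np.where(per_slice_probs > t)[0]
    return (int(present.min()), int(present.max())) if present.size else None

slices = np.random.rand(40, 64, 64, 3).astype("float32")  # stand-in CT slices
probs = model.predict(slices, verbose=0)                   # (40, 11) probabilities
print("organ 0 slice extent:", extent_from_slices(probs[:, 0]))
```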
Affiliation(s)
- Gabriel Efrain Humpire-Mamani: Department of Radiology and Nuclear Medicine, Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, Netherlands
134
Abstract
Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.
Affiliation(s)
- Timothy Kline
- Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
135
Intelligent Image Processing System for Detection and Segmentation of Regions of Interest in Retinal Images. Symmetry (Basel) 2018. [DOI: 10.3390/sym10030073] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
136
Cao C, Liu F, Tan H, Song D, Shu W, Li W, Zhou Y, Bo X, Xie Z. Deep Learning and Its Applications in Biomedicine. GENOMICS, PROTEOMICS & BIOINFORMATICS 2018; 16:17-32. [PMID: 29522900 PMCID: PMC6000200 DOI: 10.1016/j.gpb.2017.07.003] [Citation(s) in RCA: 259] [Impact Index Per Article: 37.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/18/2017] [Revised: 06/18/2017] [Accepted: 07/05/2017] [Indexed: 12/19/2022]
Abstract
Advances in biological and medical technologies have been providing explosive volumes of biological and physiological data, such as medical images, electroencephalography, and genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, examples of deep learning applications are presented, including medical image classification, genomic sequence analysis, and protein structure classification and prediction. Finally, we offer our perspectives on future directions in the field of deep learning.
Affiliation(s)
- Chensi Cao
- CapitalBio Corporation, Beijing 102206, China
- Feng Liu
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China
- Hai Tan
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China
- Deshou Song
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China
- Wenjie Shu
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China
- Weizhong Li
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 500040, China
- Yiming Zhou
- CapitalBio Corporation, Beijing 102206, China; Department of Biomedical Engineering, Medical Systems Biology Research Center, Tsinghua University School of Medicine, Beijing 100084, China
- Xiaochen Bo
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China
- Zhi Xie
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China
137
Orlando JI, Prokofyeva E, Del Fresno M, Blaschko MB. An ensemble deep learning based approach for red lesion detection in fundus images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 153:115-127. [PMID: 29157445 DOI: 10.1016/j.cmpb.2017.10.017] [Citation(s) in RCA: 83] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2017] [Revised: 09/06/2017] [Accepted: 10/12/2017] [Indexed: 05/23/2023]
Abstract
BACKGROUND AND OBJECTIVES Diabetic retinopathy (DR) is one of the leading causes of preventable blindness in the world. Its earliest signs are red lesions, a general term that groups both microaneurysms (MAs) and hemorrhages (HEs). In daily clinical practice, these lesions are manually detected by physicians using fundus photographs. However, this task is tedious and time consuming, and requires intensive effort due to the small size of the lesions and their lack of contrast. Computer-assisted diagnosis of DR based on red lesion detection is being actively explored because it improves both clinicians' consistency and accuracy. Moreover, it provides comprehensive feedback that is easy for physicians to assess. Several methods for detecting red lesions have been proposed in the literature, most of them based on characterizing lesion candidates using hand crafted features and classifying them into true or false positive detections. Deep learning based approaches, by contrast, are scarce in this domain due to the high cost of annotating the lesions manually. METHODS In this paper we propose a novel method for red lesion detection that combines deep learned features with domain knowledge. Features learned by a convolutional neural network (CNN) are augmented with hand crafted features, and the resulting ensemble descriptor vector is then used to identify true lesion candidates with a Random Forest classifier. RESULTS We empirically observed that combining both sources of information significantly improves results with respect to using either approach separately. Furthermore, our method reported the highest per-lesion performance on DIARETDB1 and e-ophtha, and the highest performance for screening and need for referral on MESSIDOR compared to a second human expert. CONCLUSIONS The results highlight that integrating manually engineered approaches with deep learned features improves performance when the networks are trained from lesion-level annotated data. An open source implementation of our system is publicly available at https://github.com/ignaciorlando/red-lesion-detection.
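A minimal sketch of the ensemble idea, assuming the feature extraction is already done: deep and hand-crafted descriptors are concatenated per candidate and passed to scikit-learn's RandomForestClassifier. The shapes and the dummy data are illustrative, not the paper's configuration.

```python
# Sketch: concatenate CNN-learned and hand-crafted descriptors, then classify
# lesion candidates with a Random Forest. Feature extraction is stubbed out.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_candidates = 500
cnn_features = rng.normal(size=(n_candidates, 128))   # deep descriptors per candidate
hand_features = rng.normal(size=(n_candidates, 20))   # e.g. shape/intensity measures
labels = rng.integers(0, 2, size=n_candidates)        # true lesion vs. false positive

X = np.hstack([cnn_features, hand_features])          # the ensemble descriptor vector
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
lesion_prob = clf.predict_proba(X)[:, 1]              # per-candidate lesion probability
```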
Affiliation(s)
- José Ignacio Orlando
- Pladema Institute, UNCPBA, Gral. Pinto 399, Tandil, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, CONICET, Argentina
- Elena Prokofyeva
- Scientific Institute of Public Health (WIV-ISP), Brussels, Belgium; Federal Agency for Medicines and Health Products (FAMHP), Brussels, Belgium
- Mariana Del Fresno
- Pladema Institute, UNCPBA, Gral. Pinto 399, Tandil, Argentina; Comisión de Investigaciones Científicas de la Provincia de Buenos Aires, CIC-PBA, Buenos Aires, Argentina
138
Leveraging uncertainty information from deep neural networks for disease detection. Sci Rep 2017; 7:17816. [PMID: 29259224 PMCID: PMC5736701 DOI: 10.1038/s41598-017-17876-z] [Citation(s) in RCA: 144] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2017] [Accepted: 12/01/2017] [Indexed: 12/19/2022] Open
Abstract
Deep learning (DL) has revolutionized the field of computer vision and image processing. In medical imaging, algorithmic solutions based on DL have been shown to achieve high performance on tasks that previously required medical experts. However, DL-based solutions for disease detection have been proposed without methods to quantify and control the uncertainty in their decisions. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images and show that they capture uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty-informed decision referral can improve diagnostic performance. Experiments across different networks, tasks and datasets show robust generalization. Depending on network capacity and task/dataset difficulty, we surpass the 85% sensitivity and 80% specificity recommended by the NHS when referring 0-20% of the most uncertain decisions for further inspection. We analyse causes of uncertainty by relating intuitions from 2D visualizations to the high-dimensional image space. While uncertainty is sensitive to clinically relevant cases, sensitivity to unfamiliar data samples is task dependent but can be rendered more robust.
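A minimal sketch of the dropout-based uncertainty idea (Monte Carlo dropout): keep dropout stochastic at test time, average several forward passes, and refer the most uncertain cases. The toy model, the number of passes, and the 20% referral fraction are assumptions for illustration.

```python
# Sketch of Monte Carlo dropout with uncertainty-informed decision referral.
import torch
import torch.nn as nn

# Toy binary classifier with dropout; stands in for the diagnostic network.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Dropout(p=0.5),
                      nn.Linear(32, 1), nn.Sigmoid())
model.train()                         # train mode keeps dropout stochastic

x = torch.randn(100, 64)              # 100 cases with dummy 64-D features
with torch.no_grad():
    samples = torch.stack([model(x).squeeze(-1) for _ in range(50)])  # (T, N)

p_mean = samples.mean(dim=0)          # predictive disease probability per case
p_std = samples.std(dim=0)            # dropout-based uncertainty per case

# Refer the 20% most uncertain cases to a human grader; automate the rest.
n_refer = int(0.2 * len(x))
referred = torch.topk(p_std, n_refer).indices
keep = torch.ones(len(x), dtype=torch.bool)
keep[referred] = False
automated_decisions = p_mean[keep] >= 0.5
```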
139
Tan JH, Fujita H, Sivaprasad S, Bhandary SV, Rao AK, Chua KC, Acharya UR. Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.08.050] [Citation(s) in RCA: 149] [Impact Index Per Article: 18.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
140
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026 DOI: 10.1016/j.media.2017.07.005] [Citation(s) in RCA: 4750] [Impact Index Per Article: 593.8] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 07/24/2017] [Accepted: 07/25/2017] [Indexed: 02/07/2023]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
141
Ordóñez PF, Cepeda CM, Garrido J, Chakravarty S. Classification of images based on small local features: a case applied to microaneurysms in fundus retina images. J Med Imaging (Bellingham) 2017; 4:041309. [PMID: 29201938 DOI: 10.1117/1.jmi.4.4.041309] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2017] [Accepted: 10/25/2017] [Indexed: 11/14/2022] Open
Abstract
Convolutional neural networks (CNNs), the state of the art in image classification, have proven to be as effective as an ophthalmologist at detecting referable diabetic retinopathy. Occupying only [Formula: see text] of the total image, microaneurysms are early lesions in diabetic retinopathy that are difficult to classify. A model that includes two CNNs with different input image sizes, [Formula: see text] and [Formula: see text], was developed. These models were trained using the Kaggle and Messidor datasets and tested independently against the Kaggle dataset, showing a sensitivity of [Formula: see text], a specificity of [Formula: see text], and an area under the receiver operating characteristic curve of [Formula: see text]. Furthermore, combining these trained models reduced false positives on complete images by about 50% and achieved a sensitivity of 96% when tested against the DiaRetDB1 dataset. In addition, a powerful image preprocessing procedure was implemented, improving not only the images used for annotation but also reducing the number of epochs needed during training. Finally, a feedback method was developed that increased the accuracy of the CNN [Formula: see text] input model.
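One simple way the two input-size models could be combined to cut false positives is consensus voting, sketched below under assumed per-candidate probabilities; the paper does not specify this exact rule.

```python
# Hedged sketch: flag a microaneurysm only when both input-scale models agree.
import numpy as np

def combine(p_small: np.ndarray, p_large: np.ndarray, t: float = 0.5) -> np.ndarray:
    """p_small/p_large: per-candidate probabilities from the two models."""
    return (p_small >= t) & (p_large >= t)    # consensus voting

rng = np.random.default_rng(0)
detections = combine(rng.random(1000), rng.random(1000))   # boolean mask
```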
Affiliation(s)
- Pablo F Ordóñez
- Kennesaw State University, College of Computing and Software Engineering, Marietta, Georgia, United States
- Carlos M Cepeda
- Kennesaw State University, College of Computing and Software Engineering, Marietta, Georgia, United States
- Jose Garrido
- Kennesaw State University, College of Computing and Software Engineering, Marietta, Georgia, United States
- Sumit Chakravarty
- Kennesaw State University, Department of Electrical Engineering, Marietta, Georgia, United States
142
Liefers B, Venhuizen FG, Schreur V, van Ginneken B, Hoyng C, Fauser S, Theelen T, Sánchez CI. Automatic detection of the foveal center in optical coherence tomography. BIOMEDICAL OPTICS EXPRESS 2017; 8:5160-5178. [PMID: 29188111 PMCID: PMC5695961 DOI: 10.1364/boe.8.005160] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/11/2017] [Revised: 10/11/2017] [Accepted: 10/11/2017] [Indexed: 05/07/2023]
Abstract
We propose a method for automatic detection of the foveal center in optical coherence tomography (OCT). The method is based on a pixel-wise classification of all pixels in an OCT volume using a fully convolutional neural network (CNN) with dilated convolution filters. The CNN architecture contains anisotropic dilated filters and a shortcut connection, and has been trained using a dynamic training procedure in which the network identifies its own relevant training samples. The performance of the proposed method is evaluated on a data set of 400 OCT scans of patients affected by age-related macular degeneration (AMD) at different severity levels. For 391 scans (97.75%) the method identified the foveal center within 750 μm of a human reference, with a mean (± SD) distance of 71 μm ± 107 μm. Two independent observers also annotated the foveal center, with mean distances to the reference of 57 μm ± 84 μm and 56 μm ± 80 μm, respectively. Furthermore, we evaluate variations of the proposed network architecture and training procedure, providing insight into the characteristics that led to the demonstrated performance of the proposed method.
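The key architectural ingredient, dilated convolution, can be sketched as follows; the anisotropic dilation factors and channel counts are assumptions, not the authors' configuration.

```python
# Sketch of an anisotropic dilated convolution block: the dilation enlarges
# the receptive field (more along one axis) without adding parameters.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Dilate more along width than height; padding preserves spatial size.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              dilation=(1, 4), padding=(1, 4))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

# A 3x3 kernel with dilation (1, 4) covers a 3x9 neighborhood.
block = DilatedBlock(1, 16)
bscan = torch.randn(1, 1, 256, 512)      # a dummy OCT B-scan
out = block(bscan)                        # same spatial size: (1, 16, 256, 512)
```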
Affiliation(s)
- Bart Liefers
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Freerk G. Venhuizen
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Vivian Schreur
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Carel Hoyng
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Sascha Fauser
- Roche Pharma Research and Early Development, F. Hoffmann-La Roche Ltd, Basel, Switzerland
- Cologne University Eye Clinic, Cologne, Germany
- Thomas Theelen
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Clara I. Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
143
Application of deep convolutional neural network for automated detection of myocardial infarction using ECG signals. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.06.027] [Citation(s) in RCA: 474] [Impact Index Per Article: 59.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
144
Kooi T, Karssemeijer N. Classifying symmetrical differences and temporal change for the detection of malignant masses in mammography using deep neural networks. J Med Imaging (Bellingham) 2017; 4:044501. [PMID: 29021992 PMCID: PMC5633751 DOI: 10.1117/1.jmi.4.4.044501] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2017] [Accepted: 09/12/2017] [Indexed: 01/27/2023] Open
Abstract
We investigate the addition of symmetry and temporal context information to a deep convolutional neural network (CNN) with the purpose of detecting malignant soft tissue lesions in mammography. We employ a simple linear mapping that takes the location of a mass candidate and maps it to either the contralateral or prior mammogram, and regions of interest (ROIs) are extracted around each location. Two different architectures are subsequently explored: (1) a fusion model employing two datastreams, where both ROIs are fed to the network during training and testing, and (2) a stagewise approach, where a single-ROI CNN is trained on the primary image and subsequently used as a feature extractor for both primary and contralateral or prior ROIs; a "shallow" gradient boosted tree classifier is then trained on the concatenation of these features and used to classify the joint representation. The baseline yielded an AUC of 0.87 with confidence interval [0.853, 0.893]. For the analysis of symmetrical differences, the first architecture, where both primary and contralateral patches are presented during training, obtained an AUC of 0.895 with confidence interval [0.877, 0.913], and the second architecture, where a new classifier is retrained on the concatenated features, obtained an AUC of 0.88 with confidence interval [0.859, 0.9]. We found a significant difference between the first architecture and the baseline at high specificity, with [Formula: see text]. When using the same architectures to analyze temporal change, we obtained an AUC of 0.884 with confidence interval [0.865, 0.902] for the first architecture and an AUC of 0.879 with confidence interval [0.858, 0.898] for the second. Although improvements for temporal analysis were consistent, they were not found to be significant. These results show that our proposed method is promising, and we suspect performance can be greatly improved as more temporal data become available.
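A hedged sketch of the stagewise (second) architecture: a shared CNN embeds the primary and the contralateral or prior ROI, the embeddings are concatenated, and a shallow gradient boosted classifier scores the pair. The CNN is stubbed out and all sizes are illustrative.

```python
# Sketch: shared-CNN embeddings of paired ROIs, concatenated and scored by
# a shallow gradient boosted tree classifier.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def cnn_embed(rois: np.ndarray) -> np.ndarray:
    """Stand-in for the shared ROI CNN; returns a 256-D embedding per ROI."""
    return rng.normal(size=(rois.shape[0], 256))

primary = np.zeros((400, 64, 64))     # candidate ROIs (dummy pixels)
reference = np.zeros((400, 64, 64))   # contralateral or prior ROIs
y = rng.integers(0, 2, size=400)      # dummy malignancy labels

X = np.hstack([cnn_embed(primary), cnn_embed(reference)])   # joint representation
gbt = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)
malignancy_score = gbt.predict_proba(X)[:, 1]
```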
Affiliation(s)
- Thijs Kooi
- RadboudUMC Nijmegen, Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Nijmegen, The Netherlands
- Nico Karssemeijer
- RadboudUMC Nijmegen, Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Nijmegen, The Netherlands
145
Welikala RA, Foster PJ, Whincup PH, Rudnicka AR, Owen CG, Strachan DP, Barman SA. Automated arteriole and venule classification using deep learning for retinal images from the UK Biobank cohort. Comput Biol Med 2017; 90:23-32. [PMID: 28917120 DOI: 10.1016/j.compbiomed.2017.09.005] [Citation(s) in RCA: 56] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2017] [Revised: 09/05/2017] [Accepted: 09/05/2017] [Indexed: 01/12/2023]
Abstract
The morphometric characteristics of the retinal vasculature are associated with future risk of many systemic and vascular diseases. However, analysis of data from large population-based studies is needed to help resolve uncertainties in some of these associations. This requires automated systems that extract quantitative measures of vessel morphology from large numbers of retinal images. Associations between retinal vessel morphology and disease precursors/outcomes may be similar or opposing for arterioles and venules; therefore, accurate detection of the vessel type is an important element of such automated systems. This paper presents a deep learning approach for the automatic classification of arterioles and venules across the entire retinal image, including vessels located at the optic disc. It comprises a convolutional neural network whose architecture contains six learned layers: three convolutional and three fully-connected. Complex patterns are learnt automatically from the data, avoiding the use of hand crafted features. The method is developed and evaluated using 835,914 centreline pixels derived from 100 retinal images selected from the 135,867 retinal images obtained at the UK Biobank (a large population-based cohort study of middle-aged and older adults) baseline examination. This is a challenging dataset with respect to image quality, so arteriole/venule classification must be highly robust. The method achieves a significant increase in accuracy of 8.1% over the baseline method, resulting in an arteriole/venule classification accuracy of 86.97% (per pixel) over the entire retinal image.
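A sketch of the stated six-layer design (three convolutional, three fully-connected) applied to patches around centreline pixels; the patch size, channel counts, and kernel sizes are assumptions, not the authors' exact configuration.

```python
# Sketch of a 3-conv / 3-FC network classifying centreline-pixel patches
# as arteriole vs. venule.
import torch
import torch.nn as nn

av_net = nn.Sequential(
    nn.Conv2d(3, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(128 * 2 * 2, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 2),                    # arteriole vs. venule
)

patches = torch.randn(16, 3, 25, 25)     # RGB patches centred on vessel pixels
logits = av_net(patches)                 # (16, 2)
```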
Affiliation(s)
- R A Welikala
- School of Computer Science and Mathematics, Kingston University, Surrey, KT1 2EE, United Kingdom
- P J Foster
- NIHR Biomedical Research Centre, Moorfields Eye Hospital, London, EC1V 2PD, United Kingdom; UCL Institute of Ophthalmology, London, EC1V 9EL, United Kingdom
- P H Whincup
- Population Health Research Institute, St. George's, University of London, London, SW17 0RE, United Kingdom
- A R Rudnicka
- Population Health Research Institute, St. George's, University of London, London, SW17 0RE, United Kingdom
- C G Owen
- Population Health Research Institute, St. George's, University of London, London, SW17 0RE, United Kingdom
- D P Strachan
- Population Health Research Institute, St. George's, University of London, London, SW17 0RE, United Kingdom
- S A Barman
- School of Computer Science and Mathematics, Kingston University, Surrey, KT1 2EE, United Kingdom
146
Automated detection of arrhythmias using different intervals of tachycardia ECG segments with convolutional neural network. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.04.012] [Citation(s) in RCA: 419] [Impact Index Per Article: 52.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
147
Lee H, Tajmir S, Lee J, Zissen M, Yeshiwas BA, Alkasab TK, Choy G, Do S. Fully Automated Deep Learning System for Bone Age Assessment. J Digit Imaging 2017; 30:427-441. [PMID: 28275919 PMCID: PMC5537090 DOI: 10.1007/s10278-017-9955-8] [Citation(s) in RCA: 198] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
Skeletal maturity progresses through discrete phases, a fact that is used routinely in pediatrics, where bone age assessments (BAAs) are compared to chronological age in the evaluation of endocrine and metabolic disorders. While central to many disease evaluations, little has changed to improve this tedious process since its introduction in 1950. In this study, we propose a fully automated deep learning pipeline to segment a region of interest, standardize and preprocess input radiographs, and perform BAA. Our models use an ImageNet-pretrained, fine-tuned convolutional neural network (CNN) to achieve 57.32% and 61.40% accuracies for the female and male cohorts on our held-out test images. Female test radiographs were assigned a BAA within 1 year of the reference 90.39% of the time and within 2 years 98.11% of the time; male test radiographs were within 1 year 94.18% and within 2 years 99.00% of the time. Using the input occlusion method, attention maps were created that reveal which features the trained model uses to perform BAA; these correspond to the features human experts look at when manually performing BAA. Finally, the fully automated BAA system was deployed in the clinical environment as a decision support system, enabling more accurate and efficient BAAs at a much faster interpretation time (<2 s) than the conventional method.
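The input occlusion method used for the attention maps can be sketched as follows: slide a masking patch over the radiograph and record how much the model's score drops. The patch size, stride, and fill value are assumptions for illustration.

```python
# Sketch of input-occlusion attention maps: score drop under a sliding mask
# indicates how important each region is to the model's prediction.
import numpy as np

def occlusion_map(predict, image, patch=16, stride=16, fill=0.5):
    """predict: callable returning the model's score for one image (H, W)."""
    h, w = image.shape
    base = predict(image)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill   # mask one region
            heat[i, j] = base - predict(occluded)       # score drop = importance
    return heat

# Example with a toy "model" that just sums a central region:
img = np.random.default_rng(0).random((128, 128))
heat = occlusion_map(lambda im: im[32:96, 32:96].sum(), img)
```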
Affiliation(s)
- Hyunkwang Lee
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
- Shahein Tajmir
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
- Jenny Lee
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
- Maurice Zissen
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
- Bethel Ayele Yeshiwas
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
- Tarik K. Alkasab
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
- Garry Choy
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
- Synho Do
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
148
Cunefare D, Fang L, Cooper RF, Dubra A, Carroll J, Farsiu S. Open source software for automatic detection of cone photoreceptors in adaptive optics ophthalmoscopy using convolutional neural networks. Sci Rep 2017; 7:6620. [PMID: 28747737 PMCID: PMC5529414 DOI: 10.1038/s41598-017-07103-0] [Citation(s) in RCA: 59] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Accepted: 06/21/2017] [Indexed: 01/07/2023] Open
Abstract
Imaging with an adaptive optics scanning light ophthalmoscope (AOSLO) enables direct visualization of the cone photoreceptor mosaic in the living human retina. Quantitative analysis of AOSLO images typically requires manual grading, which is time consuming and subjective; thus, automated algorithms are highly desirable. Previously developed automated methods are often reliant on ad hoc rules that may not be transferable between different imaging modalities or retinal locations. In this work, we present a convolutional neural network (CNN) based method for cone detection that learns features of interest directly from training data. This cone-identifying algorithm was trained and validated on separate data sets of confocal and split detector AOSLO images, with results showing performance that closely mimics the gold standard manual process. Further, without any need for algorithmic modifications for a specific AOSLO imaging system, our fully automated multi-modality CNN-based cone detection method produced results comparable to previous automatic cone segmentation methods that relied on ad hoc rules for different applications. We have made free open-source software for the proposed method and the corresponding training and testing datasets available online.
Affiliation(s)
- David Cunefare
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
- Leyuan Fang
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
- Robert F Cooper
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Psychology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA, 94303, USA
- Joseph Carroll
- Department of Biomedical Engineering, Marquette University, Milwaukee, WI, 53233, USA; Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, 53226, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC, 27710, USA
149
Deep tessellated retinal image detection using Convolutional Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:676-680. [PMID: 29059963 DOI: 10.1109/embc.2017.8036915] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Tessellation in the fundus is not only a visible feature of age-related and myopic maculopathy but also confounds retinal vessel segmentation. The detection of tessellated images is therefore an essential preprocessing step in retinal image analysis. In this work, we propose a convolutional neural network model for detecting tessellated images. The input to the model is a pre-processed fundus image, and the output indicates whether the photograph shows tessellation. A database of 12,000 colour retinal images was collected to evaluate the classification performance. The best tessellation classifier achieves an accuracy of 97.73% and an AUC of 0.9659 using a pretrained GoogLeNet and transfer learning.
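A hedged sketch of the transfer-learning recipe with a pretrained GoogLeNet in PyTorch: freeze the ImageNet features and train only a new two-class head. The optimizer, learning rate, and freezing strategy are assumptions; the paper does not detail its fine-tuning setup.

```python
# Sketch: transfer learning with an ImageNet-pretrained GoogLeNet for
# tessellated vs. non-tessellated fundus image classification.
import torch
import torch.nn as nn
from torchvision import models

net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
for p in net.parameters():
    p.requires_grad = False                  # keep pretrained features fixed
net.fc = nn.Linear(net.fc.in_features, 2)    # new, trainable tessellation head

optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

batch = torch.randn(8, 3, 224, 224)          # preprocessed fundus images (dummy)
target = torch.randint(0, 2, (8,))           # tessellation labels (dummy)
loss = criterion(net(batch), target)
loss.backward()                              # gradients flow only to the head
optimizer.step()
```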
150
Venhuizen FG, van Ginneken B, Liefers B, van Grinsven MJ, Fauser S, Hoyng C, Theelen T, Sánchez CI. Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks. BIOMEDICAL OPTICS EXPRESS 2017; 8:3292-3316. [PMID: 28717568 PMCID: PMC5508829 DOI: 10.1364/boe.8.003292] [Citation(s) in RCA: 78] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/13/2017] [Revised: 05/22/2017] [Accepted: 06/03/2017] [Indexed: 05/18/2023]
Abstract
We developed a fully automated system using a convolutional neural network (CNN) for total retina segmentation in optical coherence tomography (OCT) that is robust to the presence of severe retinal pathology. A generalized U-net network architecture was introduced to include the large context needed to account for large retinal changes. The proposed algorithm outperformed two available algorithms both qualitatively and quantitatively. The algorithm accurately estimated macular thickness with an error of 14.0 ± 22.1 µm, substantially lower than the errors obtained with the other algorithms (42.9 ± 116.0 µm and 27.1 ± 69.3 µm, respectively). These results highlight the proposed algorithm's capability of modeling the wide variability in retinal appearance, yielding robust and reliable retina segmentation even in severe pathological cases.
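A toy U-net-style encoder-decoder with one skip connection, to make the segmentation setup concrete; this is a deliberately small stand-in, not the "generalized U-net" of the paper, and all layer sizes are assumptions.

```python
# Sketch: a tiny U-net-like model producing a per-pixel retina mask (logits).
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = block(1, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)     # per-pixel retina logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.up(e2)
        d = self.dec(torch.cat([d, e1], dim=1))   # skip connection keeps detail
        return self.head(d)

mask_logits = TinyUNet()(torch.randn(1, 1, 256, 512))   # (1, 1, 256, 512)
```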
Affiliation(s)
- Freerk G. Venhuizen
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Bart Liefers
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Mark J.J.P. van Grinsven
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Sascha Fauser
- Roche Pharma Research and Early Development, F. Hoffmann-La Roche Ltd, Basel, Switzerland
- Cologne University Eye Clinic, Cologne, Germany
- Carel Hoyng
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Thomas Theelen
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands
- Clara I. Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, the Netherlands
- Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands