251. Merveille O, Naegel B, Talbot H, Passat N. nD Variational Restoration of Curvilinear Structures With Prior-Based Directional Regularization. IEEE Transactions on Image Processing 2019; 28:3848-3859. [PMID: 30835221] [DOI: 10.1109/tip.2019.2901706]
Abstract
Curvilinear structure restoration in image processing is a difficult task, which can be compounded when these structures are thin, i.e., when their smallest dimension is close to the resolution of the sensor. Many recent restoration methods use a local gradient-based regularization term as prior, assuming gradient sparsity. An isotropic gradient operator is typically not suitable for thin curvilinear structures, since their gradients are not sparse. In this paper, we propose a mixed gradient operator that combines a standard gradient in the isotropic image regions and a directional gradient in the regions where specific orientations are likely. In particular, such orientation information can be provided by curvilinear structure detectors (e.g., RORPO or Frangi filters). The proposed mixed gradient operator, which can be viewed as a companion tool of such detectors, is defined in a discrete framework and its formulation/computation holds in any dimension; in other words, it is valid in ℝ^n, n ≥ 1. We show how this mixed gradient can be used to construct image priors that take edge orientation as well as intensity into account, and can then be involved in various image processing tasks while preserving curvilinear structures. Experiments carried out on 2D, 3D, real, and synthetic images illustrate the relevance of the proposed gradient and its use in variational frameworks for both denoising and segmentation tasks.
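A minimal 2D sketch of such an operator, assuming an orientation map (in radians) and a vesselness score in [0, 1] supplied by a detector such as Frangi or RORPO; the function name, the `tau` threshold, and the blending rule are illustrative, not the paper's exact formulation:

```python
import numpy as np

def mixed_gradient(img, orientation, vesselness, tau=0.5):
    """Sketch of a mixed gradient: where a curvilinear structure is likely,
    keep only the derivative taken along the detected direction (sparse for
    thin structures); elsewhere, keep the full isotropic gradient."""
    gy, gx = np.gradient(img.astype(float))                       # isotropic gradient
    gdir = gx * np.cos(orientation) + gy * np.sin(orientation)    # directional derivative
    mask = vesselness > tau                                       # likely-vessel regions
    out = np.stack([gx, gy], axis=0)
    # in oriented regions, retain only the along-structure component
    out[0][mask] = gdir[mask] * np.cos(orientation[mask])
    out[1][mask] = gdir[mask] * np.sin(orientation[mask])
    return out
```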
252. Man Y, Huang Y, Feng J, Li X, Wu F. Deep Q Learning Driven CT Pancreas Segmentation With Geometry-Aware U-Net. IEEE Transactions on Medical Imaging 2019; 38:1971-1980. [PMID: 30998461] [DOI: 10.1109/tmi.2019.2911588]
Abstract
Segmentation of the pancreas is important for medical image analysis, yet it faces great challenges from class imbalance, background distractions, and non-rigid geometrical features. To address these difficulties, we introduce a deep Q network (DQN)-driven approach with a deformable U-Net that accurately segments the pancreas by explicitly interacting with contextual information and extracting anisotropic features. The DQN-based model learns a context-adaptive localization policy to produce a visually tightened and precise bounding box around the pancreas. The deformable U-Net then captures geometry-aware information by learning geometrically deformable filters for feature extraction. Experiments on the NIH dataset validate the effectiveness of the proposed framework for pancreas segmentation.
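The geometry-aware part of such a pipeline can be sketched with off-the-shelf deformable convolutions. The block below, assuming torchvision's `DeformConv2d` and illustrative layer sizes rather than the paper's exact configuration, lets a small convolution predict per-location sampling offsets:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """One geometry-aware block: a plain conv predicts 2*k*k sampling offsets
    per location, which the deformable convolution then uses to sample its
    receptive field. Channel counts here are assumptions."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.dconv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.dconv(x, self.offset(x)))

x = torch.randn(1, 64, 96, 96)    # e.g. a U-Net feature map
y = DeformBlock(64, 128)(x)       # -> torch.Size([1, 128, 96, 96])
```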
253. Multilevel and Multiscale Deep Neural Network for Retinal Blood Vessel Segmentation. Symmetry (Basel) 2019. [DOI: 10.3390/sym11070946]
Abstract
Retinal blood vessel segmentation supports the assessment of many vessel-related disorders such as diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular disease. Vessel segmentation using a convolutional neural network (CNN) has shown higher accuracy in feature extraction and vessel segmentation than classical segmentation algorithms, and a CNN does not need artificial handcrafted features to train the network. In the proposed deep neural network (DNN), an improved pre-processing technique and multilevel/multiscale deep supervision (DS) layers are incorporated for proper segmentation of retinal blood vessels. From the first four layers of the VGG-16 model, multilevel/multiscale deep supervision layers are formed by convolving vessel-specific Gaussian convolutions with two different scale initializations. These layers output activation maps that are capable of learning vessel-specific features at multiple scales, levels, and depths. Furthermore, the receptive field of these maps is increased to obtain symmetric feature maps that provide a refined blood vessel probability map, free from the optic disc, image boundaries, and non-vessel background. The segmented results are tested on the Digital Retinal Images for Vessel Extraction (DRIVE), STructured Analysis of the Retina (STARE), High-Resolution Fundus (HRF), and real-world retinal datasets to evaluate performance. The proposed model achieves sensitivity values of 0.8282, 0.8979, and 0.8655 on the DRIVE, STARE, and HRF datasets, respectively, with acceptable specificity and accuracy.
254. Tmenova O, Martin R, Duong L. CycleGAN for style transfer in X-ray angiography. Int J Comput Assist Radiol Surg 2019; 14:1785-1794. [PMID: 31286396] [DOI: 10.1007/s11548-019-02022-z]
Abstract
PURPOSE We aim to generate angiograms of various vascular structures as a means of data augmentation for learning tasks. The task is to enhance the realism of vessel images generated by an anatomically realistic cardiorespiratory simulator so that they look like real angiograms. METHODS The enhancement is performed by applying the CycleGAN deep network to transfer the style of real angiograms acquired during percutaneous interventions onto a dataset of realistically simulated arteries. RESULTS Cycle consistency was evaluated by comparing an input simulated image with the one obtained after two cycles of image translation. An average structural similarity (SSIM) of 0.948 was obtained on our datasets. Vessel preservation was measured by comparing segmentations of an input image and its corresponding enhanced image using the Dice coefficient. CONCLUSIONS We proposed an application of the CycleGAN deep network for enhancing artificial data as an alternative to classical data augmentation techniques for medical applications, focused in particular on angiogram generation. We discuss success and failure cases, explaining the conditions for realistic data augmentation that respects both the complex physiology of arteries and the varied patterns and textures generated by X-ray angiography.
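The two evaluation steps can be sketched as follows, assuming images are floats in [0, 1] and that binary vessel masks come from some external segmentation step; function names are illustrative:

```python
import numpy as np
from skimage.metrics import structural_similarity

def cycle_consistency_ssim(sim_img, recon_img):
    """SSIM between a simulated angiogram and its reconstruction after a full
    simulated -> real -> simulated CycleGAN round trip."""
    return structural_similarity(sim_img, recon_img, data_range=1.0)

def dice(mask_a, mask_b, eps=1e-8):
    """Dice overlap between binary vessel masks of an input image and its
    style-transferred counterpart, as a check of vessel preservation."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum() + eps)
```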
Affiliation(s)
- Oleksandra Tmenova
- Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada; Taras Shevchenko National University of Kyiv, Volodymyrska St, 60, Kyiv, Ukraine
- Rémi Martin
- Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada
- Luc Duong
- Department of Software and IT Engineering, École de technologie supérieure, 1100 Notre-Dame W., Montreal, Canada
255. Ribalta Lorenzo P, Nalepa J, Bobek-Billewicz B, Wawrzyniak P, Mrukwa G, Kawulok M, Ulrych P, Hayball MP. Segmenting brain tumors from FLAIR MRI using fully convolutional neural networks. Computer Methods and Programs in Biomedicine 2019; 176:135-148. [PMID: 31200901] [DOI: 10.1016/j.cmpb.2019.05.006]
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance imaging (MRI) is an indispensable tool in diagnosing brain-tumor patients. Automated tumor segmentation is being widely researched to accelerate MRI analysis and allow clinicians to plan treatment precisely; accurate delineation of brain tumors is a critical step in assessing their volume, shape, boundaries, and other characteristics. However, it remains a very challenging task due to inherent MR data characteristics and high variability, e.g., in tumor sizes or shapes. We present a new deep learning approach for accurate brain tumor segmentation which can be trained from small and heterogeneous datasets annotated by a human reader (providing high-quality ground-truth segmentation is very costly in practice). METHODS We present a deep learning technique for segmenting brain tumors from fluid attenuation inversion recovery MRI. Our technique exploits fully convolutional neural networks and is equipped with a battery of augmentation techniques that make the algorithm robust against low data quality and the heterogeneity of small training sets. We train our models using only positive (tumorous) examples, due to the limited amount of available data. RESULTS Our algorithm was tested on a set of stage II-IV brain-tumor patients (image data collected using a MAGNETOM Prisma 3T, Siemens). Rigorous experiments, backed up with statistical tests, revealed that our approach outperforms the state-of-the-art approach (utilizing hand-crafted features) in terms of segmentation accuracy, and offers very fast training and near-instant inference (analysis of an image takes less than a second). Building our deep model is 1.3 times faster than extracting features for extremely randomized trees, and this training time can be controlled. Finally, we showed that overly aggressive data augmentation may deteriorate model performance, especially in fixed-budget training (with a maximum number of training epochs). CONCLUSIONS Our method outperforms the state-of-the-art method utilizing hand-crafted features. In addition, our deep network can be effectively applied to difficult (small, imbalanced, and heterogeneous) datasets, offers controllable training time, and infers in real time.
Affiliation(s)
- Jakub Nalepa
- Future Processing, Bojkowska 37A, 44-100 Gliwice, Poland; Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Barbara Bobek-Billewicz
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
- Pawel Wawrzyniak
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
- Michal Kawulok
- Future Processing, Bojkowska 37A, 44-100 Gliwice, Poland; Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
- Pawel Ulrych
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland
256. Kitrungrotsakul T, Han XH, Iwamoto Y, Lin L, Foruzan AH, Xiong W, Chen YW. VesselNet: A deep convolutional neural network with multi pathways for robust hepatic vessel segmentation. Comput Med Imaging Graph 2019; 75:74-83. [DOI: 10.1016/j.compmedimag.2019.05.002]
257. Gu X, Wang J, Zhao J, Li Q. Segmentation and suppression of pulmonary vessels in low-dose chest CT scans. Med Phys 2019; 46:3603-3614. [PMID: 31240721] [DOI: 10.1002/mp.13648]
Abstract
PURPOSE The suppression of pulmonary vessels in chest computed tomography (CT) images can enhance the conspicuity of lung nodules, thereby improving the detection rate of early lung cancer. This study aimed to develop the two key techniques in vessel suppression: segmentation and removal of pulmonary vessels while preserving the nodules. METHODS The vessel segmentation method used a framework of two cascaded convolutional neural networks (CNNs). A bi-class segmentation network was used in the first step to extract high-intensity structures, including both vessels and nonvascular tissues such as nodules. A tri-class segmentation network was employed in the second step to distinguish the vessels from nonvascular tissues (mainly nodules) and the lung parenchyma. In the vessel removal method, the voxels in the segmented vessels were replaced with randomly selected voxels from the surrounding lung parenchyma. The dataset comprised 50 three-dimensional (3D) low-dose chest CT images, with vessel and nodule segmentation labels annotated semiautomatically. The two cascaded networks were trained with CT images of 40 cases and tested with CT images of ten cases. Pulmonary vessels were removed from the ten testing scans based on the predicted segmentation results. In addition to qualitative evaluation of the segmentation and removal effects, the segmentation results were quantitatively evaluated using the Dice coefficient (DICE), Jaccard index (JAC), and volumetric similarity (VS), and the removal results were evaluated using the contrast-to-noise ratio (CNR). RESULTS In the first step of vessel segmentation, the mean DICE, JAC, and VS for high-intensity tissues, including both vessels and nodules, were 0.943, 0.893, and 0.991, respectively. In the second step, all the nodules were separated from the vessels, and the mean DICE, JAC, and VS for the vessels were 0.941, 0.890, and 0.991, respectively. After vessel removal, the mean CNR for nodules improved from 4.23 (6.26 dB) to 6.95 (8.42 dB). CONCLUSIONS Quantitative and qualitative evaluations demonstrated that the proposed method achieved high accuracy for pulmonary vessel segmentation and a good pulmonary vessel suppression effect.
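A rough sketch of the removal and evaluation steps, assuming boolean masks and a global (rather than the paper's local-neighborhood) sampling of parenchyma intensities; the CNR definition used here is one common variant and an assumption:

```python
import numpy as np

def suppress_vessels(ct, vessel_mask, lung_mask, rng=None):
    """Replace segmented vessel voxels with intensities drawn at random from
    the remaining lung parenchyma (simplified: sampled globally within the
    lung rather than from the local neighborhood)."""
    rng = np.random.default_rng() if rng is None else rng
    out = ct.copy()
    parenchyma = ct[lung_mask & ~vessel_mask]
    out[vessel_mask] = rng.choice(parenchyma, size=int(vessel_mask.sum()))
    return out

def cnr(ct, nodule_mask, background_mask):
    """Contrast-to-noise ratio of a nodule against the lung background:
    |mean difference| / background standard deviation."""
    return abs(ct[nodule_mask].mean() - ct[background_mask].mean()) / \
        ct[background_mask].std()
```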
Affiliation(s)
- Xiaomeng Gu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Jiyong Wang
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Qiang Li
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China; Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
258. Binary Filter for Fast Vessel Pattern Extraction. Neural Process Lett 2019. [DOI: 10.1007/s11063-018-9866-9]
259. Lee D, Kim H, Choi B, Kim HJ. Development of a deep neural network for generating synthetic dual-energy chest x-ray images with single x-ray exposure. Phys Med Biol 2019; 64:115017. [PMID: 31026841] [DOI: 10.1088/1361-6560/ab1cee]
Abstract
Dual-energy chest radiography (DECR) is a medical imaging technology that can improve diagnostic accuracy by decomposing single-energy chest radiography (SECR) images into separate bone-only and soft tissue-only images. It can, however, double the radiation exposure to the patient. To address this limitation, we developed a deep learning algorithm for synthesizing DECR from an SECR. To predict high-resolution images, we developed a novel deep learning architecture by modifying a conventional U-Net to take advantage of the high-frequency-dominant information that propagates from the encoding part to the decoding part. In addition, we used the anticorrelated relationship (ACR) of DECR to improve the quality of the predicted images. For training data, 300 pairs of SECR and their corresponding DECR images were used. To test the trained model, 50 DECR images from Yonsei University Severance Hospital and 662 publicly accessible SECRs were used. To evaluate the performance of the proposed method, we compared DECR and predicted images using the structural similarity index (SSIM). In addition, we quantitatively evaluated image quality by calculating the modulation transfer function and coefficient of variation. The proposed model selectively predicted the bone- and soft tissue-only CR images from an SECR image. The strategy of improving spatial resolution via the ACR was effective: quantitative evaluation showed that the proposed method with ACR achieved relatively high SSIM (over 0.85), and its predicted images achieved better image-quality measures than those of the U-Net. In conclusion, the proposed method can obtain high-quality bone- and soft tissue-only CR images without additional hardware for double x-ray exposures in clinical practice.
Affiliation(s)
- Donghoon Lee
- Department of Radiation Convergence Engineering, Research Institute of Health Science, Yonsei University, 1 Yonseidae-gil, Wonju, Gangwon, Republic of Korea
260. Liu H, Wang L, Nan Y, Jin F, Wang Q, Pu J. SDFN: Segmentation-based deep fusion network for thoracic disease classification in chest X-ray images. Comput Med Imaging Graph 2019; 75:66-73. [PMID: 31174100] [DOI: 10.1016/j.compmedimag.2019.05.005]
Abstract
This study aims to automatically diagnose thoracic diseases depicted on chest x-ray (CXR) images using deep convolutional neural networks. Existing methods generally use the entire CXR images for training, but this strategy may suffer from two drawbacks. First, potential misalignment or the presence of irrelevant objects in the entire CXR images may introduce unnecessary noise and thus limit network performance. Second, the relatively low image resolution caused by resizing, a common pre-processing step when training neural networks, may lead to the loss of image details, making it difficult to detect pathologies with small lesion regions. To address these issues, we present a novel method termed the segmentation-based deep fusion network (SDFN), which leverages domain knowledge and the higher-resolution information of local lung regions. Specifically, the local lung regions are identified and cropped by the Lung Region Generator (LRG). Two CNN-based classification models are then used as feature extractors to obtain discriminative features of the entire CXR images and of the cropped lung region images. Lastly, the obtained features are fused by the feature fusion module for disease classification. Evaluated on the NIH benchmark split of the Chest X-ray 14 dataset, the developed method achieved more accurate disease classification than the available approaches according to receiver operating characteristic (ROC) analyses. The SDFN was also found to localize lesion regions more precisely than the traditional method.
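The fusion idea can be sketched as a two-stream classifier; the DenseNet-121 backbones and layer sizes below are assumptions standing in for the paper's exact extractors:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class FusionClassifier(nn.Module):
    """Two-stream sketch: one backbone sees the whole CXR, the other the
    cropped lung region; pooled features are concatenated and classified."""
    def __init__(self, n_classes=14):
        super().__init__()
        self.global_net = densenet121(weights=None).features
        self.local_net = densenet121(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(2 * 1024, n_classes)   # densenet121 -> 1024 ch each

    def forward(self, whole_image, lung_crop):
        g = self.pool(self.global_net(whole_image)).flatten(1)
        l = self.pool(self.local_net(lung_crop)).flatten(1)
        return self.fc(torch.cat([g, l], dim=1))   # logits; pair with BCEWithLogitsLoss
```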
Affiliation(s)
- Han Liu
- Department of Bioengineering and Radiology, University of Pittsburgh, PA, 15213
- Lei Wang
- Department of Bioengineering and Radiology, University of Pittsburgh, PA, 15213
- Yandong Nan
- Department of Respiratory and Critical Care Medicine, Tangdu Hospital, Xi'an, 710038, China
- Faguang Jin
- Department of Respiratory and Critical Care Medicine, Tangdu Hospital, Xi'an, 710038, China
- Qi Wang
- Department of Radiology, The Fourth Hospital of Hebei Medical University, Hebei, 050020, China
- Jiantao Pu
- Department of Bioengineering and Radiology, University of Pittsburgh, PA, 15213
261. Kassim YM, Glinskii OV, Glinsky VV, Huxley VH, Palaniappan K. Patch-Based Semantic Segmentation for Detecting Arterioles and Venules in Epifluorescence Imagery. Proceedings of the IEEE Applied Imagery Pattern Recognition Workshop (AIPR) 2018. [PMID: 32123642] [DOI: 10.1109/aipr.2018.8707387]
Abstract
Segmentation and quantification of microvasculature structures are the main steps toward studying microvasculature remodeling. The proposed patch-based semantic architecture enables accurate segmentation of challenging epifluorescence microscopy images. Our fast, pixel-based semantic network is trained on random patches from different epifluorescence images to discriminate vessel from non-vessel pixels. The proposed semantic vessel network (SVNet) relies on understanding the morphological structure of thin vessels in the patches, rather than taking the whole image as input, to speed up training and to preserve the clarity of thin structures. Experimental results on our epifluorescence microscopy images of the dura mater of ovariectomized (OVX, ovary-removed) mice show promising results for both arterioles and venules. We compared our results with different segmentation methods such as local and global thresholding, matched-filter approaches, and related state-of-the-art deep learning networks. Our overall accuracy (> 98%) outperforms all of these methods, including our previous work (VNet) [1].
Affiliation(s)
- Yasmin M Kassim
- Computational Imaging and VisAnalysis (CIVA) Lab, Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, Columbia, MO 65211 USA
- Olga V Glinskii
- Research Service, Harry S. Truman Memorial Veterans Hospital, Columbia, MO 65201 USA; Department of Medical Pharmacology and Physiology, University of Missouri-Columbia, Columbia, MO 65211 USA
- Vladislav V Glinsky
- Research Service, Harry S. Truman Memorial Veterans Hospital, Columbia, MO 65201 USA; Department of Pathology and Anatomical Sciences, University of Missouri-Columbia, Columbia, MO 65211 USA
- Virginia H Huxley
- Department of Medical Pharmacology and Physiology, University of Missouri-Columbia, Columbia, MO 65211 USA; National Center for Gender Physiology, University of Missouri-Columbia, MO 65211 USA
- Kannappan Palaniappan
- Computational Imaging and VisAnalysis (CIVA) Lab, Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, Columbia, MO 65211 USA
262. Jiang H, Chen X, Shi F, Ma Y, Xiang D, Ye L, Su J, Li Z, Chen Q, Hua Y, Xu X, Zhu W, Fan Y. Improved cGAN based linear lesion segmentation in high myopia ICGA images. Biomed Opt Express 2019; 10:2355-2366. [PMID: 31149376] [PMCID: PMC6524580] [DOI: 10.1364/boe.10.002355]
Abstract
The increasing prevalence of myopia has recently attracted global attention. Linear lesions, including lacquer cracks and myopic stretch lines, are the main signs in highly myopic retinas and can be revealed by indocyanine green angiography (ICGA). Automatic linear lesion segmentation in ICGA images can help doctors diagnose and quantitatively analyze high myopia. To achieve accurate segmentation of linear lesions, an improved conditional generative adversarial network (cGAN)-based method is proposed. A new partially densely connected network is adopted as the generator of the cGAN to encourage feature reuse and reduce computation time. Dice loss and weighted binary cross-entropy loss are added to address the data imbalance problem. Experiments on our dataset indicate that the proposed network achieves better performance than competing networks.
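A minimal sketch of such a combined loss in PyTorch, assuming binary lesion masks; the `pos_weight` value and smoothing constant are illustrative hyperparameters to be tuned:

```python
import torch
import torch.nn.functional as F

def dice_wbce_loss(logits, target, pos_weight=10.0, smooth=1.0):
    """Combined loss for class-imbalanced lesion masks: weighted BCE keeps
    sparse foreground pixels from being ignored, while the Dice term
    optimizes overlap directly. `target` is a float 0/1 mask."""
    w = torch.as_tensor(pos_weight, device=logits.device)
    bce = F.binary_cross_entropy_with_logits(logits, target, pos_weight=w)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + smooth) / (prob.sum() + target.sum() + smooth)
    return bce + dice
```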
Affiliation(s)
- Hongjiu Jiang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- contributed equally
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou 215123, China
- contributed equally
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Collaborative Innovation Center of IoT Industrialization and Intelligent Production, Minjiang University, Fuzhou 350108, China
- Yuhui Ma
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Lei Ye
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Jinzhu Su
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Zuoyong Li
- Collaborative Innovation Center of IoT Industrialization and Intelligent Production, Minjiang University, Fuzhou 350108, China
- Qiuying Chen
- Shanghai General Hospital, Shanghai 200080, China
- Yihong Hua
- Shanghai General Hospital, Shanghai 200080, China
- Xun Xu
- Shanghai General Hospital, Shanghai 200080, China
- Weifang Zhu
- School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Collaborative Innovation Center of IoT Industrialization and Intelligent Production, Minjiang University, Fuzhou 350108, China
- Ying Fan
- Shanghai General Hospital, Shanghai 200080, China
263. Finger-Vein Verification Based on LSTM Recurrent Neural Networks. Applied Sciences (Basel) 2019. [DOI: 10.3390/app9081687]
Abstract
Finger-vein biometrics has been extensively investigated for personal verification. One challenge is that finger-vein acquisition is affected by many factors, which results in many ambiguous regions in the finger-vein image where the separability between vein and background is poor. Despite recent advances in finger-vein pattern segmentation, current solutions still lack the robustness to extract finger-vein features from raw images because they do not account for the complex spatial dependencies of vein patterns. This paper proposes a deep learning model that extracts vein features by combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models. First, we automatically assign labels based on a combination of known state-of-the-art handcrafted finger-vein image segmentation techniques, and generate sequences for each labeled pixel along different directions. Second, several Stacked Convolutional Neural Network and Long Short-Term Memory (SCNN-LSTM) models are independently trained on the resulting sequences. The outputs of the various SCNN-LSTMs form a complementary and over-complete representation and are jointly fed into a Probabilistic Support Vector Machine (P-SVM) to predict the probability that each pixel is foreground (i.e., a vein pixel) given several sequences centered on it. Third, we propose a supervised encoding scheme to extract the binary vein texture, with a threshold computed automatically by maximizing the separation between the inter-class and intra-class distances. In our approach, the CNN learns robust features for vein texture representation and the LSTM captures the complex spatial dependencies of vein patterns, so the pixels in any region of a test image can be classified effectively. In addition, supervised information is employed to encode the vein patterns, so the resulting encoded images contain more discriminative features. Experimental results on a public finger-vein database show that the proposed approach significantly improves finger-vein verification accuracy.
264. Hu X, Yi W, Jiang L, Wu S, Zhang Y, Du J, Ma T, Wang T, Wu X. Classification of Metaphase Chromosomes Using Deep Convolutional Neural Network. J Comput Biol 2019; 26:473-484. [PMID: 30977675] [DOI: 10.1089/cmb.2018.0212]
Abstract
Karyotype analysis has important clinical significance in the diagnosis, treatment, and prognosis of diseases such as birth defects and hematological tumors. Identifying chromosomes and their structural variations from G-banded metaphase images is an important process in karyotyping, and also the most difficult one. Automatic chromosome classification has become urgent in recent years as more and more patient samples are subjected to medical tests such as bone marrow biopsy. With the development of artificial intelligence, convolutional neural networks (CNNs) have shown good performance in image recognition. In this study, a CNN with 6 convolutional layers, 3 pooling layers, 4 dropout layers, and 2 fully connected layers was trained on a labeled dataset to classify chromosomes into 24 classes through a softmax activation mapping. The classifier achieved an accuracy of 93.79% for chromosome identification. The result demonstrates that CNNs have potential application value in chromosome classification and will contribute to the construction of an automatic karyotyping platform.
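A sketch matching the stated layer counts (6 convolutional, 3 pooling, 4 dropout, 2 fully connected, 24-way softmax); the filter widths, dropout rates, and 64×64 grayscale input size are assumptions:

```python
import torch.nn as nn

def make_chromosome_cnn(n_classes=24, in_size=64):
    """6 convs, 3 max-pools, 4 dropouts, 2 fully connected layers; the final
    softmax is applied implicitly via nn.CrossEntropyLoss during training."""
    def block(cin, cout):   # two convs, then pool + dropout
        return [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2), nn.Dropout(0.25)]
    feat = 128 * (in_size // 8) ** 2
    return nn.Sequential(
        *block(1, 32), *block(32, 64), *block(64, 128),
        nn.Flatten(),
        nn.Linear(feat, 256), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(256, n_classes),
    )
```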
Affiliation(s)
- Xi Hu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Sciences and Technology, Xi'an Jiaotong University, Xi'an, China
- Wenling Yi
- The Brain Cognition and Brain Disease Institute (BCBDI), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ling Jiang
- School of Life Sciences and Technology, Xidian University, Xi'an, China
- Sijia Wu
- School of Life Sciences and Technology, Xidian University, Xi'an, China
- Yan Zhang
- Lu Daopei Institute of Hematology, Hebei Yanda Lu Daopei Hospital, Langfang, China
- Jianqiang Du
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Sciences and Technology, Xi'an Jiaotong University, Xi'an, China
- Tianyou Ma
- Institute of Endemic Diseases, Environment Related Gene Key Laboratory of Ministry of Education, Xi'an Jiaotong University, Xi'an, China
- Tong Wang
- Lu Daopei Institute of Hematology, Hebei Yanda Lu Daopei Hospital, Langfang, China
- Xiaoming Wu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Sciences and Technology, Xi'an Jiaotong University, Xi'an, China
265. Zhao G, Liu G, Fang L, Tu B, Ghamisi P. Multiple convolutional layers fusion framework for hyperspectral image classification. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.02.019]
266. Guo S, Wang K, Kang H, Zhang Y, Gao Y, Li T. BTS-DSN: Deeply supervised neural network with short connections for retinal vessel segmentation. Int J Med Inform 2019; 126:105-113. [PMID: 31029251] [DOI: 10.1016/j.ijmedinf.2019.03.015]
Abstract
BACKGROUND AND OBJECTIVE The condition of the vessels of the human eye is an important factor in the diagnosis of ophthalmological diseases. Vessel segmentation in fundus images is a challenging task due to the complex vessel structure, the presence of similar structures such as microaneurysms and hemorrhages, micro-vessels that are only one to several pixels wide, and the requirement for fine-grained results. METHODS We present a multi-scale deeply supervised network with short connections (BTS-DSN) for vessel segmentation. We use short connections to transfer semantic information between side-output layers: bottom-top short connections pass low-level semantic information upward to refine the results of high-level side-outputs, and top-bottom short connections pass structural information downward to reduce noise in low-level side-outputs. In addition, we employ cross-training to show that our model is suitable for real-world fundus images. RESULTS The proposed BTS-DSN was verified on the DRIVE, STARE and CHASE_DB1 datasets, and showed competitive performance against other state-of-the-art methods. Specifically, with patch-level input, the network achieved 0.7891/0.8212 sensitivity, 0.9804/0.9843 specificity, 0.9806/0.9859 AUC, and 0.8249/0.8421 F1-score on DRIVE and STARE, respectively. Moreover, our model behaves better than other methods in cross-training experiments. CONCLUSIONS BTS-DSN achieves competitive vessel segmentation performance on three public datasets. The source code of our method is available at: https://github.com/guomugong/BTS-DSN.
Affiliation(s)
- Song Guo
- Nankai University, Tianjin, China
- Kai Wang
- Nankai University, Tianjin, China; KLMDASR, Tianjin, China
- Hong Kang
- Nankai University, Tianjin, China; Beijing Shanggong Medical Technology Co. Ltd, China
- Yujun Zhang
- Institute of Computing Technology, Chinese Academy of Sciences, China
- Tao Li
- Nankai University, Tianjin, China
267. Mehrtash A, Ghafoorian M, Pernelle G, Ziaei A, Heslinga FG, Tuncali K, Fedorov A, Kikinis R, Tempany CM, Wells WM, Abolmaesumi P, Kapur T. Automatic Needle Segmentation and Localization in MRI With 3-D Convolutional Neural Networks: Application to MRI-Targeted Prostate Biopsy. IEEE Transactions on Medical Imaging 2019; 38:1026-1036. [PMID: 30334789] [PMCID: PMC6450731] [DOI: 10.1109/tmi.2018.2876796]
Abstract
Image guidance improves tissue sampling during biopsy by allowing the physician to visualize the tip and trajectory of the biopsy needle relative to the target in MRI, CT, ultrasound, or other relevant imagery. This paper reports a system for fast, automatic needle tip and trajectory localization and visualization in MRI, developed and tested in the context of an active clinical research program in prostate biopsy. To the best of our knowledge, this is the first reported system for this clinical application, and also the first across biomedical applications to leverage deep neural networks for segmentation and localization of needles in MRI. Needle tip and trajectory were annotated on 583 T2-weighted intra-procedural MRI scans acquired after needle insertion for 71 patients who underwent transperineal MRI-targeted biopsy procedures at our institution. The images were divided into two independent training-validation and test sets at the patient level. A deep 3-D fully convolutional neural network model was developed, trained, and deployed on these samples. The accuracy of the proposed method, as tested on previously unseen data, was 2.80 mm on average for needle tip detection and 0.98° for needle trajectory angle. An observer study was designed in which independent annotations by a second observer, blinded to those of the first, were compared with the output of the proposed method. The resultant error was comparable to the measured inter-observer concordance, reinforcing the clinical acceptability of the proposed method. The proposed system has the potential for deployment in clinical routine.
Affiliation(s)
- Alireza Mehrtash
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Alireza Ziaei
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Friso G. Heslinga
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Kemal Tuncali
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Andriy Fedorov
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Ron Kikinis
- Department of Computer Science, University of Bremen, Bremen, Germany
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Clare M. Tempany
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- William M. Wells
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
- Purang Abolmaesumi
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
- Tina Kapur
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, 02115, USA
268. Wang W, Wang W, Hu Z. Segmenting retinal vessels with revised top-bottom-hat transformation and flattening of minimum circumscribed ellipse. Med Biol Eng Comput 2019; 57:1481-1496. [DOI: 10.1007/s11517-019-01967-2]
269. Girard F, Kavalec C, Cheriet F. Joint segmentation and classification of retinal arteries/veins from fundus images. Artif Intell Med 2019; 94:96-109. [DOI: 10.1016/j.artmed.2019.02.004]
270. Remedios S, Roy S, Blaber J, Bermudez C, Nath V, Patel MB, Butman JA, Landman BA, Pham DL. Distributed deep learning for robust multi-site segmentation of CT imaging after traumatic brain injury. Proc SPIE Int Soc Opt Eng 2019; 10949:109490A. [PMID: 31602089] [PMCID: PMC6786776] [DOI: 10.1117/12.2511997]
Abstract
Machine learning models are becoming commonplace in the domain of medical imaging, and with these methods comes an ever-increasing need for more data. However, to preserve patient anonymity it is frequently impractical or prohibited to transfer protected health information (PHI) between institutions. Additionally, due to the nature of some studies, there may not be a large public dataset available on which to train models. To address this conundrum, we analyze the efficacy of transferring the model itself in lieu of data between different sites. By doing so we accomplish two goals: 1) the model gains access to a larger training dataset than it could normally obtain, and 2) the model generalizes better, having trained on data from separate locations. In this paper, we implement multi-site learning with disparate datasets from the National Institutes of Health (NIH) and Vanderbilt University Medical Center (VUMC) without compromising PHI. Three neural networks are trained to convergence on a computed tomography (CT) brain hematoma segmentation task: one only with NIH data, one only with VUMC data, and one multi-site model alternating between NIH and VUMC data. The resulting lesion masks from the multi-site model attain an average Dice similarity coefficient of 0.64, and the automatically segmented hematoma volumes correlate with those obtained manually with a Pearson correlation coefficient of 0.87, corresponding to 8% and 5% improvements, respectively, over the single-site model counterparts.
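The alternating scheme can be sketched as follows, here simulated in a single process with two data loaders standing in for the two sites; in the actual distributed setting only the model weights would move between institutions:

```python
import itertools
import torch

def train_multisite(model, loader_a, loader_b, optimizer, loss_fn, epochs=10):
    """Alternate optimization steps between batches from two sites (e.g. NIH
    and VUMC); the shorter loader is cycled so both contribute each epoch."""
    model.train()
    for _ in range(epochs):
        for (xa, ya), (xb, yb) in zip(loader_a, itertools.cycle(loader_b)):
            for x, y in ((xa, ya), (xb, yb)):   # one step per site, alternating
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                optimizer.step()
    return model
```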
Affiliation(s)
- Samuel Remedios
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Department of Computer Science, Middle Tennessee State University
- Department of Electrical Engineering, Vanderbilt University
- Snehashis Roy
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Justin Blaber
- Department of Electrical Engineering, Vanderbilt University
- Vishwesh Nath
- Department of Computer Science, Vanderbilt University
- Mayur B Patel
- Departments of Surgery, Neurosurgery, Hearing & Speech Sciences; Center for Health Services Research, Vanderbilt Brain Institute; Critical Illness, Brain Dysfunction, and Survivorship Center, Vanderbilt University Medical Center; VA Tennessee Valley Healthcare System, Department of Veterans Affairs Medical Center
- John A Butman
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Bennett A Landman
- Department of Electrical Engineering, Vanderbilt University
- Department of Biomedical Engineering, Vanderbilt University
- Department of Computer Science, Vanderbilt University
- Dzung L Pham
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
271. Qin B, Jin M, Hao D, Lv Y, Liu Q, Zhu Y, Ding S, Zhao J, Fei B. Accurate vessel extraction via tensor completion of background layer in X-ray coronary angiograms. Pattern Recognition 2019; 87:38-54. [PMID: 31447490] [PMCID: PMC6708416] [DOI: 10.1016/j.patcog.2018.09.015]
Abstract
This paper proposes an effective method for accurately recovering vessel structures and intensity information from X-ray coronary angiography (XCA) images of moving organs or tissues. Specifically, a global logarithm transformation of the XCA images is applied to fit the X-ray attenuation sum model of the vessel/background layers into a low-rank plus sparse decomposition model for vessel/background separation. The contrast-filled vessel structures are extracted by distinguishing the vessels from the low-rank backgrounds using robust principal component analysis and by constructing a vessel mask via Radon-like feature filtering plus spatially adaptive thresholding. Subsequently, the low-rankness and inter-frame spatio-temporal connectivity of the complex, noisy backgrounds are used to recover the vessel-masked background regions via tensor completion from all other background regions, with the twist tensor nuclear norm minimized to complete the background layers. Finally, the method accurately extracts vessel intensities from the noisy XCA data by subtracting the completed background layers from the overall XCA images. We evaluated the vessel visibility of the resulting images on real X-ray angiography data and the accuracy of vessel intensity recovery on synthetic data. Experimental results show the superiority of the proposed method over state-of-the-art methods.
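The low-rank plus sparse separation step can be illustrated with a generic RPCA solver via the inexact augmented Lagrangian method (a standard formulation, not the paper's full tensor-completion pipeline); each column of `D` would hold one vectorized, log-transformed frame:

```python
import numpy as np

def shrink(M, tau):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: shrink the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D, max_iter=200, tol=1e-7):
    """Decompose D into low-rank L (background) + sparse S (vessels)."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))              # standard sparsity weight
    mu = 0.25 * m * n / np.abs(D).sum()         # common step-size heuristic
    Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(max_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)       # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)    # sparse update
        R = D - L - S
        Y += mu * R                             # dual ascent
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(D, 'fro'):
            break
    return L, S
```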
Affiliation(s)
- Binjie Qin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Mingxin Jin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Dongdong Hao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yisong Lv
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
- Yueqi Zhu
- Department of Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Jiao Tong University, 600 Yi Shan Road, Shanghai 200233, China
- Song Ding
- Department of Cardiology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Baowei Fei
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, University of Texas at Dallas, Richardson, TX 75080, USA
272. Masood S, Fang R, Li P, Li H, Sheng B, Mathavan A, Wang X, Yang P, Wu Q, Qin J, Jia W. Automatic Choroid Layer Segmentation from Optical Coherence Tomography Images Using Deep Learning. Sci Rep 2019; 9:3058. [PMID: 30816296] [PMCID: PMC6395677] [DOI: 10.1038/s41598-019-39795-x]
Abstract
The choroid layer is a vascular layer of the human retina whose main function is to provide oxygen and support to the retina. Various studies have shown that choroid layer thickness is correlated with several ophthalmic diseases; for example, diabetic macular edema (DME), a leading cause of vision loss in patients with diabetes. Despite contemporary advances, automatic segmentation of the choroid layer remains challenging due to low contrast, inhomogeneous intensity, inconsistent texture, and ambiguous boundaries between the choroid and sclera in Optical Coherence Tomography (OCT) images. The majority of current methods segment the region of interest manually or semi-automatically. While fully automatic methods exist for choroid layer segmentation, more effective and accurate automatic methods are required before they can be employed in the clinical sector. This paper proposes and implements an automatic method for choroid layer segmentation in OCT images using deep learning and a series of morphological operations, with the aim of segmenting Bruch's Membrane (BM) and the choroid layer to calculate a thickness map. BM was segmented using a series of morphological operations, whereas the choroid layer was segmented using a deep learning approach, as more image statistics were required for accurate segmentation. Several evaluation metrics were used to test and compare the proposed method against existing methodologies. Experimental results show that the proposed method greatly reduces the error rate compared with other state-of-the-art methods.
Affiliation(s)
- Saleha Masood
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Ping Li
- Faculty of Information Technology, Macau University of Science and Technology, Macau, 999078, China
- Huating Li
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, 200233, China
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Akash Mathavan
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Xiangning Wang
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, 200233, China
- Po Yang
- Department of Computer Science, Liverpool John Moores University, Liverpool, L3 3AF, UK
- Qiang Wu
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, 200233, China
- Jing Qin
- Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, 999077, China
- Weiping Jia
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, 200233, China
273. Leopold HA, Orchard J, Zelek JS, Lakshminarayanan V. PixelBNN: Augmenting the PixelCNN with Batch Normalization and the Presentation of a Fast Architecture for Retinal Vessel Segmentation. J Imaging 2019; 5:26. [PMID: 34460474] [PMCID: PMC8320904] [DOI: 10.3390/jimaging5020026]
Abstract
Analysis of retinal fundus images is essential for eye-care physicians in the diagnosis, care, and treatment of patients. Accurate fundus and/or retinal vessel maps give rise to longitudinal studies able to utilize multimedia image registration and disease/condition status measurements, as well as applications in surgery preparation and biometrics. The segmentation of retinal morphology has numerous applications in assessing ophthalmologic and cardiovascular disease pathologies. Computer-aided segmentation of the vasculature has proven to be a challenge, mainly due to inconsistencies such as noise and variations in hue and brightness that can greatly reduce the quality of fundus images. The goal of this work is to collate different key performance indicators (KPIs) and state-of-the-art methods applied to this task, frame the trade-off between computational efficiency and performance under varying degrees of information loss using common datasets, and introduce PixelBNN, a highly efficient deep method for automating the segmentation of fundus morphologies. The model was trained, tested, and cross-tested on the DRIVE, STARE and CHASE_DB1 retinal vessel segmentation datasets. Performance was evaluated using G-mean, Matthews Correlation Coefficient and F1-score, with the main success measure being computation speed. The network was 8.5× faster than the current state-of-the-art at test time and performed comparatively well, considering a 5× to 19× reduction in information from resizing images during preprocessing.
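These KPIs can be computed for flattened binary vessel maps roughly as follows, assuming G-mean is the geometric mean of sensitivity and specificity:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, matthews_corrcoef

def vessel_metrics(y_true, y_pred):
    """KPIs for binary vessel maps, given flattened 0/1 label arrays."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return {"g_mean": np.sqrt(sens * spec),
            "mcc": matthews_corrcoef(y_true, y_pred),
            "f1": f1_score(y_true, y_pred)}
```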
Affiliation(s)
- Henry A. Leopold
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Jeff Orchard
- David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- John S. Zelek
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
274. Akbar S, Sharif M, Akram MU, Saba T, Mahmood T, Kolivand M. Automated techniques for blood vessels segmentation through fundus retinal images: A review. Microsc Res Tech 2019; 82:153-170. [DOI: 10.1002/jemt.23172]
Affiliation(s)
- Shahzad Akbar
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah, Pakistan
- Muhammad Usman Akram
- Department of Computer Engineering, College of E&ME, National University of Sciences and Technology, Islamabad, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Toqeer Mahmood
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
275. Vessel-Net: Retinal Vessel Segmentation Under Multi-path Supervision. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32239-7_30]
276. Deep Convolutional Neural Network-Based Diabetic Retinopathy Detection in Digital Fundus Images. Advances in Intelligent Systems and Computing 2019. [DOI: 10.1007/978-981-13-3600-3_19]
277. Alom MZ, Yakopcic C, Hasan M, Taha TM, Asari VK. Recurrent residual U-Net for medical image segmentation. J Med Imaging (Bellingham) 2019; 6:014006. [PMID: 30944843] [PMCID: PMC6435980] [DOI: 10.1117/1.jmi.6.1.014006]
Abstract
Deep learning (DL)-based semantic segmentation methods have provided state-of-the-art performance in the past few years. More specifically, these techniques have been successfully applied in medical image classification, segmentation, and detection tasks. One DL technique, U-Net, has become one of the most popular for these applications. We propose a recurrent U-Net model and a recurrent residual U-Net model, named RU-Net and R2U-Net, respectively. The proposed models utilize the power of U-Net, residual networks, and recurrent convolutional neural networks. There are several advantages to using these architectures for segmentation tasks. First, a residual unit helps when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, it allows us to design better U-Net architectures with the same number of network parameters and better performance for medical image segmentation. The proposed models are tested on three benchmark tasks: blood vessel segmentation in retinal images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including a variant of a fully convolutional network called SegNet, U-Net, and residual U-Net.
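A sketch of one recurrent residual building block of the kind described, with the unrolling depth `t` and the channel-matching 1×1 convolution as assumptions:

```python
import torch.nn as nn

class RecurrentResidualUnit(nn.Module):
    """One R2U-Net-style block: a convolution applied recurrently (t unrolled
    steps with shared weights) so features accumulate across iterations,
    wrapped in a residual connection."""
    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.match = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # align channels
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.t = t

    def forward(self, x):
        x = self.match(x)
        h = self.conv(x)
        for _ in range(self.t):        # recurrent refinement, shared weights
            h = self.conv(x + h)
        return x + h                   # residual connection
```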
Affiliation(s)
- Md Zahangir Alom
- University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Chris Yakopcic
- University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Tarek M. Taha
- University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Vijayan K. Asari
- University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
278. Deep Vesselness Measure from Scale-Space Analysis of Hessian Matrix Eigenvalues. Pattern Recognition and Image Analysis 2019. [DOI: 10.1007/978-3-030-31321-0_41]
279
CS-Net: Channel and Spatial Attention Network for Curvilinear Structure Segmentation. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-32239-7_80] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
280
Fan Z, Lu J, Wei C, Huang H, Cai X, Chen X. A Hierarchical Image Matting Model for Blood Vessel Segmentation in Fundus Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 28:2367-2377. [PMID: 30571623 DOI: 10.1109/tip.2018.2885495] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
In this paper, a hierarchical image matting model is proposed to extract blood vessels from fundus images. More specifically, a hierarchical strategy is integrated into the image matting model for blood vessel segmentation. Matting models normally require a user-specified trimap, which separates the input image into three regions: foreground, background, and unknown. However, creating a user-specified trimap is laborious for vessel segmentation tasks. In this paper, we propose a method that first generates the trimap automatically by exploiting region features of blood vessels, and then applies a hierarchical image matting model to extract the vessel pixels from the unknown regions. The proposed method has low computation time and outperforms many state-of-the-art supervised and unsupervised methods. It achieves vessel segmentation accuracies of 96.0%, 95.7%, and 95.1% in average times of 10.72 s, 15.74 s, and 50.71 s on images from the three publicly available fundus image datasets DRIVE, STARE, and CHASE_DB1, respectively.
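As a rough illustration of the automatic trimap idea, the sketch below thresholds a vesselness response twice: strictly for sure-foreground and loosely for sure-background, leaving the rest unknown. The Frangi filter and the percentile thresholds are stand-ins of ours; the paper's actual region features are more elaborate.

import numpy as np
from skimage import filters, morphology

def auto_trimap(green_channel):
    # Vesselness response as a stand-in for the paper's region features.
    vesselness = filters.frangi(green_channel)
    fg = vesselness > np.percentile(vesselness, 99)  # confident vessel pixels
    bg = vesselness < np.percentile(vesselness, 80)  # confident background
    fg = morphology.remove_small_objects(fg, min_size=20)
    trimap = np.full(green_channel.shape, 128, dtype=np.uint8)  # unknown
    trimap[bg] = 0    # background
    trimap[fg] = 255  # foreground
    return trimap

The matting model then only has to resolve the unknown band, which is where the hierarchical strategy operates.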
281
Guo Y, Budak Ü, Şengür A. A novel retinal vessel detection approach based on multiple deep convolution neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 167:43-48. [PMID: 30501859 DOI: 10.1016/j.cmpb.2018.10.021] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2018] [Revised: 10/12/2018] [Accepted: 10/29/2018] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided detection (CAD) offers an efficient way to assist doctors in interpreting fundus images. In a CAD system, retinal vessel (RV) detection is a crucial step in identifying retinal disease regions. However, RV detection remains challenging due to variations in vessel morphology on noisy, low-contrast fundus images. METHODS In this paper, we formulate the detection task as a classification problem and solve it using a multiple-classifier framework based on deep convolutional neural networks. The multiple deep convolutional neural network (MDCNN) is constructed and trained on fundus images with a limited image quantity, using an incremental learning strategy to improve the networks' performance. The final classification results are obtained by voting on the outputs of the MDCNN. RESULTS The MDCNN significantly outperforms the state of the art for automatic retinal vessel segmentation on the DRIVE dataset, with 95.97% and 96.13% accuracy and 0.9726 and 0.9737 AUC (area under the receiver operating characteristic curve) on the training and testing sets, respectively. Another public dataset, STARE, was also used to evaluate the proposed network; there, the MDCNN achieves 95.39% accuracy and a 0.9539 AUC. We further compare our results with several state-of-the-art methods based on AUC values; the comparison shows that our proposal yields the third-best AUC value. CONCLUSIONS Our method performs better than the compared state-of-the-art methods. In addition, our proposal has no preprocessing stage: the input color fundus images are fed into the CNN directly.
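The voting step reduces to a per-patch majority over the member networks. A minimal sketch, assuming each trained member exposes a predict() method returning 0/1 vessel labels (an interface of our choosing, not the paper's):

import numpy as np

def ensemble_vote(networks, patches):
    # Each CNN casts a vessel / non-vessel vote per patch; the majority wins.
    votes = np.stack([net.predict(patches) for net in networks])  # (n_nets, n_patches)
    return (votes.sum(axis=0) > len(networks) / 2).astype(np.uint8)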
Affiliation(s)
- Yanhui Guo
- Department of Computer Science, University of Illinois, Springfield, IL, USA.
- Ümit Budak
- Department of Electrical-Electronics Engineering, Bitlis Eren University, Bitlis, Turkey
- Abdulkadir Şengür
- Electrical and Electronics Engineering Department, Firat University, Elazig, Turkey
282
A Coarse-to-Fine Fully Convolutional Neural Network for Fundus Vessel Segmentation. Symmetry (Basel) 2018. [DOI: 10.3390/sym10110607] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Fundus vessel analysis is a significant tool for evaluating the development of retinal diseases such as diabetic retinopathy and hypertension in clinical practice. Hence, automatic fundus vessel segmentation is essential and valuable for medical diagnosis in ophthalmopathy and allows identification and extraction of relevant symmetric and asymmetric patterns. Further, owing to the uniqueness of the fundus vasculature, it can also be applied in biometric identification. In this paper, we recast fundus vessel segmentation as a pixel-wise classification task and propose a novel coarse-to-fine fully convolutional neural network (CF-FCN) to extract vessels from fundus images. Our CF-FCN aims to make full use of the original data and to compensate for the coarse output of the neural network by harnessing the spatial relationships between pixels in fundus images. Together with the necessary pre-processing and post-processing operations, the efficacy and efficiency of our CF-FCN are corroborated by experiments on the DRIVE, STARE, HRF, and CHASE_DB1 datasets. It achieves a sensitivity of 0.7941, a specificity of 0.9870, an accuracy of 0.9634, and an Area Under the Receiver Operating Characteristic Curve (AUC) of 0.9787 on the DRIVE dataset, surpassing state-of-the-art approaches.
283
Das DK, Dutta PK. Efficient automated detection of mitotic cells from breast histological images using deep convolution neutral network with wavelet decomposed patches. Comput Biol Med 2018; 104:29-42. [PMID: 30439598 DOI: 10.1016/j.compbiomed.2018.11.001] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Revised: 11/01/2018] [Accepted: 11/01/2018] [Indexed: 12/27/2022]
Abstract
In medical practice, the mitotic cell count in histological images acts as a proliferative marker for cancer diagnosis. Therefore, an accurate method for detecting mitotic cells in histological images is essential for cancer screening. Manual evaluation of clinically relevant image features that might reflect mitotic cells is time-consuming and error-prone, owing to the heterogeneous physical characteristics of mitotic cells. Computer-assisted automated detection of mitotic cells could overcome these limitations of manual analysis and act as a useful tool for pathologists to make cancer diagnoses efficiently and accurately. Here, we propose a new approach for mitotic cell detection in breast histological images that uses a deep convolutional neural network (CNN) with wavelet-decomposed image patches. In this approach, raw image patches of 81 × 81 pixels are decomposed into patches of 21 × 21 pixels using the Haar wavelet and subsequently used to train a deep CNN model for automated detection of mitotic cells. The decomposition step reduces convolution time for mitotic cell detection relative to using raw image patches in conventional CNN models. The proposed deep network was tested on the MITOS (ICPR 2012) and MITOS-ATYPIA-14 breast cancer histological datasets and shown to outperform existing algorithms for mitotic cell detection. Overall, our method improves the performance and reduces the computational burden of conventional deep CNN approaches for mitotic cell detection.
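The 81 × 81 to 21 × 21 reduction is consistent with two levels of Haar decomposition (81 → 41 → 21 along each axis). A minimal check with PyWavelets, assuming the level-2 approximation band is what feeds the CNN:

import numpy as np
import pywt

patch = np.random.rand(81, 81)  # stand-in for a histology image patch
coeffs = pywt.wavedec2(patch, wavelet='haar', level=2)
print(coeffs[0].shape)  # (21, 21): approximation band after two levels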
Affiliation(s)
- Dev Kumar Das
- School of Medical Science and Technology, IIT, Kharagpur, 721302, India.
284
Heisler M, Ju MJ, Bhalla M, Schuck N, Athwal A, Navajas EV, Beg MF, Sarunic MV. Automated identification of cone photoreceptors in adaptive optics optical coherence tomography images using transfer learning. BIOMEDICAL OPTICS EXPRESS 2018; 9:5353-5367. [PMID: 30460133 PMCID: PMC6238943 DOI: 10.1364/boe.9.005353] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Revised: 10/01/2018] [Accepted: 10/02/2018] [Indexed: 05/11/2023]
Abstract
Automated measurement of the human cone mosaic requires the identification of individual cone photoreceptors. The current gold standard, manual labeling, is a tedious process and cannot be done in a clinically useful timeframe. As such, we present an automated algorithm for identifying cone photoreceptors in adaptive optics optical coherence tomography (AO-OCT) images. Our approach fine-tunes a pre-trained convolutional neural network, originally trained on AO scanning laser ophthalmoscope (AO-SLO) images, to work on previously unseen data from a different imaging modality. On average, the automated method correctly identified 94% of manually labeled cones in twenty AO-OCT images acquired from five normal subjects. Voronoi analysis confirmed the general hexagonal-packing structure of the cone mosaic as well as the general cone density variability across portions of the retina. The consistency of our measurements demonstrates the high reliability and practical utility of an automated solution to this problem.
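The general recipe, fine-tuning a network trained on one modality to another, looks roughly like the PyTorch sketch below. The ImageNet backbone, the frozen-layer split, and the learning rate are our assumptions standing in for the authors' AO-SLO-pretrained model, which is not publicly packaged.

import torch
import torch.nn as nn
from torchvision import models

# Start from a pretrained backbone (torchvision >= 0.13 API) and
# replace the classifier head for the cone / background decision.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Freeze the early layers; only the last block and the head adapt
# to the new imaging modality.
for name, param in model.named_parameters():
    if not (name.startswith("layer4") or name.startswith("fc")):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)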
Affiliation(s)
- Morgan Heisler
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Myeong Jin Ju
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Mahadev Bhalla
- University of British Columbia, Faculty of Medicine, 317 - 2194 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada
- Nathan Schuck
- University of British Columbia, Faculty of Medicine, 317 - 2194 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada
- Arman Athwal
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Eduardo V. Navajas
- University of British Columbia, Department of Ophthalmology & Vision Science, 2550 Willow Street, Vancouver, BC, V5Z 3N9, Canada
- Mirza Faisal Beg
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Marinko V. Sarunic
- Simon Fraser University, Department of Engineering Science, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
285
Panda R, Puhan NB, Rao A, Mandal B, Padhy D, Panda G. Deep convolutional neural network-based patch classification for retinal nerve fiber layer defect detection in early glaucoma. J Med Imaging (Bellingham) 2018; 5:044003. [PMID: 30840736 DOI: 10.1117/1.jmi.5.4.044003] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2018] [Accepted: 10/02/2018] [Indexed: 11/14/2022] Open
Abstract
Glaucoma is a progressive optic neuropathy characterized by peripheral visual field loss caused by the degeneration of retinal nerve fibers. The peripheral vision loss due to glaucoma is asymptomatic; if not detected and treated at an early stage, it leads to complete, irreversible blindness. The retinal nerve fiber layer defect (RNFLD) provides the earliest objective evidence of glaucoma. In this regard, we explore cost-effective red-free fundus imaging for RNFLD detection, to be practically useful for computer-assisted early glaucoma risk assessment. An RNFLD appears as a wedge-shaped arcuate structure radiating from the optic disc. The very low contrast between the RNFLD and the background makes visual detection challenging even for medical experts. In our study, we formulate a deep convolutional neural network (CNN)-based patch classification strategy for RNFLD boundary localization. The deep CNN model is trained on a large number of RNFLD and background image patches, extracts sufficiently discriminative information from the patches, and yields accurate RNFLD boundary pixel classification. The proposed approach achieves enhanced RNFLD detection performance, with a sensitivity of 0.8205 and 0.2000 false positives per image, on a newly created early-glaucoma fundus image database.
Affiliation(s)
- Rashmi Panda
- IIT Bhubaneswar, School of Electrical Sciences, Bhubaneswar, India
- Niladri B Puhan
- IIT Bhubaneswar, School of Electrical Sciences, Bhubaneswar, India
- Aparna Rao
- L. V. Prasad Eye Institute, Glaucoma Diagnostic Services, Bhubaneswar, India
- Bappaditya Mandal
- Keele University, School of Computing and Mathematics, Faculty of Natural Sciences, Staffordshire, United Kingdom
- Debananda Padhy
- L. V. Prasad Eye Institute, Glaucoma Diagnostic Services, Bhubaneswar, India
- Ganapati Panda
- IIT Bhubaneswar, School of Electrical Sciences, Bhubaneswar, India
286
Khan KB, Khaliq AA, Jalil A, Iftikhar MA, Ullah N, Aziz MW, Ullah K, Shahid M. A review of retinal blood vessels extraction techniques: challenges, taxonomy, and future trends. Pattern Anal Appl 2018. [DOI: 10.1007/s10044-018-0754-8] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
287
Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.05.011] [Citation(s) in RCA: 176] [Impact Index Per Article: 25.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
288
Moccia S, Foti S, Routray A, Prudente F, Perin A, Sekula RF, Mattos LS, Balzer JR, Fellows-Mayle W, De Momi E, Riviere CN. Toward Improving Safety in Neurosurgery with an Active Handheld Instrument. Ann Biomed Eng 2018; 46:1450-1464. [PMID: 30014286 PMCID: PMC6150797 DOI: 10.1007/s10439-018-2091-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2018] [Accepted: 07/04/2018] [Indexed: 10/28/2022]
Abstract
Microsurgical procedures, such as petroclival meningioma resection, require careful surgical actions to remove tumor tissue while avoiding damage to the brain and blood vessels. Such procedures are currently performed under microscope magnification. Robotic tools are emerging to filter out surgeons' unintended movements and to prevent tools from entering forbidden regions such as vascular structures. The present work investigates the use of a handheld robotic tool (Micron) to automate vessel avoidance in microsurgery. In particular, we focused on vessel segmentation, implementing a deep-learning-based segmentation strategy for microscopy images, and on its integration with a feature-based passive 3D reconstruction algorithm to obtain accurate and robust vessel positions. We then implemented a virtual-fixture-based strategy to control the handheld robotic tool and perform vessel avoidance. Clay vascular phantoms, lying on a background obtained from microscopy images recorded during petroclival meningioma surgery, were used to test the segmentation and control algorithms. When the segmentation algorithm was tested on 100 different phantom images, a median Dice similarity coefficient of 0.96 was achieved. A set of 25 Micron trials of 80 s each, every trial involving the interaction of Micron with a different vascular phantom, was recorded with a safety distance of 2 mm, comparable to the median vessel diameter. Micron's tip entered the forbidden region 24% of the time when the control algorithm was active; however, the median penetration depth was 16.9 μm, two orders of magnitude smaller than the median vessel diameter. The results suggest the system can assist surgeons in performing safe vessel avoidance during neurosurgical procedures.
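A virtual fixture of this kind reduces to a distance constraint between the instrument tip and the reconstructed vessel surface. The sketch below is a deliberately simplified, position-only version of ours; Micron's actual controller also handles actuation dynamics, which are out of scope here.

import numpy as np

def virtual_fixture_correction(tip, vessel_points, safety_mm=2.0):
    # If the tip comes within the safety distance of any reconstructed
    # vessel point, push it back out to the safety boundary.
    diffs = vessel_points - tip              # vectors tip -> vessel samples
    dists = np.linalg.norm(diffs, axis=1)
    i = dists.argmin()
    if dists[i] >= safety_mm:
        return tip                           # outside the forbidden region
    outward = -diffs[i] / (dists[i] + 1e-9)  # direction away from the vessel
    return vessel_points[i] + outward * safety_mm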
Affiliation(s)
- Sara Moccia
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Simone Foti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Arpita Routray
- Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
- Francesca Prudente
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Alessandro Perin
- Besta NeuroSim Center, IRCCS Istituto Neurologico C. Besta, Milan, Italy
- Raymond F Sekula
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, USA
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Jeffrey R Balzer
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, USA
- Wendy Fellows-Mayle
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, USA
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
289
Thamer Mitib Al Sariera, Rangarajan L. Extraction of Blood Vessels in Retina. JOURNAL OF INFORMATION TECHNOLOGY RESEARCH 2018. [DOI: 10.4018/jitr.2018100108] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This article presents a novel method to extract the retinal vascular tree automatically. The proposed method consists of four steps: smoothing the image with a low-pass spatial filter to reduce spurious noise; extracting candidate vessel borders based on a local window property; tracking, starting from a candidate pixel and following the optimum direction while monitoring the connectivity of the vessel's twin borders; and constructing the whole retinal blood vessel tree by connecting the vessel segments based on their spatial locations, widths, and directions. The algorithm was trained on 20 images from the DRIVE dataset and tested on the remaining 20 images.
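The first two steps can be illustrated compactly. In the sketch below a mean filter stands in for the low-pass smoothing, and a local intensity range stands in for the "local window property", which the abstract does not spell out; both choices are ours.

import numpy as np
from scipy import ndimage

def candidate_borders(gray, window=5, range_thresh=15.0):
    # Step 1: low-pass filtering to suppress spurious noise.
    smooth = ndimage.uniform_filter(gray.astype(float), size=3)
    # Step 2: mark border candidates where the intensity range inside
    # a small window is large, i.e. a dark vessel meets bright background.
    local_max = ndimage.maximum_filter(smooth, size=window)
    local_min = ndimage.minimum_filter(smooth, size=window)
    return (local_max - local_min) > range_thresh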
290
Yan Z, Yang X, Cheng KT. A Three-Stage Deep Learning Model for Accurate Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2018; 23:1427-1436. [PMID: 30281503 DOI: 10.1109/jbhi.2018.2872813] [Citation(s) in RCA: 127] [Impact Index Per Article: 18.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Automatic retinal vessel segmentation is a fundamental step in the diagnosis of eye-related diseases, in which both thick vessels and thin vessels are important features for symptom detection. All existing deep learning models attempt to segment both types of vessels simultaneously by using a unified pixel-wise loss that treats all vessel pixels with equal importance. Due to the highly imbalanced ratio between thick vessels and thin vessels (namely the majority of vessel pixels belong to thick vessels), the pixel-wise loss would be dominantly guided by thick vessels and relatively little influence comes from thin vessels, often leading to low segmentation accuracy for thin vessels. To address the imbalance problem, in this paper, we explore to segment thick vessels and thin vessels separately by proposing a three-stage deep learning model. The vessel segmentation task is divided into three stages, namely thick vessel segmentation, thin vessel segmentation, and vessel fusion. As better discriminative features could be learned for separate segmentation of thick vessels and thin vessels, this process minimizes the negative influence caused by their highly imbalanced ratio. The final vessel fusion stage refines the results by further identifying nonvessel pixels and improving the overall vessel thickness consistency. The experiments on public datasets DRIVE, STARE, and CHASE_DB1 clearly demonstrate that the proposed three-stage deep learning model outperforms the current state-of-the-art vessel segmentation methods.
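A simplified view of the final stage: combine the outputs of the separately trained thick- and thin-vessel networks. The pixel-wise maximum below is a stand-in of ours; the paper's fusion stage is itself a learned network that also prunes non-vessel pixels.

import numpy as np

def fuse_vessel_masks(p_thick, p_thin, threshold=0.5):
    # Merge the two probability maps and binarize.
    fused = np.maximum(p_thick, p_thin)
    return (fused > threshold).astype(np.uint8)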
291
Jiang Z, Zhang H, Wang Y, Ko SB. Retinal blood vessel segmentation using fully convolutional network with transfer learning. Comput Med Imaging Graph 2018; 68:1-15. [DOI: 10.1016/j.compmedimag.2018.04.005] [Citation(s) in RCA: 103] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2017] [Revised: 04/10/2018] [Accepted: 04/13/2018] [Indexed: 11/25/2022]
292
Jin M, Hao D, Ding S, Qin B. Low-rank and sparse decomposition with spatially adaptive filtering for sequential segmentation of 2D+t vessels. Phys Med Biol 2018; 63:17LT01. [DOI: 10.1088/1361-6560/aad8e0] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Mingxin Jin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
293
An Intelligent Model for Blood Vessel Segmentation in Diagnosing DR Using CNN. J Med Syst 2018; 42:175. [DOI: 10.1007/s10916-018-1030-6] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2018] [Accepted: 08/03/2018] [Indexed: 10/28/2022]
294
Cunefare D, Langlo CS, Patterson EJ, Blau S, Dubra A, Carroll J, Farsiu S. Deep learning based detection of cone photoreceptors with multimodal adaptive optics scanning light ophthalmoscope images of achromatopsia. BIOMEDICAL OPTICS EXPRESS 2018; 9:3740-3756. [PMID: 30338152 PMCID: PMC6191607 DOI: 10.1364/boe.9.003740] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2018] [Revised: 07/15/2018] [Accepted: 07/15/2018] [Indexed: 05/18/2023]
Abstract
Fast and reliable quantification of cone photoreceptors is a bottleneck in the clinical utilization of adaptive optics scanning light ophthalmoscope (AOSLO) systems for the study, diagnosis, and prognosis of retinal diseases. To date, manual grading has been the sole reliable source of AOSLO quantification, as no automatic method has been reliably utilized for cone detection in real-world, low-quality images of diseased retina. We present a novel deep learning based approach that combines information from both the confocal and non-confocal split detector AOSLO modalities to detect cones in subjects with achromatopsia. Our dual-mode deep learning based approach outperforms state-of-the-art automated techniques and is on a par with human grading.
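One straightforward way to combine the two modalities is channel stacking: register the confocal and split-detector images and feed them to a single network as a two-channel input. The PyTorch layer sizes below are illustrative, not the authors' architecture.

import torch
import torch.nn as nn

class DualModeCNN(nn.Module):
    # Two registered AOSLO modalities enter as two channels so the
    # network can weigh evidence from both when classifying a patch.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classify = nn.Linear(64, 2)  # cone vs. not-cone patch

    def forward(self, confocal, split_detector):
        x = torch.cat([confocal, split_detector], dim=1)  # (N, 2, H, W)
        return self.classify(self.features(x).flatten(1))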
Affiliation(s)
- David Cunefare
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Christopher S. Langlo
- Department of Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Emily J. Patterson
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Sarah Blau
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Joseph Carroll
- Department of Cell Biology, Neurobiology, and Anatomy, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA
- Sina Farsiu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA
295
Lam C, Yu C, Huang L, Rubin D. Retinal Lesion Detection With Deep Learning Using Image Patches. Invest Ophthalmol Vis Sci 2018; 59:590-596. [PMID: 29372258 PMCID: PMC5788045 DOI: 10.1167/iovs.17-22721] [Citation(s) in RCA: 71] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
Purpose To develop an automated method for localizing and discerning multiple types of findings in retinal images using a limited set of training data, without hard-coded feature extraction, as a step toward generalizing these methods to rare-disease detection, where only limited training data are available. Methods Two ophthalmologists verified 243 retinal images, labeling important subsections of each image to generate 1324 image patches containing hemorrhages, microaneurysms, exudates, retinal neovascularization, or normal-appearing structures from the Kaggle dataset. These image patches were used to train one standard convolutional neural network to predict the presence of these five classes. A sliding-window method was used to generate probability maps across the entire image. Results The method was validated on the e-ophtha dataset of 148 whole retinal images for microaneurysms and 47 for exudates. A pixel-wise area under the receiver operating characteristic curve of 0.94 and 0.95, as well as a lesion-wise area under the precision-recall curve of 0.86 and 0.64, was achieved for microaneurysms and exudates, respectively. Conclusions Regionally trained convolutional neural networks can generate lesion-specific probability maps able to detect and distinguish between subtle pathologic lesions with only a few hundred training examples per lesion.
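The sliding-window step is simple to sketch: classify every patch position and collect the per-lesion probabilities into a coarse map. The patch size, stride, and predict_proba interface below are our assumptions, not values from the paper.

import numpy as np

def probability_map(image, model, patch=65, stride=16, lesion_class=1):
    # Slide a window over the retinal image and record the probability
    # of the chosen lesion class at each position.
    h, w = image.shape[:2]
    ys = list(range(0, h - patch + 1, stride))
    xs = list(range(0, w - patch + 1, stride))
    heat = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        batch = np.stack([image[y:y + patch, x:x + patch] for x in xs])
        heat[i] = model.predict_proba(batch)[:, lesion_class]
    return heat

Running one such map per lesion class yields the lesion-specific probability maps described in the conclusions.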
Affiliation(s)
- Carson Lam
- Department of Biomedical Data Science, Stanford University, Stanford, California, United States
- Department of Ophthalmology, Santa Clara Valley Medical Center, San Jose, California, United States
- Caroline Yu
- Stanford University School of Medicine, Stanford, California, United States
- Laura Huang
- Department of Ophthalmology, Stanford University School of Medicine, Stanford, California, United States
- Daniel Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, California, United States
- Department of Ophthalmology, Stanford University School of Medicine, Stanford, California, United States
- Department of Radiology, Stanford University School of Medicine, Stanford, California, United States
296
Ma Z, Wu X, Song Q, Luo Y, Wang Y, Zhou J. Automated nasopharyngeal carcinoma segmentation in magnetic resonance images by combination of convolutional neural networks and graph cut. Exp Ther Med 2018; 16:2511-2521. [PMID: 30210602 PMCID: PMC6122541 DOI: 10.3892/etm.2018.6478] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2017] [Accepted: 06/22/2018] [Indexed: 02/05/2023] Open
Abstract
Accurate and reliable segmentation of nasopharyngeal carcinoma (NPC) in medical images is an important task for clinical applications, including radiotherapy. However, NPC exhibits large variations in lesion size and shape, inhomogeneous intensities within the tumor, and intensities similar to those of nearby tissues, making its segmentation challenging. The present study proposes a novel automated NPC segmentation method for magnetic resonance (MR) images that combines a deep convolutional neural network (CNN) model and a 3-dimensional (3D) graph-cut-based method in a two-stage manner. First, a multi-view deep CNN-based segmentation is performed: a voxel-wise initial segmentation is generated by integrating the inferential classification information of three trained single-view CNNs. Instead of directly using the CNN classification results as the final segmentation, the proposed method refines the initial segmentation with a 3D graph-cut-based method. Specifically, the probability response map obtained with the multi-view CNN is used to calculate the region cost, which represents the likelihood of a voxel being assigned to tumor or non-tumor, while 3D structure information from the original MR images is used to calculate the boundary cost, which measures the difference between two voxels in a 3D neighborhood. The proposed method was evaluated on T1-weighted images from 30 NPC patients using the leave-one-out method. The experimental results demonstrate that the proposed method is effective and accurate for NPC segmentation.
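The two cost terms map naturally onto a few lines of NumPy. The sketch below derives a negative-log-likelihood region cost from the CNN probability map and an intensity-difference boundary cost along one axis; constructing the 3D graph and running a max-flow solver are omitted, and the Gaussian weighting is a common choice of ours rather than the paper's exact formula.

import numpy as np

def graph_cut_costs(prob_map, volume, sigma=10.0):
    eps = 1e-6
    # Region (unary) cost per label, from the multi-view CNN output.
    region_cost = np.stack([-np.log(1 - prob_map + eps),  # non-tumor
                            -np.log(prob_map + eps)])     # tumor
    # Boundary (pairwise) cost between axial neighbors: a large
    # intensity jump makes a cut between the two voxels cheap.
    diff = np.diff(volume.astype(float), axis=0)
    boundary_cost = np.exp(-(diff ** 2) / (2 * sigma ** 2))
    return region_cost, boundary_cost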
Affiliation(s)
- Zongqing Ma
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, P.R. China
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan 610225, P.R. China
- Qi Song
- CuraCloud Corp., Seattle, WA 98104, USA
- Yong Luo
- Department of Head and Neck and Mammary Oncology, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, P.R. China
- Yan Wang
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, P.R. China
- Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, P.R. China
- School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan 610225, P.R. China
297
A Modified Dolph-Chebyshev Type II Function Matched Filter for Retinal Vessels Segmentation. Symmetry (Basel) 2018. [DOI: 10.3390/sym10070257] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
298
Srinidhi CL, Aparna P, Rajan J. A visual attention guided unsupervised feature learning for robust vessel delineation in retinal images. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2018.04.016] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
299
Fu X, Liu T, Xiong Z, Smaill BH, Stiles MK, Zhao J. Segmentation of histological images and fibrosis identification with a convolutional neural network. Comput Biol Med 2018; 98:147-158. [PMID: 29793096 DOI: 10.1016/j.compbiomed.2018.05.015] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Revised: 05/13/2018] [Accepted: 05/14/2018] [Indexed: 11/16/2022]
Abstract
Segmentation of histological images is one of the most crucial tasks in many biomedical analyses involving quantification of certain tissue types, such as fibrosis via Masson's trichrome staining. However, the high variability and complexity of structural features in such images, in addition to imaging artifacts, pose challenges. Further, the conventional approach of manual thresholding is labor-intensive and highly sensitive to inter- and intra-image intensity variations. An accurate and robust automated segmentation method is therefore of high interest. We propose and evaluate an elegant convolutional neural network (CNN) designed for the segmentation of histological images, particularly those with Masson's trichrome stain. The network comprises 11 successive convolution, rectified-linear-unit, and batch-normalization layers. It outperformed state-of-the-art CNNs on a dataset of cardiac histological images (labeling fibrosis, myocytes, and background), with a Dice similarity coefficient of 0.947. With 100 times fewer trainable parameters than the state of the art (only 300,000), our CNN is less susceptible to overfitting and is efficient. Additionally, it retains image resolution from input to output, captures fine-grained details, and can be trained end-to-end smoothly. To the best of our knowledge, this is the first deep CNN tailored to the problem of concern, and it may potentially be extended to similar segmentation tasks to facilitate investigations into pathology and clinical treatment.
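A resolution-preserving stack of that shape is easy to write down. In the PyTorch sketch below the channel width is our choice, picked so the parameter count lands near the quoted 300,000; the abstract does not give the exact widths.

import torch.nn as nn

def make_histology_net(n_classes=3, width=64, n_blocks=10):
    # Stack of convolution -> ReLU -> batch-norm blocks with no
    # downsampling, so the output keeps the input resolution; a final
    # 1x1 convolution maps features to per-pixel class scores.
    layers, in_ch = [], 3
    for _ in range(n_blocks):
        layers += [nn.Conv2d(in_ch, width, 3, padding=1),
                   nn.ReLU(inplace=True),
                   nn.BatchNorm2d(width)]
        in_ch = width
    layers.append(nn.Conv2d(in_ch, n_classes, 1))
    return nn.Sequential(*layers)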
Affiliation(s)
- Xiaohang Fu
- Auckland Bioengineering Institute, The University of Auckland, Auckland, 1142, New Zealand
- Tong Liu
- Department of Cardiology, Second Hospital of Tianjin Medical University, and Tianjin Key Laboratory of Ionic-Molecular Function of Cardiovascular Disease, Tianjin Institute of Cardiology, Tianjin, 300201, PR China
- Zhaohan Xiong
- Auckland Bioengineering Institute, The University of Auckland, Auckland, 1142, New Zealand
- Bruce H Smaill
- Auckland Bioengineering Institute, The University of Auckland, Auckland, 1142, New Zealand
- Jichao Zhao
- Auckland Bioengineering Institute, The University of Auckland, Auckland, 1142, New Zealand
300
Huang F, Dashtbozorg B, Tan T, Ter Haar Romeny BM. Retinal artery/vein classification using genetic-search feature selection. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 161:197-207. [PMID: 29852962 DOI: 10.1016/j.cmpb.2018.04.016] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2017] [Revised: 03/09/2018] [Accepted: 04/17/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVES The automatic classification of retinal blood vessels into artery and vein (A/V) is still a challenging task in retinal image analysis. Recent works on A/V classification focus mainly on graph analysis of the retinal vasculature, which exploits the connectivity of vessels to improve classification performance, but they have overlooked the importance of pixel-wise classification to the final results. This paper shows that a rich feature set is effective for classifying vessel centerline pixels. METHODS We extract a large number of features for vessel centerline pixels and apply a genetic-search-based feature selection technique to obtain the optimal feature subset for A/V classification. RESULTS The proposed method achieves an accuracy of 90.2%, a sensitivity of 89.6%, and a specificity of 91.3% on the INSPIRE dataset. This shows that our method, using only the information of centerline pixels, performs comparably to techniques that use complicated graph analysis. In addition, results on images acquired with different fundus cameras show that our framework discriminates vessels independently of the imaging device characteristics, image resolution, and image quality. CONCLUSION A rich feature set is essential for A/V classification, especially on individual vessels, where graph-based methods face limitations; it could also provide a stronger starting point for graph analysis to achieve better A/V labeling.
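Genetic-search feature selection of this kind evolves a population of binary feature masks. The sketch below implements the generic loop (fitness-based selection, single-point crossover, bit-flip mutation); the fitness callable, population size, and mutation rate are our placeholders, not the paper's settings.

import numpy as np

def genetic_feature_search(X, y, fitness, n_gen=50, pop=40, p_mut=0.02, seed=0):
    # fitness(X[:, mask], y) is assumed to return a score to maximize,
    # e.g. cross-validated A/V classification accuracy.
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    population = rng.random((pop, n_feat)) < 0.5

    def score_all(masks):
        return np.array([fitness(X[:, m], y) if m.any() else -np.inf
                         for m in masks])

    for _ in range(n_gen):
        scores = score_all(population)
        parents = population[np.argsort(scores)[::-1][:pop // 2]]  # fittest half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])  # single-point crossover
            child ^= rng.random(n_feat) < p_mut         # bit-flip mutation
            children.append(child)
        population = np.vstack([parents] + children)
    return population[score_all(population).argmax()]   # best feature mask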
Affiliation(s)
- Fan Huang
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Behdad Dashtbozorg
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Tao Tan
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Mammography, ScreenPoint Medical, Nijmegen, The Netherlands
- Bart M Ter Haar Romeny
- Department of Biomedical and Information Engineering, Northeastern University, Shenyang, China
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands