301
Pal A, Garain U, Chandra A, Chatterjee R, Senapati S. Psoriasis skin biopsy image segmentation using Deep Convolutional Neural Network. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 159:59-69. [PMID: 29650319] [DOI: 10.1016/j.cmpb.2018.01.027]
Abstract
BACKGROUND AND OBJECTIVE Development of machine-assisted tools for automatic analysis of psoriasis skin biopsy images plays an important role in clinical assistance. Development of an automatic approach for accurate segmentation of psoriasis skin biopsy images is the initial prerequisite for developing such a system. However, the complex cellular structure, the presence of imaging artifacts, and uneven staining variation make the task challenging. This paper presents a pioneering attempt at automatic segmentation of psoriasis skin biopsy images. METHODS Several deep neural architectures are tried for segmenting psoriasis skin biopsy images. Deep models are used for classifying the super-pixels generated by Simple Linear Iterative Clustering (SLIC), and the segmentation performance of these architectures is compared with that of traditional classifiers built on hand-crafted features, using popular algorithms such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM) and Random Forest (RF). A U-shaped Fully Convolutional Neural Network (FCN) is also used in an end-to-end learning fashion, where the input is the original color image and the output is the segmentation class map for the skin layers. RESULTS An annotated real psoriasis skin biopsy image data set of ninety (90) images is developed and used for this research. The segmentation performance is evaluated with two metrics, namely Jaccard's Coefficient (JC) and the Ratio of Correct Pixel Classification (RCPC) accuracy. The experimental results show that the CNN-based approaches outperform the traditional hand-crafted feature based classification approaches. CONCLUSIONS The present research shows that a practical system can be developed for machine-assisted analysis of psoriasis disease.
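The two evaluation metrics named in the abstract are standard; a minimal sketch of how they could be computed for binary and multi-class label maps (illustrative only, not the authors' code; the array names and toy data are assumptions):

```python
import numpy as np

def jaccard_coefficient(pred, gt):
    """Jaccard's Coefficient (JC): |P intersect G| / |P union G| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 1.0

def rcpc(pred_labels, gt_labels):
    """Ratio of Correct Pixel Classification: fraction of pixels whose
    predicted class label matches the annotated class map."""
    return np.mean(pred_labels == gt_labels)

# toy usage with hypothetical 4-class skin-layer label maps
gt = np.random.randint(0, 4, size=(256, 256))
pred = gt.copy()
pred[:10] = 0  # corrupt a few rows to simulate segmentation errors
print(jaccard_coefficient(pred == 1, gt == 1), rcpc(pred, gt))
```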
Affiliation(s)
- Anabik Pal: CVPR Unit, Indian Statistical Institute, Kolkata 700108, India.
- Utpal Garain: CVPR Unit, Indian Statistical Institute, Kolkata 700108, India.
- Aditi Chandra: Human Genetics Unit, Indian Statistical Institute, Kolkata, West Bengal 700108, India.
- Raghunath Chatterjee: Human Genetics Unit, Indian Statistical Institute, Kolkata, West Bengal 700108, India.
- Swapan Senapati: Consultant Dermatologist, Uttarpara, Hooghly, West Bengal 712258, India.
302
Moccia S, De Momi E, El Hadji S, Mattos LS. Blood vessel segmentation algorithms - Review of methods, datasets and evaluation metrics. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 158:71-91. [PMID: 29544791] [DOI: 10.1016/j.cmpb.2018.02.001]
Abstract
BACKGROUND Blood vessel segmentation is a topic of high interest in medical image analysis, since the analysis of vessels is crucial for diagnosis, treatment planning and execution, and evaluation of clinical outcomes in different fields, including laryngology, neurosurgery and ophthalmology. Automatic or semi-automatic vessel segmentation can support clinicians in performing these tasks. Different medical imaging techniques are currently used in clinical practice, and an appropriate choice of the segmentation algorithm is mandatory to deal with the characteristics of the adopted imaging technique (e.g. resolution, noise and vessel contrast). OBJECTIVE This paper aims at reviewing the most recent and innovative blood vessel segmentation algorithms. Among the algorithms and approaches considered, we investigated in depth the most novel blood vessel segmentation methods, including machine learning-, deformable model-, and tracking-based approaches. METHODS This paper analyzes more than 100 articles focused on blood vessel segmentation methods. For each analyzed approach, summary tables are presented reporting the imaging technique used, the anatomical region, and the performance measures employed. Benefits and disadvantages of each method are highlighted. DISCUSSION Despite the constant progress and effort in the field, several issues still need to be overcome. A relevant limitation is the segmentation of pathological vessels; unfortunately, no consistent research effort has yet been devoted to this issue. Research is needed because some of the main assumptions made for healthy vessels (such as linearity and circular cross-section) do not hold in pathological tissues, which therefore require new vessel model formulations. Moreover, image intensity drops, noise and low contrast still represent an important obstacle to achieving high-quality enhancement. This is particularly true for optical imaging, where image quality is usually lower in terms of noise and contrast than in magnetic resonance and computed tomography angiography. CONCLUSION No single segmentation approach is suitable for all the different anatomical regions or imaging modalities; thus, the primary goal of this review is to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms, so that the most suitable methods can be chosen according to the specific task.
Affiliation(s)
- Sara Moccia: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy; Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy.
- Elena De Momi: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy.
- Sara El Hadji: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy.
- Leonardo S Mattos: Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy.
303
Xu Z, Gao M, Papadakis GZ, Luna B, Jain S, Mollura DJ, Bagci U. Joint solution for PET image segmentation, denoising, and partial volume correction. Med Image Anal 2018; 46:229-243. [PMID: 29627687] [PMCID: PMC6080255] [DOI: 10.1016/j.media.2018.03.007]
Abstract
Segmentation, denoising, and partial volume correction (PVC) are three major processes in the quantification of uptake regions in post-reconstruction PET images. These problems are conventionally addressed in independent steps. In this study, we hypothesize that these three processes are interdependent; therefore, jointly solving them can provide optimal support for quantification of PET images. To achieve this, we utilize the interactions among these processes when designing solutions for each challenge. We also demonstrate that segmentation can help in denoising and PVC by locally constraining the smoothness and correction criteria. For denoising, we adapt the generalized Anscombe transformation to Gaussianize the multiplicative noise, followed by a new adaptive smoothing algorithm called regional mean denoising. For PVC, we propose a volume consistency-based iterative voxel-based correction algorithm in which denoised and delineated PET images precisely guide the correction process during each iteration. For PET image segmentation, we use an affinity propagation (AP)-based iterative clustering method that helps integrate the PVC and denoising algorithms into the delineation process. Qualitative and quantitative results, obtained from phantom, clinical, and pre-clinical data, show that the proposed framework provides an improved and joint solution for segmentation, denoising, and partial volume correction.
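The generalized Anscombe transformation mentioned above is a standard variance-stabilizing transform for mixed Poisson-Gaussian noise; a minimal sketch, with hypothetical gain and offset parameters and not the authors' implementation:

```python
import numpy as np

def generalized_anscombe(x, gain=1.0, sigma=0.0, mu=0.0):
    """Generalized Anscombe transform: approximately Gaussianizes mixed
    Poisson-Gaussian noise with detector gain `gain`, Gaussian std `sigma`
    and Gaussian mean `mu` (for pure Poisson data use gain=1, sigma=mu=0)."""
    arg = gain * x + (3.0 / 8.0) * gain ** 2 + sigma ** 2 - gain * mu
    return (2.0 / gain) * np.sqrt(np.maximum(arg, 0.0))

# toy usage on a synthetic noisy "PET-like" image
rng = np.random.default_rng(0)
clean = np.full((64, 64), 20.0)
noisy = rng.poisson(clean) + rng.normal(0.0, 2.0, clean.shape)
stabilized = generalized_anscombe(noisy, gain=1.0, sigma=2.0)
print(stabilized.std())  # noise is now approximately unit-variance
```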
Affiliation(s)
- Ziyue Xu: Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA.
- Mingchen Gao: Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA.
- Georgios Z Papadakis: Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA.
- Brian Luna: University of California at Irvine, Irvine, CA, USA.
- Sanjay Jain: Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- Daniel J Mollura: Center for Infectious Disease Imaging (CIDI), Radiology and Imaging Science Department, National Institutes of Health (NIH), Bethesda, MD 20892, USA.
- Ulas Bagci: University of Central Florida, Orlando, FL, USA.
304
He JY, Wu X, Jiang YG, Peng Q, Jain R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING 2018; 27:2379-2392. [PMID: 29470172] [DOI: 10.1109/tip.2018.2801119]
Abstract
As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity and seriously threatens human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection; unfortunately, it remains a challenging task. In recent years, deep convolutional neural networks (CNNs) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models the visual appearances and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNN networks, namely an edge extraction network and a hookworm classification network, are seamlessly integrated in the proposed framework, which avoids edge feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network with the feature maps from the hookworm classification network, leading to enhanced feature maps emphasizing the tubular regions. Experiments have been conducted on one of the largest WCE datasets, demonstrating the effectiveness of the proposed hookworm detection framework: it significantly outperforms the state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application.
305
Yan Z, Yang X, Cheng KT. Joint Segment-Level and Pixel-Wise Losses for Deep Learning Based Retinal Vessel Segmentation. IEEE Trans Biomed Eng 2018; 65:1912-1923. [PMID: 29993396] [DOI: 10.1109/tbme.2018.2828137]
Abstract
OBJECTIVE Deep learning based methods for retinal vessel segmentation are usually trained with pixel-wise losses, which treat all vessel pixels with equal importance in the pixel-to-pixel matching between a predicted probability map and the corresponding manually annotated segmentation. However, due to the highly imbalanced pixel ratio between thick and thin vessels in fundus images, a pixel-wise loss limits deep learning models in learning features for accurate segmentation of thin vessels, which is an important task for the clinical diagnosis of eye-related diseases. METHODS In this paper, we propose a new segment-level loss which places more emphasis on the thickness consistency of thin vessels during training. By jointly adopting both the segment-level and the pixel-wise losses, the importance of thick and thin vessels in the loss calculation becomes more balanced. As a result, more effective features can be learned for vessel segmentation without increasing the overall model complexity. RESULTS Experimental results on public data sets demonstrate that the model trained with the joint losses outperforms the current state-of-the-art methods in both separate-training and cross-training evaluations. CONCLUSION Compared to the pixel-wise loss alone, the proposed joint-loss framework learns more distinguishable features for vessel segmentation. In addition, the segment-level loss brings consistent performance improvement for both deep and shallow network architectures. SIGNIFICANCE The findings from this study of using joint losses can be applied to other deep learning models for performance improvement without significantly changing the network architectures.
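The exact segment-level term is defined in the paper; the sketch below only illustrates the general idea of balancing a pixel-wise loss with a per-segment term so that thin segments are not drowned out by thick ones. Here "segments" are crude connected components of the annotation, whereas the paper works on skeleton segments; the weights and criterion are assumptions, not the authors' formulation:

```python
import numpy as np
from scipy.ndimage import label

def pixel_bce(prob, gt, eps=1e-7):
    """Standard pixel-wise binary cross-entropy."""
    prob = np.clip(prob, eps, 1.0 - eps)
    return -np.mean(gt * np.log(prob) + (1 - gt) * np.log(1 - prob))

def segment_level_term(prob, gt, thr=0.5):
    """Average error computed per connected vessel region of the annotation,
    so that a thin region contributes as much as a thick one."""
    labels, n = label(gt > 0)
    if n == 0:
        return 0.0
    errs = [np.mean(np.abs((prob > thr).astype(float) - gt)[labels == i])
            for i in range(1, n + 1)]
    return float(np.mean(errs))

def joint_loss(prob, gt, alpha=0.5):
    """Weighted combination of the pixel-wise and segment-level terms."""
    return alpha * pixel_bce(prob, gt) + (1 - alpha) * segment_level_term(prob, gt)
```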
306
Lee H, Troschel FM, Tajmir S, Fuchs G, Mario J, Fintelmann FJ, Do S. Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis. J Digit Imaging 2018; 30:487-498. [PMID: 28653123] [PMCID: PMC5537099] [DOI: 10.1007/s10278-017-9988-z]
Abstract
Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an “eyeball test” to assess whether patients will tolerate major surgery or chemotherapy, “eyeballing” is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, the determination of the morphometric age is time intensive and requires highly trained experts. In this study, we propose a fully automated deep learning system for the segmentation of skeletal muscle cross-sectional area (CSA) on an axial computed tomography image taken at the third lumbar vertebra. We utilized a fully automated deep segmentation model derived from an extended implementation of a fully convolutional network with weight initialization of an ImageNet pre-trained model, followed by post processing to eliminate intramuscular fat for a more accurate analysis. This experiment was conducted by varying window level (WL), window width (WW), and bit resolutions in order to better understand the effects of the parameters on the model performance. Our best model, fine-tuned on 250 training images and ground truth labels, achieves 0.93 ± 0.02 Dice similarity coefficient (DSC) and 3.68 ± 2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. Ultimately, the fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and expanded to volume analysis of 3D datasets.
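The two reported quantities are standard; a minimal sketch of how the Dice similarity coefficient and the relative cross-sectional-area (CSA) difference could be computed for a binary muscle mask (illustrative only; the pixel spacing and toy masks are hypothetical):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|P intersect G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def csa_percent_difference(pred, gt, pixel_area_mm2=0.8 * 0.8):
    """Relative difference between predicted and ground-truth muscle CSA."""
    csa_pred = pred.sum() * pixel_area_mm2
    csa_gt = gt.sum() * pixel_area_mm2
    return 100.0 * abs(csa_pred - csa_gt) / csa_gt

# toy masks standing in for an L3-level CT muscle segmentation
gt = np.zeros((512, 512), bool); gt[200:300, 100:400] = True
pred = np.zeros_like(gt); pred[203:300, 105:400] = True
print(dice(pred, gt), csa_percent_difference(pred, gt))
```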
Affiliation(s)
- Hyunkwang Lee: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Fabian M. Troschel: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Shahein Tajmir: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Georg Fuchs: Department of Radiology, Charite - Universitaetsmedizin Berlin, Chariteplatz 1, 10117 Berlin, Germany.
- Julia Mario: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Florian J. Fintelmann: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Synho Do: Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
307
Wan T, Shang X, Yang W, Chen J, Li D, Qin Z. Automated coronary artery tree segmentation in X-ray angiography using improved Hessian based enhancement and statistical region merging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 157:179-190. [PMID: 29477426] [DOI: 10.1016/j.cmpb.2018.01.002]
Abstract
BACKGROUND AND OBJECTIVE Coronary artery segmentation is a fundamental step in building a computer-aided diagnosis system that assists cardiothoracic radiologists in detecting coronary artery disease. Manual delineation of the vasculature becomes tedious or even impossible given the large number of images acquired in daily clinical practice. A new computerized image-based segmentation method is presented for automatically extracting coronary arteries from angiography images. METHODS A combination of a multiscale adaptive Hessian-based enhancement method and a statistical region merging technique provides a simple and effective way to improve the delineation of complex vessel structures as well as thin vessels, which are often missed by other segmentation methods. The methodology was validated on 100 patients who underwent diagnostic coronary angiography. The segmentation performance was assessed via both qualitative and quantitative evaluations. RESULTS Quantitative evaluation shows that our method is able to identify coronary artery trees with an accuracy of 93% and outperforms other segmentation methods in terms of two widely used segmentation metrics, mean absolute difference and Dice similarity coefficient. CONCLUSIONS The comparison to the manual segmentations from three human observers suggests that the presented automated segmentation method has the potential to be used in an image-based computerized analysis system for early detection of coronary artery disease.
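Hessian-based enhancement of tubular structures is commonly implemented as multiscale Frangi vesselness filtering; a minimal sketch with scikit-image (illustrative only, not the authors' improved filter, and the scale range, Otsu stand-in for region merging, and toy image are assumptions):

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu

def enhance_and_segment_vessels(angio, sigmas=range(1, 6)):
    """Multiscale Hessian (Frangi) vesselness followed by a global threshold.
    The paper refines this with an improved enhancement and statistical
    region merging; here a plain Otsu threshold stands in for that step."""
    vesselness = frangi(angio.astype(float), sigmas=sigmas, black_ridges=True)
    mask = vesselness > threshold_otsu(vesselness)
    return vesselness, mask

# toy usage on a synthetic image with a dark "vessel" on a bright background
img = np.ones((128, 128))
img[:, 60:64] = 0.2
vesselness, mask = enhance_and_segment_vessels(img)
print(mask.sum())
```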
Affiliation(s)
- Tao Wan: Medical Image Analysis Lab, School of Biomedical Science and Medical Engineering, Beihang University, Beijing 100191, China.
- Xiaoqing Shang: Medical Image Analysis Lab, School of Biomedical Science and Medical Engineering, Beihang University, Beijing 100191, China.
- Weilin Yang: School of Biomedical Science and Medical Engineering, Beihang University, Beijing 100191, China.
- Jianhui Chen: No. 91 Central Hospital of PLA, Henan 454003, China.
- Deyu Li: School of Biomedical Science and Medical Engineering, Beihang University, Beijing 100191, China.
- Zengchang Qin: Intelligent Computing and Machine Learning Lab, School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China.
308
Asem MM, Oveisi IS, Janbozorgi M. Blood vessel segmentation in modern wide-field retinal images in the presence of additive Gaussian noise. J Med Imaging (Bellingham) 2018. [PMID: 29531969] [DOI: 10.1117/1.jmi.5.3.031405]
Abstract
Retinal blood vessels can indicate serious health conditions, such as cardiovascular disease and stroke. Thanks to modern imaging technology, high-resolution images provide detailed information to help analyze retinal vascular features before symptoms associated with such conditions fully develop. Additionally, these retinal images can be used by ophthalmologists to facilitate diagnosis and eye surgery procedures. A fuzzy noise reduction algorithm was employed to enhance color images corrupted by Gaussian noise. The present paper proposes employing contrast limited adaptive histogram equalization to enhance illumination and increase the contrast of retinal images captured from state-of-the-art cameras. Possessing directional properties, the multistructure elements method can lead to high-performance edge detection; therefore, multistructure element-based morphology operators are used to detect high-quality image ridges. Following this detection, the irrelevant ridges, which are not part of the vessel tree, are removed by morphological reconstruction operators while attempting to preserve the thin vessels. A combined method of connected components analysis (CCA) in conjunction with a thresholding approach is further used to identify the ridges that correspond to vessels. The application of CCA can yield higher efficiency when it is applied locally rather than on the whole image. The significance of our work lies in the way in which several methods are effectively combined and in the originality of the database employed, making this work unique in the literature. Computer simulation results on wide-field retinal images with up to a 200-deg field of view testify to the efficacy of the proposed approach, with an accuracy of 0.9524.
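Contrast limited adaptive histogram equalization and connected components analysis are both available in scikit-image; a minimal sketch of that portion of the pipeline (illustrative only; the fuzzy denoising, multistructure morphology and local CCA of the paper are not reproduced, and the thresholds are assumptions):

```python
import numpy as np
from skimage import exposure, measure, morphology

def clahe_then_cca(green_channel, ridge_threshold=0.5, min_size=64):
    """CLAHE contrast enhancement followed by thresholding and connected
    components analysis that keeps only sizeable ridge components."""
    enhanced = exposure.equalize_adapthist(green_channel, clip_limit=0.03)
    ridges = enhanced > ridge_threshold        # stand-in for morphological ridge detection
    labels = measure.label(ridges, connectivity=2)
    cleaned = morphology.remove_small_objects(labels, min_size=min_size) > 0
    return enhanced, cleaned

# toy usage with a hypothetical normalized fundus green channel
rng = np.random.default_rng(1)
green = rng.random((256, 256))
enhanced, vessel_mask = clahe_then_cca(green)
print(vessel_mask.sum())
```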
Affiliation(s)
- Iman Sheikh Oveisi: Islamic Azad University, Department of Biomedical Engineering, Science and Research, Tehran, Iran.
- Mona Janbozorgi: Washington State University, Department of Medical Sciences, Spokane, Washington, United States.
309
Costa P, Galdran A, Meyer MI, Niemeijer M, Abramoff M, Mendonca AM, Campilho A. End-to-End Adversarial Retinal Image Synthesis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:781-791. [PMID: 28981409] [DOI: 10.1109/tmi.2017.2759102]
Abstract
In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data are often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model that attempts to classify its output as real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost-everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while also being anatomically consistent and displaying a reasonable visual quality.
310
Khan KB, Khaliq AA, Jalil A, Shahid M. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising. PLoS One 2018; 13:e0192203. [PMID: 29432464] [PMCID: PMC5809116] [DOI: 10.1371/journal.pone.0192203]
Abstract
The exploration of retinal vessel structure is colossally important on account of numerous diseases, including stroke, Diabetic Retinopathy (DR) and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and contrast variation within an image. The proposed technique consists of parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing stage, adaptive histogram equalization enhances the dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula and optic disc, etc. To remove local noise, the difference image is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales to enhance vessels of diverse widths. Segmentation is performed by applying improved Otsu thresholding to the high-boost filtered image and to Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation. The postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.
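Two of the building blocks named above, separate Otsu thresholding of two enhanced images and the final pixel-wise AND fusion, are easy to illustrate; a minimal sketch with scikit-image (the top-hat/high-boost preprocessing and the VLM extraction of the paper are only crudely stubbed in, and all parameters are assumptions):

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import white_tophat, disk

def segment_by_and_fusion(fundus_green):
    """Threshold two differently enhanced versions of the image separately
    with Otsu, then fuse the binary maps with a pixel-wise AND, mimicking the
    VLM/Frangi fusion step of the paper in a very simplified form."""
    img = fundus_green.astype(float)
    inverted = img.max() - img                     # vessels become bright
    tophat = white_tophat(inverted, disk(8))       # emphasizes thin bright structures
    vesselness = frangi(img, sigmas=range(1, 5))   # dark vessels on the green channel
    mask_a = tophat > threshold_otsu(tophat)
    mask_b = vesselness > threshold_otsu(vesselness)
    return np.logical_and(mask_a, mask_b)          # pixel-by-pixel AND fusion
```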
Affiliation(s)
- Khan Bahadar Khan: Department of Telecommunication Engineering, The Islamia University Bahawalpur, Pakistan; Department of Electronic Engineering, International Islamic University, Islamabad, Pakistan.
- Amir A. Khaliq: Department of Electronic Engineering, International Islamic University, Islamabad, Pakistan.
- Abdul Jalil: Department of Electronic Engineering, International Islamic University, Islamabad, Pakistan.
- Muhammad Shahid: Al-Khawarizmi Institute of Computer Science, UET Lahore, Pakistan.
311
Abstract
Medical image segmentation is a fundamental and challenging problem in analyzing medical images. Among the existing medical image segmentation methods, graph-based approaches are relatively new and show good properties in clinical applications. In a graph-based method, pixels or regions of the original image are interpreted as nodes of a graph. By using a Markov random field to model the contextual information of the image, the medical image segmentation problem can be transformed into a graph-based energy minimization problem, which can be solved with the minimum s-t cut / maximum flow algorithm. This review is devoted to cut-based medical segmentation methods, including graph cuts and graph search for region and surface segmentation. Different varieties of cut-based methods, including graph-cuts-based methods, model-integrated graph cuts methods, graph-search-based methods, and combined graph search/graph cuts methods, are systematically reviewed. Graph cuts and graph search combined with deep learning techniques are also discussed.
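The energy-minimization formulation described above typically sums per-pixel data terms and pairwise smoothness terms and, for two labels, is solved exactly by a minimum s-t cut; a minimal sketch using the PyMaxflow package (illustrative only; the Gaussian unary terms and all parameters are assumptions, not an MRF learned from data):

```python
import numpy as np
import maxflow  # PyMaxflow

def binary_graph_cut(image, mu_fg=0.8, mu_bg=0.2, sigma=0.2, smoothness=2.0):
    """Two-label MRF segmentation: Gaussian intensity data terms as t-links,
    a constant Potts smoothness term on the grid neighborhood as n-links,
    solved exactly by the minimum s-t cut / maximum flow algorithm."""
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes(image.shape)
    g.add_grid_edges(node_ids, weights=smoothness, symmetric=True)  # smoothness terms
    cost_fg = (image - mu_fg) ** 2 / (2 * sigma ** 2)  # negative log-likelihoods (up to a constant)
    cost_bg = (image - mu_bg) ** 2 / (2 * sigma ** 2)
    g.add_grid_tedges(node_ids, cost_fg, cost_bg)      # data terms as terminal links
    g.maxflow()
    return g.get_grid_segments(node_ids)               # boolean map: the two sides of the cut

# toy usage: a bright square on a dark, noisy background
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.1, (64, 64)); img[20:40, 20:40] += 0.6
print(binary_graph_cut(img).sum())
```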
312
Retinal Vessels Segmentation Techniques and Algorithms: A Survey. APPLIED SCIENCES-BASEL 2018. [DOI: 10.3390/app8020155]
313
Deep Learning for Medical Image Processing: Overview, Challenges and the Future. LECTURE NOTES IN COMPUTATIONAL VISION AND BIOMECHANICS 2018. [DOI: 10.1007/978-3-319-65981-7_12]
314
Zhang Y, Chung ACS. Deep Supervision with Additional Labels for Retinal Vessel Segmentation Task. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION – MICCAI 2018, 2018. [DOI: 10.1007/978-3-030-00934-2_10]
315
Wu Y, Xia Y, Song Y, Zhang Y, Cai W. Multiscale Network Followed Network Model for Retinal Vessel Segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION – MICCAI 2018, 2018. [DOI: 10.1007/978-3-030-00934-2_14]
316
Rad RM, Saeedi P, Au J, Havelock J. Human Blastocyst's Zona Pellucida segmentation via boosting ensemble of complementary learning. INFORMATICS IN MEDICINE UNLOCKED 2018. [DOI: 10.1016/j.imu.2018.10.009]
317
318
Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing. Med Image Anal 2018; 43:214-228. [DOI: 10.1016/j.media.2017.11.004]
319
Tan JH, Fujita H, Sivaprasad S, Bhandary SV, Rao AK, Chua KC, Acharya UR. Automated segmentation of exudates, haemorrhages, microaneurysms using single convolutional neural network. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.08.050]
320
Retinal Vessel Segmentation via Structure Tensor Coloring and Anisotropy Enhancement. Symmetry (Basel) 2017. [DOI: 10.3390/sym9110276]
321
Kalaie S, Gooya A. Vascular tree tracking and bifurcation points detection in retinal images using a hierarchical probabilistic model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 151:139-149. [PMID: 28946995] [DOI: 10.1016/j.cmpb.2017.08.018]
Abstract
BACKGROUND AND OBJECTIVE Retinal vascular tree extraction plays an important role in computer-aided diagnosis and surgical operations. Junction point detection and classification provide useful information about the structure of the vascular network, facilitating objective analysis of retinal diseases. METHODS In this study, we present a new machine learning algorithm for joint classification and tracking of retinal blood vessels. Our method is based on a hierarchical probabilistic framework, where the local intensity cross-sections are classified as either junction or vessel points. Gaussian basis functions are used for intensity interpolation, and the corresponding linear coefficients are assumed to be samples from class-specific Gamma distributions. Hence, a directed Probabilistic Graphical Model (PGM) is proposed, and the hyperparameters are estimated using a Maximum Likelihood (ML) solution based on the Laplace approximation. RESULTS The performance of the proposed method is evaluated using precision and recall rates on the REVIEW database. Our experiments show that the proposed approach achieves promising results in bifurcation point detection and classification, reaching 88.67% precision and 88.67% recall. CONCLUSIONS This technique yields a classifier with high precision and recall when compared with Xu's method.
Affiliation(s)
- Soodeh Kalaie: Department of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran.
- Ali Gooya: Department of Electronic and Electrical Engineering, University of Sheffield, Sheffield, UK.
322
A Retinal Vessel Detection Approach Based on Shearlet Transform and Indeterminacy Filtering on Fundus Images. Symmetry (Basel) 2017. [DOI: 10.3390/sym9100235]
323
Khan MAU, Khan TM, Soomro TA, Mir N, Gao J. Boosting sensitivity of a retinal vessel segmentation algorithm. Pattern Anal Appl 2017. [DOI: 10.1007/s10044-017-0661-4]
324
Asl ME, Koohbanani NA, Frangi AF, Gooya A. Tracking and diameter estimation of retinal vessels using Gaussian process and Radon transform. J Med Imaging (Bellingham) 2017; 4:034006. [PMID: 28924571] [DOI: 10.1117/1.jmi.4.3.034006]
Abstract
Extraction of blood vessels in retinal images is an important step for computer-aided diagnosis of ophthalmic pathologies. We propose an approach for blood vessel tracking and diameter estimation. We hypothesize that the curvature and the diameter of blood vessels are Gaussian processes (GPs). The local Radon transform, which is robust against noise, is subsequently used to compute the features and train the GPs. By learning the kernelized covariance matrix from training data, the vessel direction and its diameter are estimated. In order to detect bifurcations, multiple GPs are used and the difference between their corresponding predicted directions is quantified. The combination of Radon features and GPs yields good performance in the presence of noise. The proposed method successfully deals with typically difficult cases such as bifurcations and the central arterial reflex, and also tracks thin vessels with high accuracy. Experiments are conducted on the publicly available DRIVE, STARE, CHASEDB1, and high-resolution fundus databases, evaluating sensitivity, specificity, and Matthews correlation coefficient (MCC). Experimental results on these datasets show that the proposed method reaches an average sensitivity of 75.67%, specificity of 97.46%, and MCC of 72.18%, which is comparable to the state-of-the-art.
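The three reported measures are standard pixel-level statistics; a minimal sketch of how sensitivity, specificity and the Matthews correlation coefficient could be computed from two binary vessel maps (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def vessel_metrics(pred, gt):
    """Sensitivity, specificity and Matthews correlation coefficient (MCC)
    from the pixel-level confusion counts of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt); tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt); fn = np.sum(~pred & gt)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return sensitivity, specificity, mcc
```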
Affiliation(s)
- Masoud Elhami Asl: Tarbiat Modares University, Faculty of Electrical and Computer Engineering, Tehran, Iran.
- Navid Alemi Koohbanani: Tarbiat Modares University, Faculty of Electrical and Computer Engineering, Tehran, Iran.
- Alejandro F Frangi: University of Sheffield, Centre for Computational Imaging and Simulation Technologies in Biomedicine, Department of Electronic and Electrical Engineering, Sheffield, United Kingdom.
- Ali Gooya: University of Sheffield, Centre for Computational Imaging and Simulation Technologies in Biomedicine, Department of Electronic and Electrical Engineering, Sheffield, United Kingdom.
325
Automatic blood vessels segmentation based on different retinal maps from OCTA scans. Comput Biol Med 2017; 89:150-161. [PMID: 28806613] [DOI: 10.1016/j.compbiomed.2017.08.008]
Abstract
The retinal vascular network reflects the health of the retina, which makes it a useful diagnostic indicator of systemic vascular disease. Therefore, the segmentation of retinal blood vessels is a powerful method for diagnosing vascular diseases. This paper presents an automatic segmentation system for retinal blood vessels from Optical Coherence Tomography Angiography (OCTA) images. The system segments blood vessels from the superficial and deep retinal maps for normal and diabetic cases. Initially, we reduced the noise and improved the contrast of the OCTA images by using the Generalized Gauss-Markov random field (GGMRF) model. Secondly, we proposed a joint Markov-Gibbs random field (MGRF) model to segment the retinal blood vessels from other background tissues; it integrates appearance and spatial models in addition to the prior probability model of OCTA images. The higher-order MGRF (HO-MGRF) model, in addition to the first-order intensity model, is used to incorporate spatial information in order to overcome the low contrast between vessels and other tissues. Finally, we refined the segmentation by extracting connected regions using a 2D connectivity filter. The proposed segmentation system was trained and tested on 47 data sets, comprising 23 normal data sets and 24 data sets from diabetic patients. To evaluate the accuracy and robustness of the proposed segmentation framework, we used three different metrics: Dice similarity coefficient (DSC), absolute vessels volume difference (VVD), and area under the curve (AUC). The results on the OCTA data sets (DSC=95.04±3.75%, VVD=8.51±1.49%, and AUC=95.20±1.52%) show the promise of the proposed segmentation approach.
326
Lee H, Tajmir S, Lee J, Zissen M, Yeshiwas BA, Alkasab TK, Choy G, Do S. Fully Automated Deep Learning System for Bone Age Assessment. J Digit Imaging 2017; 30:427-441. [PMID: 28275919] [PMCID: PMC5537090] [DOI: 10.1007/s10278-017-9955-8]
Abstract
Skeletal maturity progresses through discrete phases, a fact that is used routinely in pediatrics, where bone age assessments (BAAs) are compared to chronological age in the evaluation of endocrine and metabolic disorders. While central to many disease evaluations, little has changed to improve this tedious process since its introduction in 1950. In this study, we propose a fully automated deep learning pipeline to segment a region of interest, standardize and preprocess input radiographs, and perform BAA. Our models use an ImageNet-pretrained, fine-tuned convolutional neural network (CNN) to achieve 57.32% and 61.40% accuracy for the female and male cohorts on our held-out test images. Female test radiographs were assigned a BAA within 1 year 90.39% of the time and within 2 years 98.11% of the time. Male test radiographs were assigned a BAA within 1 year 94.18% of the time and within 2 years 99.00% of the time. Using the input occlusion method, attention maps were created which reveal what features the trained model uses to perform BAA; these correspond to what human experts look at when manually performing BAA. Finally, the fully automated BAA system was deployed in the clinical environment as a decision support system, providing more accurate and efficient BAAs at a much faster interpretation time (<2 s) than the conventional method.
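The "input occlusion" attention maps referred to above are commonly produced by sliding a neutral patch over the radiograph and recording how much the predicted class score drops; a minimal sketch (illustrative only; `model` is a hypothetical callable returning per-class probabilities, and the patch size and stride are assumptions):

```python
import numpy as np

def occlusion_map(image, model, target_class, patch=16, stride=8, fill=0.0):
    """Occlusion sensitivity: the attention value of a region is the drop in
    the target-class score when that region is replaced by a neutral patch."""
    base_score = model(image)[target_class]
    h, w = image.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = base_score - model(occluded)[target_class]
    return heat  # large values mark regions the model relies on
```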
Affiliation(s)
- Hyunkwang Lee: Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Shahein Tajmir: Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Jenny Lee: Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Maurice Zissen: Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Bethel Ayele Yeshiwas: Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Tarik K. Alkasab: Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Garry Choy: Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
- Synho Do: Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA.
327
Cunefare D, Fang L, Cooper RF, Dubra A, Carroll J, Farsiu S. Open source software for automatic detection of cone photoreceptors in adaptive optics ophthalmoscopy using convolutional neural networks. Sci Rep 2017; 7:6620. [PMID: 28747737] [PMCID: PMC5529414] [DOI: 10.1038/s41598-017-07103-0]
Abstract
Imaging with an adaptive optics scanning light ophthalmoscope (AOSLO) enables direct visualization of the cone photoreceptor mosaic in the living human retina. Quantitative analysis of AOSLO images typically requires manual grading, which is time-consuming and subjective; thus, automated algorithms are highly desirable. Previously developed automated methods are often reliant on ad hoc rules that may not be transferable between different imaging modalities or retinal locations. In this work, we present a convolutional neural network (CNN) based method for cone detection that learns features of interest directly from training data. This cone-identifying algorithm was trained and validated on separate data sets of confocal and split detector AOSLO images, with results showing performance that closely mimics the gold-standard manual process. Further, without any need for algorithmic modifications for a specific AOSLO imaging system, our fully automated multi-modality CNN-based cone detection method produced results comparable to previous automatic cone segmentation methods that utilized ad hoc rules for different applications. We have made free open-source software for the proposed method and the corresponding training and testing datasets available online.
Affiliation(s)
- David Cunefare: Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA.
- Leyuan Fang: Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA.
- Robert F Cooper: Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Psychology, University of Pennsylvania, Philadelphia, PA, 19104, USA.
- Alfredo Dubra: Department of Ophthalmology, Stanford University, Palo Alto, CA, 94303, USA.
- Joseph Carroll: Department of Biomedical Engineering, Marquette University, Milwaukee, WI, 53233, USA; Department of Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, 53226, USA.
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC, 27710, USA.
328
Torabi A, Zareayan Jahromy F, Daliri MR. Semantic Category-Based Classification Using Nonlinear Features and Wavelet Coefficients of Brain Signals. Cognit Comput 2017. [DOI: 10.1007/s12559-017-9487-z]
329
Soomro TA, Gao J, Khan T, Hani AFM, Khan MAU, Paul M. Computerised approaches for the detection of diabetic retinopathy using retinal fundus images: a survey. Pattern Anal Appl 2017. [DOI: 10.1007/s10044-017-0630-y]
330
Mo J, Zhang L. Multi-level deep supervised networks for retinal vessel segmentation. Int J Comput Assist Radiol Surg 2017; 12:2181-2193. [PMID: 28577175] [DOI: 10.1007/s11548-017-1619-0]
Abstract
PURPOSE Changes in the appearance of retinal blood vessels are an important indicator for various ophthalmologic and cardiovascular diseases, including diabetes, hypertension, arteriosclerosis, and choroidal neovascularization. Vessel segmentation from retinal images is very challenging because of low blood vessel contrast, intricate vessel topology, and the presence of pathologies such as microaneurysms and hemorrhages. To overcome these challenges, we propose a neural network-based method for vessel segmentation. METHODS A deeply supervised fully convolutional network is developed by leveraging the multi-level hierarchical features of deep networks. To improve the discriminative capability of features in the lower layers of the deep network and to guide gradient backpropagation so as to overcome gradient vanishing, deep supervision with auxiliary classifiers is incorporated in some intermediate layers of the network. Moreover, knowledge transferred from other domains is used to alleviate the issue of insufficient medical training data. The proposed approach does not rely on hand-crafted features and needs no problem-specific preprocessing or postprocessing, which reduces the impact of subjective factors. RESULTS We evaluate the proposed method on three publicly available databases, the DRIVE, STARE, and CHASE_DB1 databases. Extensive experiments demonstrate that our approach achieves better or comparable performance to state-of-the-art methods with a much faster processing speed, making it suitable for real-world clinical applications. The results of cross-training experiments demonstrate its robustness with respect to the training set. CONCLUSIONS The proposed approach segments retinal vessels accurately with a much faster processing speed and can be easily applied to other biomedical segmentation tasks.
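Deep supervision with auxiliary classifiers typically means attaching extra prediction heads to intermediate layers and adding their weighted losses to the main loss so that useful gradients reach the lower layers; a minimal PyTorch-style sketch (illustrative only; the weights, number of side outputs and toy tensors are assumptions, not the authors' architecture):

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss(main_logits, aux_logits_list, target, aux_weight=0.3):
    """Main segmentation loss plus weighted losses from auxiliary classifiers
    attached to intermediate layers; each auxiliary map is upsampled to the
    target resolution before its loss is computed."""
    loss = F.binary_cross_entropy_with_logits(main_logits, target)
    for aux_logits in aux_logits_list:
        aux_up = F.interpolate(aux_logits, size=target.shape[-2:],
                               mode="bilinear", align_corners=False)
        loss = loss + aux_weight * F.binary_cross_entropy_with_logits(aux_up, target)
    return loss

# toy usage with random tensors standing in for network outputs
target = torch.randint(0, 2, (1, 1, 64, 64)).float()
main = torch.randn(1, 1, 64, 64)
aux = [torch.randn(1, 1, 32, 32), torch.randn(1, 1, 16, 16)]
print(deeply_supervised_loss(main, aux, target).item())
```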
Affiliation(s)
- Juan Mo: College of Computer Science, Sichuan University, Chengdu, 610065, China; School of Science, Inner Mongolia University of Science and Technology, Baotou, 014010, China.
- Lei Zhang: College of Computer Science, Sichuan University, Chengdu, 610065, China.
331
Zhang L, Nogues I, Summers RM, Liu S, Yao J. DeepPap: Deep Convolutional Networks for Cervical Cell Classification. IEEE J Biomed Health Inform 2017; 21:1633-1643. [PMID: 28541229] [DOI: 10.1109/jbhi.2017.2705583]
Abstract
Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell-imaging-based cancer detection tool, in which cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on the presence of accurate cell segmentations. Despite sixty years of research in this field, accurate segmentation remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are built only upon the extraction of hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells, without prior segmentation, based on deep features, using convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is subsequently fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores of a similar set of image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99), and especially specificity (98.3%) when applied to the Herlev benchmark Pap smear dataset and evaluated using five-fold cross-validation. Similarly superior performance is also achieved on the HEMLBC (H&E stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
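The test-time aggregation step described above, averaging prediction scores over a set of nucleus-centered patches, can be written in a few lines; a minimal sketch (illustrative only; `classify_patch`, the patch size and the 0.5 decision threshold are hypothetical):

```python
import numpy as np

def aggregate_prediction(image, centers, classify_patch, half=64):
    """Average the 'abnormal' scores of several patches coarsely centered on
    the detected nuclei; the cell is flagged when the mean score passes 0.5."""
    scores = []
    for (r, c) in centers:
        patch = image[max(r - half, 0):r + half, max(c - half, 0):c + half]
        scores.append(classify_patch(patch))  # returns a probability in [0, 1]
    mean_score = float(np.mean(scores))
    return mean_score, mean_score > 0.5
```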
332
Fang L, Cunefare D, Wang C, Guymer RH, Li S, Farsiu S. Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search. BIOMEDICAL OPTICS EXPRESS 2017; 8:2732-2744. [PMID: 28663902] [PMCID: PMC5480509] [DOI: 10.1364/boe.8.002732]
Abstract
We present a novel framework combining convolutional neural networks (CNN) and graph search methods (termed CNN-GS) for the automatic segmentation of nine layer boundaries on retinal optical coherence tomography (OCT) images. CNN-GS first utilizes a CNN to extract features of specific retinal layer boundaries and trains a corresponding classifier to delineate a pilot estimate of the eight layers. Next, a graph search method uses the probability maps created from the CNN to find the final boundaries. We validated our proposed method on 60 volumes (2915 B-scans) from 20 human eyes with non-exudative age-related macular degeneration (AMD), which attested to the effectiveness of our proposed technique.
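The graph-search stage operates on per-column boundary probability maps produced by the CNN; a common simplification is a column-to-column dynamic-programming shortest path that keeps the boundary smooth, sketched below (illustrative only; the actual CNN-GS graph construction and edge costs differ, and the penalties are assumptions):

```python
import numpy as np

def trace_boundary(prob_map, max_jump=2, smooth_penalty=0.5):
    """Find one layer boundary (a row index per column) through a boundary
    probability map by dynamic programming: maximize summed probability while
    penalizing large row jumps between neighboring columns."""
    rows, cols = prob_map.shape
    cost = -np.log(prob_map + 1e-8)        # low cost where the boundary is likely
    acc = cost[:, 0].copy()                # best cumulative cost ending at each row
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        new_acc = np.full(rows, np.inf)
        for r in range(rows):
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            jumps = np.abs(np.arange(lo, hi) - r)
            cand = acc[lo:hi] + smooth_penalty * jumps
            k = int(np.argmin(cand))
            new_acc[r] = cand[k] + cost[r, c]
            back[r, c] = lo + k
        acc = new_acc
    boundary = np.zeros(cols, dtype=int)   # backtrack the cheapest path
    boundary[-1] = int(np.argmin(acc))
    for c in range(cols - 1, 0, -1):
        boundary[c - 1] = back[boundary[c], c]
    return boundary
```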
Affiliation(s)
- Leyuan Fang: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; College of Electrical and Information Engineering, Hunan University, Changsha 410082, China.
- David Cunefare: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA.
- Chong Wang: College of Electrical and Information Engineering, Hunan University, Changsha 410082, China.
- Robyn H. Guymer: Centre for Eye Research Australia, University of Melbourne, Department of Surgery, Royal Victorian Eye and Ear Hospital, Victoria 3002, Australia.
- Shutao Li: College of Electrical and Information Engineering, Hunan University, Changsha 410082, China.
- Sina Farsiu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA.
333
Jordan KC, Menolotto M, Bolster NM, Livingstone IAT, Giardini ME. A review of feature-based retinal image analysis. EXPERT REVIEW OF OPHTHALMOLOGY 2017. [DOI: 10.1080/17469899.2017.1307105]
334
Srinidhi CL, Aparna P, Rajan J. Recent Advancements in Retinal Vessel Segmentation. J Med Syst 2017; 41:70. [DOI: 10.1007/s10916-017-0719-2]
335
Gu L, Zhang X, Zhao H, Li H, Cheng L. Segment 2D and 3D Filaments by Learning Structured and Contextual Features. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:596-606. [PMID: 27831862] [DOI: 10.1109/tmi.2016.2623357]
Abstract
We focus on the challenging problem of filamentary structure segmentation in both 2D and 3D images, including retinal vessels and neurons, among others. Despite the increasing amount of effort on learning-based methods to tackle this problem, there is still a lack of proper data-driven feature construction mechanisms that sufficiently encode contextual labelling information, which may hinder segmentation performance. This observation prompts us to propose a data-driven approach to learn structured and contextual features in this paper. The structured features aim to integrate local spatial label patterns into the feature space, thus giving the subsequent tree classifiers the capability to group training examples with similar structure into the same leaf node when splitting the feature space, and further yielding contextual features that capture more of the global contextual information. Empirical evaluations demonstrate that our approach outperforms the state of the art on well-regarded testbeds over a variety of applications. Our code is also made publicly available in support of open-source research activities.
336
Meyer MI, Costa P, Galdran A, Mendonça AM, Campilho A. A Deep Neural Network for Vessel Segmentation of Scanning Laser Ophthalmoscopy Images. LECTURE NOTES IN COMPUTER SCIENCE 2017. [DOI: 10.1007/978-3-319-59876-5_56]
337