51
Shang Q, Zhao Y, Chen Z, Hao H, Li F, Zhang X, Liu J. Automated Iris Segmentation from Anterior Segment OCT Images with Occludable Angles via Local Phase Tensor. Annu Int Conf IEEE Eng Med Biol Soc 2019:4745-4749. [PMID: 31946922] [DOI: 10.1109/embc.2019.8857336]
Abstract
Morphological changes in the iris are one of the major causes of angle-closure glaucoma, and an anteriorly bowed iris may be further associated with a greater risk of disease progression from primary angle-closure suspect (PACS) to chronic primary angle-closure glaucoma (CPACG). Consequently, automated detection of abnormalities in the iris region is of great importance in the management of glaucoma. In this paper, we present a new method for extracting the iris region using local phase tensor-based curvilinear structure enhancement, and apply it to anterior segment optical coherence tomography (AS-OCT) images in the presence of an occludable iridocorneal angle. The proposed method is evaluated on a dataset of 200 anterior chamber angle (ACA) images, and the experimental results show that it outperforms the existing state-of-the-art method in applicability, effectiveness, and accuracy.
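As a rough illustration of the curvilinear-structure enhancement step the abstract describes, the sketch below uses a single-scale Hessian (Frangi-style) ridge measure rather than the authors' local phase tensor; the function name, the single scale, and the beta/c parameters are assumptions for illustration only.

```python
# A hedged stand-in for curvilinear-structure enhancement: single-scale
# Frangi-style vesselness/ridge measure (not the paper's local phase tensor).
import numpy as np
from scipy import ndimage

def ridge_enhance(image, sigma=2.0, beta=0.5, c=15.0):
    """Enhance bright-on-dark curvilinear structures in a 2-D grayscale image."""
    img = image.astype(float)
    # Scale-normalized second derivatives (Hessian entries).
    Ixx = sigma ** 2 * ndimage.gaussian_filter(img, sigma, order=(0, 2))
    Iyy = sigma ** 2 * ndimage.gaussian_filter(img, sigma, order=(2, 0))
    Ixy = sigma ** 2 * ndimage.gaussian_filter(img, sigma, order=(1, 1))
    # Hessian eigenvalues, ordered so that |l1| <= |l2|.
    half_trace = (Ixx + Iyy) / 2.0
    root = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    l1, l2 = half_trace + root, half_trace - root
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)          # blob- vs. line-likeness
    s = np.sqrt(l1 ** 2 + l2 ** 2)                  # second-order "structureness"
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                                 # keep only bright ridges (l2 < 0)
    return v

# enhanced = ridge_enhance(asoct_slice, sigma=3.0)   # hypothetical AS-OCT B-scan
```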
52
Zhao R, Zhao Y, Chen Z, Zhao Y, Yang J, Hu Y, Cheng J, Liu J. Speckle Reduction in Optical Coherence Tomography via Super-Resolution Reconstruction. Annu Int Conf IEEE Eng Med Biol Soc 2019:5589-5592. [PMID: 31947122] [DOI: 10.1109/embc.2019.8856445]
Abstract
Reducing speckle noise in optical coherence tomography (OCT) images of the human retina is a fundamental step toward better visualization and analysis in retinal imaging, and thus supports the examination, diagnosis, and treatment of many eye diseases. In this study, we propose a new method for speckle reduction in OCT images using super-resolution technology. It merges multiple images of the same scene acquired with sub-pixel movements and restores the signals missing from any single pixel, which significantly improves image quality. The proposed method is evaluated on a dataset of 20 OCT volumes (5120 images), using the mean squared error, peak signal-to-noise ratio, and mean structural similarity index, with high-quality line-scan images as the reference. The experimental results show that the proposed method outperforms existing state-of-the-art approaches in applicability, effectiveness, and accuracy.
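The core idea, merging sub-pixel-shifted acquisitions of the same scene, can be illustrated with a simple shift-and-add reconstruction. The sketch below is not the paper's reconstruction model; it assumes repeated B-scans of the same location and uses phase correlation for the sub-pixel registration step.

```python
# A minimal shift-and-add sketch: estimate sub-pixel shifts between repeated
# noisy B-scans, align them, and average to suppress speckle.
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def shift_and_add(frames, upsample=4):
    """frames: list of 2-D arrays of the same scene with small sub-pixel offsets."""
    ref = frames[0].astype(float)
    aligned = [ref]
    for frame in frames[1:]:
        # Sub-pixel shift of `frame` relative to the reference frame.
        shift, _, _ = phase_cross_correlation(ref, frame.astype(float),
                                              upsample_factor=upsample)
        aligned.append(ndimage.shift(frame.astype(float), shift,
                                     order=3, mode="nearest"))
    # Averaging aligned frames reduces uncorrelated speckle roughly as 1/sqrt(N).
    return np.mean(aligned, axis=0)

# Usage (hypothetical stack of repeated OCT B-scans):
# denoised = shift_and_add([bscan_1, bscan_2, bscan_3, bscan_4])
```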
53
Mei K, Hu B, Fei B, Qin B. Phase asymmetry ultrasound despeckling with fractional anisotropic diffusion and total variation. IEEE Trans Image Process 2019; 29. [PMID: 31751240] [PMCID: PMC7370834] [DOI: 10.1109/tip.2019.2953361]
Abstract
We propose an ultrasound speckle filtering method that both preserves various edge features and filters tissue-dependent complex speckle noise in ultrasound images. The key idea is to detect these edges using a phase congruency-based edge significance measure called phase asymmetry (PAS), which is invariant to the intensity amplitude of edges, taking the value 0 in non-edge smooth regions and 1 at an ideal step edge, with intermediate values at slowly varying ramp edges. By leveraging the PAS metric to design weighting coefficients that balance fractional-order anisotropic diffusion and total variation (TV) filters in the TV cost function, we propose a new fractional TV framework that achieves the best despeckling performance with ramp edge preservation while reducing the staircase effect produced by integer-order filters. We then exploit the PAS metric to design a new fractional-order diffusion coefficient that properly preserves low-contrast edges during diffusion filtering. Finally, unlike fixed fractional-order diffusion filters, an adaptive fractional order is introduced based on the PAS metric to enhance weak edges in the spatially transitional areas between objects. The proposed fractional TV model is minimized using the gradient descent method to obtain the final denoised image. The experimental results, together with a real application to ultrasound breast image segmentation, show that the proposed method outperforms other state-of-the-art ultrasound despeckling filters in both speckle reduction and feature preservation, in terms of visual evaluation and quantitative indices. The best feature similarity index scores reach 0.867, 0.844, and 0.834 under three different noise levels, while the best breast ultrasound segmentation accuracies in terms of the mean and median Dice similarity coefficients are 96.25% and 96.15%, respectively.
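A phase-asymmetry map of the kind described above can be sketched from the monogenic signal, a common way to obtain the even and odd responses it compares. The construction below is illustrative: the paper's exact filter bank and noise compensation differ, and the band-pass parameters and percentile threshold are assumptions.

```python
# A minimal monogenic-signal sketch of a phase-asymmetry (PAS) edge measure.
import numpy as np

def phase_asymmetry(image, sigma_lo=1.0, sigma_hi=4.0, eps=1e-6):
    """Rough PAS map in [0, 1]: near 1 at step edges, near 0 in smooth regions."""
    img = image.astype(float)
    rows, cols = img.shape
    F = np.fft.fft2(img)
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.hypot(u, v)
    radius[0, 0] = 1.0                                  # avoid divide-by-zero at DC
    # Band-pass filter (difference of Gaussians in the frequency domain).
    bp = (np.exp(-2 * (np.pi * sigma_lo * radius) ** 2)
          - np.exp(-2 * (np.pi * sigma_hi * radius) ** 2))
    Fb = F * bp
    even = np.real(np.fft.ifft2(Fb))                    # symmetric (line-like) response
    # Odd response from the Riesz transform (monogenic signal).
    rx = np.real(np.fft.ifft2(Fb * (1j * u / radius)))
    ry = np.real(np.fft.ifft2(Fb * (1j * v / radius)))
    odd = np.hypot(rx, ry)                              # antisymmetric (step-like) response
    energy = np.hypot(even, odd)
    noise_t = np.percentile(odd, 70)                    # crude noise-floor estimate
    return np.maximum(odd - np.abs(even) - noise_t, 0) / (energy + eps)

# pas = phase_asymmetry(ultrasound_image)   # hypothetical 2-D B-mode frame
```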
Affiliation(s)
- Kunqiang Mei
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Bin Hu
- Department of Ultrasound in Medicine, Shanghai Jiao Tong University Affiliated Sixth People’s Hospital, Shanghai Institute of Ultrasound in Medicine, Shanghai 200233, China
- Baowei Fei
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX 75080, USA
- Binjie Qin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
54
Song S, Frangi AF, Yang J, Ai D, Du C, Huang Y, Song H, Zhang L, Han Y, Wang Y. Patch-Based Adaptive Background Subtraction for Vascular Enhancement in X-Ray Cineangiograms. IEEE J Biomed Health Inform 2019; 23:2563-2575. [DOI: 10.1109/jbhi.2019.2892072]
55
Cherukuri V, G VKB, Bala R, Monga V. Deep Retinal Image Segmentation with Regularization Under Geometric Priors. IEEE Trans Image Process 2019; 29:2552-2567. [PMID: 31613766] [DOI: 10.1109/tip.2019.2946078]
Abstract
Vessel segmentation of retinal images is a key diagnostic capability in ophthalmology. The problem faces several challenges, including low contrast, variable vessel size and thickness, and the presence of interfering pathology such as micro-aneurysms and hemorrhages. Early approaches employed hand-crafted filters to capture vessel structures, followed by morphological post-processing. More recently, deep learning techniques have been employed with significantly enhanced segmentation accuracy. We propose a novel domain-enriched deep network that consists of two components: 1) a representation network that learns geometric features specific to retinal images, and 2) a custom-designed, computationally efficient residual task network that uses the features obtained from the representation layer to perform pixel-level segmentation. The representation and task networks are jointly learned for any given training set. To obtain physically meaningful and practically effective representation filters, we propose two new constraints inspired by the expected prior structure of these filters: 1) an orientation constraint that promotes geometric diversity of curvilinear features, and 2) a data-adaptive noise regularizer that penalizes false positives. Multi-scale extensions are developed to enable accurate detection of thin vessels. Experiments performed on three challenging benchmark databases under a variety of training scenarios show that the proposed prior-guided deep network outperforms state-of-the-art alternatives as measured by common evaluation metrics, while being more economical in network size and inference time.
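To make the two regularizers concrete, the PyTorch sketch below shows one plausible form of each: a diversity penalty on the representation-layer filters and a term penalizing vessel responses on non-vessel pixels. These are illustrative stand-ins inspired by the abstract, not the paper's exact formulations; `model.rep_conv` and the 0.1 weights are hypothetical.

```python
# Hedged sketches of an orientation-diversity penalty and a false-positive penalty.
import torch
import torch.nn.functional as F

def orientation_diversity_penalty(weights):
    """weights: conv kernels of shape (num_filters, in_ch, k, k)."""
    flat = F.normalize(weights.flatten(1), dim=1)
    gram = flat @ flat.t()                            # pairwise cosine similarities
    off_diag = gram - torch.eye(gram.size(0), device=gram.device)
    return (off_diag ** 2).mean()                     # small when filters are dissimilar

def false_positive_penalty(pred, target):
    """pred: sigmoid vessel probabilities; target: binary vessel mask."""
    return (pred * (1.0 - target)).mean()             # responses on non-vessel pixels

# Typical use inside a training step (hypothetical model and weighting):
# loss = F.binary_cross_entropy(pred, target) \
#        + 0.1 * orientation_diversity_penalty(model.rep_conv.weight) \
#        + 0.1 * false_positive_penalty(pred, target)
```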
56
Alom MZ, Yakopcic C, Hasan M, Taha TM, Asari VK. Recurrent residual U-Net for medical image segmentation. J Med Imaging (Bellingham) 2019; 6:014006. [PMID: 30944843] [PMCID: PMC6435980] [DOI: 10.1117/1.jmi.6.1.014006]
Abstract
Deep learning (DL)-based semantic segmentation methods have provided state-of-the-art performance in the past few years. More specifically, these techniques have been successfully applied to medical image classification, segmentation, and detection tasks. One DL technique, U-Net, has become one of the most popular for these applications. We propose a recurrent U-Net model and a recurrent residual U-Net model, named RU-Net and R2U-Net, respectively. The proposed models utilize the power of U-Net, residual networks, and recurrent convolutional neural networks. There are several advantages to using these architectures for segmentation tasks. First, a residual unit helps when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, they allow us to design better U-Net architectures with the same number of network parameters but better performance for medical image segmentation. The proposed models are tested on three benchmark tasks: blood vessel segmentation in retinal images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including a variant of a fully convolutional network called SegNet, U-Net, and residual U-Net.
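A compact PyTorch sketch of the recurrent residual convolutional block that replaces U-Net's plain double-convolution block is given below. Details such as batch-norm placement, the recurrence depth t, and the 1x1 channel projection are simplifications, not the paper's exact implementation.

```python
# Simplified recurrent residual convolutional unit in the spirit of R2U-Net.
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    """Apply the same conv t+1 times, feeding the output back as added input."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)          # recurrent feature accumulation
        return out

class R2Block(nn.Module):
    """Two recurrent conv units wrapped in a residual (identity) connection."""
    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)   # match channel count
        self.body = nn.Sequential(RecurrentConv(out_ch, t), RecurrentConv(out_ch, t))

    def forward(self, x):
        x = self.proj(x)
        return x + self.body(x)               # residual connection

# x = torch.randn(1, 3, 64, 64); y = R2Block(3, 32)(x)   # -> (1, 32, 64, 64)
```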
Affiliation(s)
- Md Zahangir Alom
- University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Chris Yakopcic
- University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Tarek M. Taha
- University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
- Vijayan K. Asari
- University of Dayton, Department of Electrical and Computer Engineering, Dayton, Ohio, United States
57
CS-Net: Channel and Spatial Attention Network for Curvilinear Structure Segmentation. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32239-7_80]
58
Cheng J, Li Z, Gu Z, Fu H, Wong DWK, Liu J. Structure-Preserving Guided Retinal Image Filtering and Its Application for Optic Disk Analysis. IEEE Trans Med Imaging 2018; 37:2536-2546. [PMID: 29994522] [DOI: 10.1109/tmi.2018.2838550]
Abstract
Retinal fundus photographs have been used in the diagnosis of many ocular diseases such as glaucoma, pathological myopia, age-related macular degeneration, and diabetic retinopathy. With the development of computer science, computer-aided diagnosis has been developed to process and analyze retinal images automatically. One of the challenges in this analysis is that the quality of the retinal image is often degraded. For example, a cataract in the human lens will attenuate the retinal image, just as a cloudy camera lens reduces the quality of a photograph. It often obscures details in the retinal image and poses challenges for retinal image processing and analysis tasks. In this paper, we approximate the degradation of retinal images as a combination of human-lens attenuation and scattering. A novel structure-preserving guided retinal image filtering (SGRIF) method is then proposed to restore images based on the attenuation and scattering model. The proposed SGRIF consists of a global structure-transferring step and a global edge-preserving smoothing step. Our results show that the proposed SGRIF method is able to improve the contrast of retinal images, as measured by the histogram flatness measure, histogram spread, and variability of local luminosity. In addition, we further explored the benefits of SGRIF for subsequent retinal image processing and analysis tasks. In two applications, deep learning-based optic cup segmentation and sparse learning-based cup-to-disk ratio (CDR) computation, our results show that more accurate optic cup segmentation and CDR measurements are achieved from images processed by SGRIF.
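The edge-preserving smoothing step builds on guided image filtering; as a reference point, the sketch below implements the standard guided filter only. It omits SGRIF's structure-transfer step and the attenuation/scattering model, and the radius and eps values are illustrative.

```python
# Standard guided filter (He et al.) as a minimal edge-preserving smoothing sketch.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving smoothing of `src` steered by the structure of `guide`."""
    I, p = guide.astype(float), src.astype(float)
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size)
    mean_I, mean_p = mean(I), mean(p)
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                 # local linear coefficients
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)               # filtered output q

# Usage (hypothetical degraded fundus channel `img`, self-guided smoothing):
# restored_channel = guided_filter(img, img, radius=8, eps=1e-3)
```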
59
Yang Y, Shao F, Fu Z, Fu R. Blood vessel segmentation of fundus images via cross-modality dictionary learning. Appl Opt 2018; 57:7287-7295. [PMID: 30182990] [DOI: 10.1364/ao.57.007287]
Abstract
Automated retinal blood vessel segmentation is important for the early computer-aided diagnosis of some ophthalmological diseases and cardiovascular disorders. Traditional supervised vessel segmentation methods are usually based on pixel classification, which categorizes all pixels into vessel and non-vessel pixels. In this paper, we propose a new retinal vessel segmentation method that extracts vessels through vessel-block segmentation via cross-modality dictionary learning. We first enhance the structural information of vessels using multi-scale filtering. Then, cross-modality description and segmentation dictionaries are learned to build the intrinsic relationship between the enhanced vessels and the labeled ground-truth vessels for the purpose of vessel segmentation. Effective pre-processing and post-processing are also adopted to improve performance. Experimental results on three benchmark data sets demonstrate that the proposed method achieves good segmentation results.
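One common way to realize such cross-modality (coupled) dictionary learning is to learn a joint dictionary over concatenated image and label patches, then code test patches with the image half of the dictionary and reconstruct labels with the other half. The sketch below follows that generic recipe with scikit-learn; the atom count, sparsity weight, and patch handling are assumptions, not necessarily the authors' optimization.

```python
# Hedged sketch of coupled dictionary learning for vessel-patch segmentation.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

def train_coupled_dictionary(img_patches, label_patches, n_atoms=256, alpha=1.0):
    """img_patches, label_patches: (n_samples, patch_dim) arrays, row-aligned."""
    joint = np.hstack([img_patches, label_patches])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                       batch_size=64, random_state=0).fit(joint)
    d_img = img_patches.shape[1]
    return dico.components_[:, :d_img], dico.components_[:, d_img:]

def segment_patches(img_patches, D_img, D_lab, alpha=1.0):
    coder = SparseCoder(dictionary=D_img, transform_algorithm="lasso_lars",
                        transform_alpha=alpha)
    codes = coder.transform(img_patches)       # sparse codes from the image half
    return codes @ D_lab                       # reconstruct vessel-probability patches

# D_img, D_lab = train_coupled_dictionary(X_train, Y_train)   # hypothetical patch arrays
# vessel_patches = segment_patches(X_test, D_img, D_lab)
```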
60
Na T, Xie J, Zhao Y, Zhao Y, Liu Y, Wang Y, Liu J. Retinal vascular segmentation using superpixel-based line operator and its application to vascular topology estimation. Med Phys 2018; 45:3132-3146. [PMID: 29744887] [DOI: 10.1002/mp.12953]
Abstract
PURPOSE: Automatic methods for analyzing retinal vascular networks, such as retinal blood vessel detection, vascular network topology estimation, and artery/vein classification, are of great assistance to the ophthalmologist in the diagnosis and treatment of a wide spectrum of diseases. METHODS: We propose a new framework for precisely segmenting retinal vasculatures, constructing the retinal vascular network topology, and separating arteries and veins. A nonlocal total variation-inspired Retinex model is employed to remove image intensity inhomogeneities and relatively poor contrast. For better generalizability and segmentation performance, a superpixel-based line operator is proposed to distinguish between lines and edges, thus allowing more tolerance in the position of the respective contours. The concept of dominant sets clustering is adopted to estimate retinal vessel topology and classify the vessel network into arteries and veins. RESULTS: The proposed segmentation method yields competitive results on three public data sets (STARE, DRIVE, and IOSTAR), with superior performance compared with unsupervised segmentation methods: accuracies of 0.954, 0.957, and 0.964, respectively. The topology estimation approach has been applied to five public databases (DRIVE, STARE, INSPIRE, IOSTAR, and VICAVR) and achieved high accuracies of 0.830, 0.910, 0.915, 0.928, and 0.889, respectively. The accuracies of artery/vein classification based on the estimated vascular topology on three public databases (INSPIRE, DRIVE, and VICAVR) are 0.909, 0.910, and 0.907, respectively. CONCLUSIONS: The experimental results show that the proposed framework effectively addresses the crossover problem, a bottleneck issue in segmentation and vascular topology reconstruction. The vascular topology information significantly improves the accuracy of artery/vein classification.
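The line operator underlying the method can be illustrated in its classic pixel-wise form: the strongest oriented-line average minus the local window average. The sketch below is that basic operator only, not the superpixel-based variant or the dominant-sets topology step; the line length and angle count are illustrative.

```python
# Basic pixel-wise line operator for vessel enhancement (Ricci/Perfetti style).
import numpy as np
from scipy import ndimage

def line_operator(image, length=15, n_angles=12):
    img = image.astype(float)
    window_mean = ndimage.uniform_filter(img, size=length)
    best = np.full(img.shape, -np.inf)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        # Build a 1-pixel-wide oriented line kernel of the given length.
        kernel = np.zeros((length, length))
        c = length // 2
        for r in np.linspace(-c, c, 2 * length):
            row = int(round(c + r * np.sin(theta)))
            col = int(round(c + r * np.cos(theta)))
            kernel[row, col] = 1.0
        kernel /= kernel.sum()
        line_mean = ndimage.convolve(img, kernel, mode="nearest")
        best = np.maximum(best, line_mean)
    return best - window_mean          # high where an oriented line stands out

# On the inverted green channel of a fundus image (vessels bright), thresholding
# this map gives a rough vessel mask:
# strength = line_operator(255 - green_channel)
```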
Collapse
Affiliation(s)
- Tong Na
- Georgetown Preparatory School, North Bethesda, 20852, USA; Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Jianyang Xie
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Yifan Zhao
- School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, MK43 0AL, UK
- Yue Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Jiang Liu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China