1. Zhou W, Wang X, Yang X, Hu Y, Yi Y. Skeleton-guided multi-scale dual-coordinate attention aggregation network for retinal blood vessel segmentation. Comput Biol Med 2024;181:109027. PMID: 39178808. DOI: 10.1016/j.compbiomed.2024.109027.
Abstract
Deep learning plays a pivotal role in retinal blood vessel segmentation for medical diagnosis. Despite their significant efficacy, these techniques face two major challenges. Firstly, they often neglect the severe class imbalance in fundus images, where thin vessels in the foreground are proportionally minimal. Secondly, they are susceptible to poor image quality and blurred vessel edges, resulting in discontinuities or breaks in vascular structures. In response, this paper proposes the Skeleton-guided Multi-scale Dual-coordinate Attention Aggregation (SMDAA) network for retinal vessel segmentation. SMDAA comprises three innovative modules: Dual-coordinate Attention (DCA), Unbalanced Pixel Amplifier (UPA), and Vessel Skeleton Guidance (VSG). DCA, integrating Multi-scale Coordinate Feature Aggregation (MCFA) and Scale Coordinate Attention Decoding (SCAD), meticulously analyzes vessel structures across various scales, adept at capturing intricate details, thereby significantly enhancing segmentation accuracy. To address class imbalance, we introduce UPA, dynamically allocating more attention to misclassified pixels, ensuring precise extraction of thin and small blood vessels. Moreover, to preserve vessel structure continuity, we integrate vessel anatomy and develop the VSG module to connect fragmented vessel segments. Additionally, a Feature-level Contrast (FCL) loss is introduced to capture subtle nuances within the same category, enhancing the fidelity of retinal blood vessel segmentation. Extensive experiments on three public datasets (DRIVE, STARE, and CHASE_DB1) demonstrate superior performance in comparison to current methods. The code is available at https://github.com/wangwxr/SMDAA_NET.
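The UPA module's strategy of dynamically allocating more attention to misclassified pixels is reminiscent of focal-loss-style weighting. A minimal sketch of that general idea (not the authors' implementation; `gamma` is an assumed hyper-parameter):

```python
import numpy as np

def focal_style_pixel_weights(probs, labels, gamma=2.0):
    """Upweight pixels the model currently gets wrong.

    probs  : predicted foreground probabilities, shape (H, W)
    labels : binary ground truth, shape (H, W)
    Returns per-pixel weights in [0, 1]; confidently correct pixels
    get weights near 0, misclassified pixels get weights near 1.
    """
    # probability assigned to the *true* class at each pixel
    p_true = np.where(labels == 1, probs, 1.0 - probs)
    return (1.0 - p_true) ** gamma

probs = np.array([[0.9, 0.2],
                  [0.6, 0.1]])
labels = np.array([[1, 1],
                   [0, 0]])
w = focal_style_pixel_weights(probs, labels)
# The misclassified thin-vessel pixel (prob 0.2, label 1) receives a
# much larger weight (0.64) than the confident, correct pixels (0.01).
```

Under such a scheme, thin vessels that the network keeps missing dominate the loss, which is one common way to counter the foreground/background imbalance the abstract describes.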
Affiliation(s)
- Wei Zhou
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Xiaorui Wang
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Xuekun Yang
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Yangtao Hu
- Department of Ophthalmology, The 908th Hospital of Chinese People's Liberation Army Joint Logistic Support Force, Nanchang, China
- Yugen Yi
- School of Software, Jiangxi Normal University, Nanchang, China
2. Yang Y, Yue S, Quan H. CS-UNet: Cross-scale U-Net with Semantic-position dependencies for retinal vessel segmentation. Network (Bristol, England) 2024;35:134-153. PMID: 38050997. DOI: 10.1080/0954898x.2023.2288858.
Abstract
Accurate retinal vessel segmentation is a prerequisite for the early recognition and treatment of retina-related diseases. However, segmenting retinal vessels remains challenging due to the intricate vessel tree in fundus images, which contains a significant number of tiny vessels, low contrast, and lesion interference. For this task, the u-shaped architecture (U-Net) has become the de facto standard and has achieved considerable success. However, U-Net is a pure convolutional network, which usually shows limitations in global modelling. In this paper, we propose a novel Cross-scale U-Net with Semantic-position Dependencies (CS-UNet) for retinal vessel segmentation. In particular, we first design a Semantic-position Dependencies Aggregator (SPDA) and incorporate it into each layer of the encoder to better focus on global contextual information by integrating semantic and positional relationships. To endow the model with the capability of cross-scale interaction, the Cross-scale Relation Refine Module (CSRR) is designed to dynamically select the information associated with the vessels, which helps guide the up-sampling operation. Finally, we evaluated CS-UNet on three public datasets: DRIVE, CHASE_DB1, and STARE. Compared to most existing state-of-the-art methods, CS-UNet demonstrated better performance.
Affiliation(s)
- Ying Yang
- College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Shengbin Yue
- College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Yunnan Provincial Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming, Yunnan, China
- Haiyan Quan
- College of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
3. Fakhouri HN, Alawadi S, Awaysheh FM, Alkhabbas F, Zraqou J. A cognitive deep learning approach for medical image processing. Sci Rep 2024;14:4539. PMID: 38402321. PMCID: PMC10894297. DOI: 10.1038/s41598-024-55061-1.
Abstract
In ophthalmic diagnostics, achieving precise segmentation of retinal blood vessels is a critical yet challenging task, primarily due to the complex nature of retinal images. The intricacies of these images often hinder the accuracy and efficiency of segmentation processes. To overcome these challenges, we introduce the cognitive DL retinal blood vessel segmentation (CoDLRBVS) model, a novel hybrid that synergistically combines the deep learning capabilities of the U-Net architecture with a suite of advanced image processing techniques. This model uniquely integrates a preprocessing phase using a matched filter (MF) for feature enhancement and a post-processing phase employing morphological techniques (MT) for refining the segmentation output. The model also incorporates multi-scale line detection and scale space methods to enhance its segmentation capabilities. Hence, CoDLRBVS leverages the strengths of these combined approaches within the cognitive computing framework, endowing the system with human-like adaptability and reasoning. This strategic integration enables the model to emphasize blood vessels, segment them accurately, and detect vessels of varying sizes proficiently. CoDLRBVS achieves a notable mean accuracy of 96.7%, precision of 96.9%, sensitivity of 99.3%, and specificity of 80.4% across all of the studied datasets, including DRIVE, STARE, HRF, retinal blood vessel and Chase-DB1. CoDLRBVS has been compared with different models, and the resulting metrics surpass the compared models and establish a new benchmark in retinal vessel segmentation. The success of CoDLRBVS underscores its significant potential in advancing medical image processing, particularly in the realm of retinal blood vessel segmentation.
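A matched filter for vessel enhancement, of the kind used in the CoDLRBVS preprocessing phase, is classically a zero-mean Gaussian line profile correlated with the image at several orientations. A minimal single-orientation sketch (assumed parameters, not the paper's code):

```python
import numpy as np

def matched_filter_kernel(sigma=2.0, length=9):
    """Zero-mean Gaussian matched-filter kernel for dark, line-like
    vessels (one orientation only; in practice the kernel is rotated
    and the maximum response over angles is kept). Illustrative."""
    half = int(3 * sigma)
    x = np.arange(-half, half + 1)
    profile = -np.exp(-(x ** 2) / (2 * sigma ** 2))  # dark-line template
    profile -= profile.mean()        # zero mean: flat background -> 0
    return np.tile(profile, (length, 1))

k = matched_filter_kernel()
# A flat background patch yields (near) zero response, so only
# vessel-shaped intensity profiles produce strong filter output.
flat = np.ones_like(k)
response = float((k * flat).sum())
```

Because the kernel is zero-mean, homogeneous background regions are suppressed while Gaussian-shaped vessel cross-sections correlate strongly, which is the property that makes matched filtering useful as a feature-enhancement step.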
Affiliation(s)
- Hussam N Fakhouri
- Department of Data Science and Artificial Intelligence, The University of Petra, Amman, Jordan
- Sadi Alawadi
- Department of Computer Science, Blekinge Institute of Technology, Karlskrona, Sweden
- Computer Graphics and Data Engineering (COGRADE) Research Group, University of Santiago de Compostela, Santiago de Compostela, Spain
- Feras M Awaysheh
- Institute of Computer Science, Delta Research Centre, University of Tartu, Tartu, Estonia
- Fahed Alkhabbas
- Internet of Things and People Research Center, Malmö University, Malmö, Sweden
- Department of Computer Science and Media Technology, Malmö University, Malmö, Sweden
- Jamal Zraqou
- Virtual and Augmented Reality Department, Faculty of Information Technology, University of Petra, Amman, Jordan
4. Sebastian A, Elharrouss O, Al-Maadeed S, Almaadeed N. GAN-Based Approach for Diabetic Retinopathy Retinal Vasculature Segmentation. Bioengineering (Basel) 2023;11:4. PMID: 38275572. PMCID: PMC10812988. DOI: 10.3390/bioengineering11010004.
Abstract
Most diabetes patients develop a condition known as diabetic retinopathy after having diabetes for a prolonged period. This ailment can damage the blood vessels behind the retina and may even progress to loss of vision. Hence, doctors advise diabetes patients to screen their retinas regularly. Such fundus examination is time-consuming, and few ophthalmologists are available to check the ever-increasing number of diabetes patients. To address this issue, several computer-aided automated systems are being developed with the help of techniques such as deep learning. Extracting the retinal vasculature is a significant step in developing such systems. This paper presents a GAN-based model to perform retinal vasculature segmentation. The model achieves good results on the ARIA, DRIVE, and HRF datasets.
Affiliation(s)
- Anila Sebastian
- Computer Science and Engineering Department, Qatar University, Doha P.O. Box 2713, Qatar
5. Quiñones R, Samal A, Das Choudhury S, Muñoz-Arriola F. OSC-CO2: coattention and cosegmentation framework for plant state change with multiple features. Front Plant Sci 2023;14:1211409. PMID: 38023863. PMCID: PMC10644038. DOI: 10.3389/fpls.2023.1211409.
Abstract
Cosegmentation and coattention are extensions of traditional segmentation methods aimed at detecting a common object (or objects) in a group of images. Current cosegmentation and coattention methods are ineffective for objects, such as plants, that change their morphological state while being captured in different modalities and views. The Object State Change using Coattention-Cosegmentation (OSC-CO2) is an end-to-end unsupervised deep-learning framework that enhances traditional segmentation techniques by processing, analyzing, selecting, and combining candidate segmentation results likely to contain most of the target object's pixels, and then producing a final segmented image. The framework leverages coattention-based convolutional neural networks (CNNs) and cosegmentation-based dense Conditional Random Fields (CRFs) to address segmentation accuracy in high-dimensional plant imagery with evolving plant objects. The efficacy of OSC-CO2 is demonstrated using plant growth sequences imaged with infrared, visible, and fluorescence cameras in multiple views on a remote-sensing, high-throughput phenotyping platform, and is evaluated using Jaccard index and precision measures. We also introduce CosegPP+, a structured dataset that can provide quantitative information on the efficacy of our framework. Results show that OSC-CO2 outperformed state-of-the-art segmentation and cosegmentation methods, improving segmentation accuracy by 3% to 45%.
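The Jaccard index and precision measures used to evaluate OSC-CO2 follow their standard definitions over binary masks; a small reference sketch:

```python
import numpy as np

def jaccard_and_precision(pred, truth):
    """Jaccard index (intersection over union) and precision for a
    predicted binary mask against ground truth. Standard definitions,
    shown here only to make the evaluation criteria concrete."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    jaccard = inter / union if union else 1.0
    precision = inter / pred.sum() if pred.sum() else 1.0
    return jaccard, precision

pred  = np.array([[1, 1], [0, 1]])
truth = np.array([[1, 0], [0, 1]])
j, p = jaccard_and_precision(pred, truth)
# intersection = 2 pixels, union = 3 pixels, predicted = 3 pixels,
# so j = 2/3 and p = 2/3 for this toy example.
```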
Affiliation(s)
- Rubi Quiñones
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- Computer Science Department, Southern Illinois University Edwardsville, Edwardsville, IL, United States
- Ashok Samal
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- Sruti Das Choudhury
- School of Computing, University of Nebraska-Lincoln, Lincoln, NE, United States
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Francisco Muñoz-Arriola
- School of Natural Resources, University of Nebraska-Lincoln, Lincoln, NE, United States
- Department of Biological Systems Engineering, University of Nebraska-Lincoln, Lincoln, NE, United States
6. Zhu YF, Xu X, Zhang XD, Jiang MS. CCS-UNet: a cross-channel spatial attention model for accurate retinal vessel segmentation. Biomed Opt Express 2023;14:4739-4758. PMID: 37791275. PMCID: PMC10545190. DOI: 10.1364/boe.495766.
Abstract
Precise segmentation of retinal vessels plays an important role in computer-assisted diagnosis. Deep learning models have been applied to retinal vessel segmentation, but their efficacy is limited by the significant scale variation of vascular structures and the intricate background of retinal images. This paper proposes a cross-channel spatial attention U-Net (CCS-UNet) for accurate retinal vessel segmentation. In comparison to other models based on U-Net, our model employs a ResNeSt block for the encoder-decoder architecture. The block has a multi-branch structure that enables the model to extract more diverse vascular features. It facilitates weight distribution across channels through the incorporation of soft attention, which effectively aggregates contextual information in vascular images. Furthermore, we propose an attention mechanism within the skip connection. This mechanism serves to enhance feature integration across various layers, thereby mitigating the degradation of effective information. It helps acquire cross-channel information and enhance the localization of regions of interest, ultimately leading to improved recognition of vascular structures. In addition, the feature fusion module (FFM) is used to provide semantic information for a more refined vascular segmentation map. We evaluated CCS-UNet on five benchmark retinal image datasets: DRIVE, CHASEDB1, STARE, IOSTAR and HRF. Our proposed method exhibits superior segmentation efficacy compared to other state-of-the-art techniques, with a global accuracy of 0.9617/0.9806/0.9766/0.9786/0.9834 and AUC of 0.9863/0.9894/0.9938/0.9902/0.9855 on DRIVE, CHASEDB1, STARE, IOSTAR and HRF respectively. Ablation studies are also performed to evaluate the relative contributions of different architectural components. Our proposed model has potential as a diagnostic aid for retinal diseases.
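Soft attention that redistributes weight across channels can be illustrated, in highly simplified form, by squeeze-and-softmax channel weighting. This is an assumption-laden stand-in for the ResNeSt block's actual mechanism, shown only to make "weight distribution across channels" concrete:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def channel_soft_attention(features):
    """Weight feature channels by a softmax over their global average
    ('squeeze'). features: (C, H, W). Returns the reweighted features
    and the per-channel weights."""
    squeeze = features.mean(axis=(1, 2))      # one scalar per channel
    weights = softmax(squeeze)                # soft channel weights
    return features * weights[:, None, None], weights

# Toy feature map with two channels of different average activation.
feats = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
out, w = channel_soft_attention(feats)
# The channel with the stronger response receives the larger weight.
```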
Affiliation(s)
- Xue-dian Zhang
- Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Min-shan Jiang
- Shanghai Key Laboratory of Contemporary Optics System, College of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
7. Ryu J, Rehman MU, Nizami IF, Chong KT. SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation. Comput Biol Med 2023;163:107132. PMID: 37343468. DOI: 10.1016/j.compbiomed.2023.107132.
Abstract
Retinal vessel segmentation is an important task in medical image analysis and has a variety of applications in the diagnosis and treatment of retinal diseases. In this paper, we propose SegR-Net, a deep learning framework for robust retinal vessel segmentation. SegR-Net utilizes a combination of feature extraction and embedding, deep feature magnification, feature precision and interference, and dense multiscale feature fusion to generate accurate segmentation masks. The model consists of an encoder module that extracts high-level features from the input images and a decoder module that reconstructs the segmentation masks by combining features from the encoder module. The encoder module consists of a feature extraction and embedding block, enhanced by dense multiscale feature fusion, followed by a deep feature magnification (DFM) block that magnifies the retinal vessels. To further improve the quality of the extracted features, we use a group of two convolutional layers after each DFM block. In the decoder module, we utilize a feature precision and interference block and a dense multiscale feature fusion (DMFF) block to combine features from the encoder module and reconstruct the segmentation mask. We also incorporate data augmentation and pre-processing techniques to improve the generalization of the trained model. Experimental results on three publicly available fundus image datasets (CHASE_DB1, STARE, and DRIVE) demonstrate that SegR-Net outperforms state-of-the-art models in terms of accuracy, sensitivity, specificity, and F1 score. The proposed framework can provide more accurate and more efficient segmentation of retinal blood vessels than state-of-the-art techniques, which is essential for clinical decision-making and diagnosis of various eye diseases.
Affiliation(s)
- Jihyoung Ryu
- Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea
- Mobeen Ur Rehman
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
- Imran Fareed Nizami
- Department of Electrical Engineering, Bahria University, Islamabad, Pakistan
- Kil To Chong
- Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea; Advanced Electronics and Information Research Center, Jeonbuk National University, Jeonju 54896, Republic of Korea
8. Zhou W, Bai W, Ji J, Yi Y, Zhang N, Cui W. Dual-path multi-scale context dense aggregation network for retinal vessel segmentation. Comput Biol Med 2023;164:107269. PMID: 37562323. DOI: 10.1016/j.compbiomed.2023.107269.
Abstract
There has been steady progress in the field of deep learning-based blood vessel segmentation. However, several challenging issues continue to limit its progress, including inadequate sample sizes, the neglect of contextual information, and the loss of microvascular details. To address these limitations, we propose a dual-path deep learning framework for blood vessel segmentation. In our framework, the fundus images are divided into concentric patches with different scales to alleviate the overfitting problem. Then, a Multi-scale Context Dense Aggregation Network (MCDAU-Net) is proposed to accurately extract the blood vessel boundaries from these patches. In MCDAU-Net, a Cascaded Dilated Spatial Pyramid Pooling (CDSPP) module is designed and incorporated into intermediate layers of the model, enhancing the receptive field and producing feature maps enriched with contextual information. To improve segmentation performance for low-contrast vessels, we propose an InceptionConv (IConv) module, which can explore deeper semantic features and suppress the propagation of non-vessel information. Furthermore, we design a Multi-scale Adaptive Feature Aggregation (MAFA) module to fuse multi-scale features by assigning adaptive weight coefficients to different feature maps through skip connections. Finally, to exploit complementary contextual information and enhance the continuity of microvascular structures, a fusion module is designed to combine the segmentation results obtained from patches of different sizes, achieving fine microvascular segmentation performance. To assess the effectiveness of our approach, we conducted evaluations on three widely used public datasets: DRIVE, CHASE-DB1, and STARE. Our findings reveal a remarkable advancement over the current state-of-the-art (SOTA) techniques, with mean Se and F1 scores improving by 7.9% and 4.7%, respectively. The code is available at https://github.com/bai101315/MCDAU-Net.
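The concentric multi-scale patch scheme can be read as cropping nested squares around a shared centre, with each scale feeding its own segmentation path. An illustrative sketch under that assumed reading (patch sizes are made-up parameters, not the paper's values):

```python
import numpy as np

def concentric_patches(image, center, sizes=(16, 32, 64)):
    """Crop concentric square patches of increasing size around a
    common centre. image: (H, W) array; center: (row, col); returns
    one crop per scale, smallest first. Hypothetical illustration of
    a multi-scale concentric patch scheme."""
    cy, cx = center
    patches = []
    for s in sizes:
        h = s // 2
        patches.append(image[cy - h:cy + h, cx - h:cx + h])
    return patches

img = np.zeros((128, 128))
crops = concentric_patches(img, center=(64, 64))
# Three nested views of the same location: the small patch preserves
# local vessel detail, the large one supplies surrounding context.
```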
Affiliation(s)
- Wei Zhou
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Weiqi Bai
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Jianhang Ji
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Yugen Yi
- School of Software, Jiangxi Normal University, Nanchang, China
- Ningyi Zhang
- School of Software, Jiangxi Normal University, Nanchang, China
- Wei Cui
- Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore
9. Tan Y, Zhao SX, Yang KF, Li YJ. A lightweight network guided with differential matched filtering for retinal vessel segmentation. Comput Biol Med 2023;160:106924. PMID: 37146492. DOI: 10.1016/j.compbiomed.2023.106924.
Abstract
The geometric morphology of retinal vessels reflects the state of cardiovascular health, and fundus images are important reference materials for ophthalmologists. Great progress has been made in automated vessel segmentation, but few studies have focused on thin vessel breakage and false-positives in areas with lesions or low contrast. In this work, we propose a new network, differential matched filtering guided attention UNet (DMF-AU), to address these issues, incorporating a differential matched filtering layer, feature anisotropic attention, and a multiscale consistency constrained backbone to perform thin vessel segmentation. The differential matched filtering is used for the early identification of locally linear vessels, and the resulting rough vessel map guides the backbone to learn vascular details. Feature anisotropic attention reinforces the vessel features of spatial linearity at each stage of the model. Multiscale constraints reduce the loss of vessel information while pooling within large receptive fields. In tests on multiple classical datasets, the proposed model performed well compared with other algorithms on several specially designed criteria for vessel segmentation. DMF-AU is a high-performance, lightweight vessel segmentation model. The source code is at https://github.com/tyb311/DMF-AU.
Affiliation(s)
- Yubo Tan
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Shi-Xuan Zhao
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Kai-Fu Yang
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Yong-Jie Li
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
10. Sindhusaranya B, Geetha MR. Retinal blood vessel segmentation using root Guided decision tree assisted enhanced Fuzzy C-mean clustering for disease identification. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104525.
11. Rong Y, Xiong Y, Li C, Chen Y, Wei P, Wei C, Fan Z. Segmentation of retinal vessels in fundus images based on U-Net with self-calibrated convolutions and spatial attention modules. Med Biol Eng Comput 2023. PMID: 36899285. DOI: 10.1007/s11517-023-02806-1.
Abstract
Automated and accurate segmentation of retinal vessels in fundus images is an important step for screening and diagnosing various ophthalmologic diseases. However, many factors, including variations of vessels in color, shape and size, make this task an intricate challenge. Among the most popular methods for vessel segmentation are U-Net based methods. However, in U-Net based methods, the size of the convolution kernels is generally fixed. As a result, the receptive field of an individual convolution operation is also fixed, which is not conducive to segmenting retinal vessels of various thicknesses. To overcome this problem, in this paper, we employed self-calibrated convolutions to replace the traditional convolutions of the U-Net, which lets the U-Net learn discriminative representations from different receptive fields. Besides, we proposed an improved spatial attention module, instead of using traditional convolutions, to connect the encoding part and decoding part of the U-Net, which improves the ability of the U-Net to detect thin vessels. The proposed method has been tested on the Digital Retinal Images for Vessel Extraction (DRIVE) database and the Child Heart and Health Study in England database (CHASE DB1). The metrics used to evaluate the performance of the proposed method are accuracy (ACC), sensitivity (SE), specificity (SP), F1-score (F1) and the area under the receiver operating characteristic curve (AUC). The ACC, SE, SP, F1 and AUC obtained by the proposed method are 0.9680, 0.8036, 0.9840, 0.8138 and 0.9840 respectively on the DRIVE database, and 0.9756, 0.8118, 0.9867, 0.8068 and 0.9888 respectively on CHASE DB1, which are better than those obtained by the traditional U-Net (the ACC, SE, SP, F1 and AUC obtained by U-Net are 0.9646, 0.7895, 0.9814, 0.7963 and 0.9791 respectively on the DRIVE database, and 0.9733, 0.7817, 0.9862, 0.7870 and 0.9810 respectively on CHASE DB1). The experimental results indicate that the proposed modifications in the U-Net are effective for vessel segmentation.
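The ACC, SE, SP and F1 metrics quoted above follow standard confusion-matrix definitions; a reference sketch (AUC is omitted since it requires continuous prediction scores rather than binary masks):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, sensitivity, specificity and F1 for binary masks,
    using the standard confusion-matrix definitions."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # vessel pixels found
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    fp = np.sum(pred & ~truth)   # background marked as vessel
    fn = np.sum(~pred & truth)   # vessel pixels missed
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return acc, se, sp, f1

pred  = np.array([1, 1, 0, 0, 1, 0])
truth = np.array([1, 0, 0, 0, 1, 1])
acc, se, sp, f1 = segmentation_metrics(pred, truth)
# Here tp=2, tn=2, fp=1, fn=1, so all four metrics equal 2/3.
```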
Affiliation(s)
- YiBiao Rong
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Yu Xiong
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Chong Li
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Ying Chen
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Peiwei Wei
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Department of Microbiology and Immunology, Shantou University Medical College, Guangdong, 515041, China
- Chuliang Wei
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
- Zhun Fan
- Department of Electronic and Information Engineering, Shantou University, 515063, Guangdong, China
- Key Lab of Digital Signal and Image Processing of Guangdong Province, Shantou University, 515063, Guangdong, China
12. Challoob M, Gao Y, Busch A, Nikzad M. Separable Paravector Orientation Tensors for Enhancing Retinal Vessels. IEEE Trans Med Imaging 2023;42:880-893. PMID: 36331638. DOI: 10.1109/tmi.2022.3219436.
Abstract
Robust detection of retinal vessels remains an unsolved research problem, particularly in handling the intrinsic real-world challenges of highly imbalanced contrast between thick and thin vessels, inhomogeneous background regions, uneven illumination, and complex geometries of crossings and bifurcations. This paper presents a new separable paravector orientation tensor that addresses these difficulties by making the enhancement of retinal vessels dependent on a nonlinear scale representation, invariant to changes in contrast and lighting, responsive to symmetric patterns, and fitted with elliptical cross-sections. The proposed method is built on projecting vessels as a 3D paravector-valued function rotated in an alpha-quarter domain, providing geometrical, structural, symmetric, and energetic features. We introduce an innovative symmetrical inhibitory scheme that incorporates paravector features to produce a set of directional, contrast-independent, elongated patterns reconstructing the vessel tree in orientation tensors. By fitting constrained elliptical volumes via eigensystem analysis, the final vessel tree is produced with a strong and uniform response preserving various vessel features. Validation of the proposed method on clinically relevant retinal images, with high-quality results, shows its excellent performance compared to state-of-the-art benchmarks and second human observers.
13. Wang L, Ye X, Ju L, He W, Zhang D, Wang X, Huang Y, Feng W, Song K, Ge Z. Medical matting: Medical image segmentation with uncertainty from the matting perspective. Comput Biol Med 2023;158:106714. PMID: 37003068. DOI: 10.1016/j.compbiomed.2023.106714.
Abstract
High-quality manual labeling of ambiguous and complex-shaped targets with binary masks can be challenging. The weakness of binary masks, their insufficient expressiveness, is prominent in segmentation, especially in medical scenarios where blurring is prevalent. Thus, reaching a consensus among clinicians through binary masks is more difficult in multi-person labeling cases. These inconsistent or uncertain areas are related to the lesions' structure and may contain anatomical information conducive to an accurate diagnosis. However, recent research focuses on uncertainties of model training and data labeling; none has investigated the influence of the ambiguous nature of the lesion itself. Inspired by image matting, this paper introduces a soft mask, called an alpha matte, to medical scenes. It can describe lesions in more detail than a binary mask. Moreover, it can also serve as a new uncertainty quantification method to represent uncertain areas, filling the gap in research on the uncertainty of lesion structure. In this work, we introduce a multi-task framework to generate binary masks and alpha mattes, which outperforms all compared state-of-the-art matting algorithms. The uncertainty map is proposed to imitate the trimap in matting methods, which can highlight fuzzy areas and improve matting performance. We have created three medical datasets with alpha mattes to address the lack of available matting datasets in medical fields and comprehensively evaluated the effectiveness of our proposed method on them. Furthermore, experiments demonstrate that the alpha matte is a more effective labeling method than the binary mask from both qualitative and quantitative aspects.
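The alpha matte generalizes a binary mask through the standard compositing equation I = alpha*F + (1 - alpha)*B, of which a binary mask is the special case alpha in {0, 1}. A minimal sketch of that formulation (toy intensity values, not from the paper):

```python
import numpy as np

def composite(alpha, fg, bg):
    """Matting equation I = alpha*F + (1 - alpha)*B. alpha is a
    per-pixel soft mask in [0, 1]; fg/bg are foreground (lesion) and
    background intensities. Fractional alpha models fuzzy boundary
    pixels that a hard binary mask cannot express."""
    return alpha * fg + (1.0 - alpha) * bg

alpha = np.array([[1.0, 0.5],
                  [0.0, 0.25]])       # soft mask: 0.5/0.25 = fuzzy edge
fg = np.full((2, 2), 200.0)           # assumed lesion intensity
bg = np.full((2, 2), 40.0)            # assumed background intensity
img = composite(alpha, fg, bg)
# Boundary pixels take intermediate intensities (120, 80), while the
# pure-foreground and pure-background pixels reproduce 200 and 40.
```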
|
14
|
Nan Y, Tang P, Zhang G, Zeng C, Liu Z, Gao Z, Zhang H, Yang G. Unsupervised Tissue Segmentation via Deep Constrained Gaussian Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3799-3811. [PMID: 35905069 DOI: 10.1109/tmi.2022.3195123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Tissue segmentation is the mainstay of pathological examination, whereas the manual delineation is unduly burdensome. To assist this time-consuming and subjective manual step, researchers have devised methods to automatically segment structures in pathological images. Recently, automated machine and deep learning based methods dominate tissue segmentation research studies. However, most machine and deep learning based approaches are supervised and developed using a large number of training samples, in which the pixel-wise annotations are expensive and sometimes can be impossible to obtain. This paper introduces a novel unsupervised learning paradigm by integrating an end-to-end deep mixture model with a constrained indicator to acquire accurate semantic tissue segmentation. This constraint aims to centralise the components of deep mixture models during the calculation of the optimisation function. In so doing, the redundant or empty class issues, which are common in current unsupervised learning methods, can be greatly reduced. By validation on both public and in-house datasets, the proposed deep constrained Gaussian network achieves significantly (Wilcoxon signed-rank test) better performance (with the average Dice scores of 0.737 and 0.735, respectively) on tissue segmentation with improved stability and robustness, compared to other existing unsupervised segmentation approaches. Furthermore, the proposed method presents a similar performance (p-value >0.05) compared to the fully supervised U-Net.
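The mixture model at the core of this approach can be sketched with classical EM on a 1-D toy "image" of pixel intensities. This illustrates only the underlying two-component Gaussian mixture; the paper's deep network and centralisation constraint are not reproduced here, and all numbers are invented.

```python
import numpy as np

# Synthetic 1-D intensities from two tissue classes (toy data).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.2, 0.05, 500),   # dark tissue class
                    rng.normal(0.8, 0.05, 500)])  # bright tissue class

mu = np.array([0.0, 1.0])          # initial component means
sigma = np.array([0.3, 0.3])       # initial standard deviations
pi = np.array([0.5, 0.5])          # initial mixing weights

for _ in range(50):
    # E-step: responsibility of each component for each pixel
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

labels = resp.argmax(axis=1)       # unsupervised hard segmentation
```

The "redundant or empty class" issue the abstract mentions appears in such fits when a component's responsibility mass `nk` collapses toward zero; the paper's constraint centralises components to avoid it.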
|
15
|
Wang L, Ye X, Zhang D, He W, Ju L, Luo Y, Luo H, Wang X, Feng W, Song K, Zhao X, Ge Z. 3D matting: A benchmark study on soft segmentation method for pulmonary nodules applied in computed tomography. Comput Biol Med 2022; 150:106153. [PMID: 36228464 DOI: 10.1016/j.compbiomed.2022.106153] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 08/20/2022] [Accepted: 09/24/2022] [Indexed: 11/22/2022]
Abstract
Usually, lesions are not isolated but are associated with the surrounding tissues. For example, the growth of a tumour can depend on or infiltrate into the surrounding tissues. Due to the pathological nature of the lesions, it is challenging to distinguish their boundaries in medical imaging. However, these uncertain regions may contain diagnostic information. Therefore, the simple binarization of lesions by traditional binary segmentation can result in the loss of diagnostic information. In this work, we introduce the image matting into the 3D scenes and use the alpha matte, i.e., a soft mask, to describe lesions in a 3D medical image. The traditional soft mask acted as a training trick to compensate for the easily mislabelled or under-labelled ambiguous regions. In contrast, 3D matting uses soft segmentation to characterize the uncertain regions more finely, which means that it retains more structural information for subsequent diagnosis and treatment. The current study of image matting methods in 3D is limited. To address this issue, we conduct a comprehensive study of 3D matting, including both traditional and deep-learning-based methods. We adapt four state-of-the-art 2D image matting algorithms to 3D scenes and further customize the methods for CT images to calibrate the alpha matte with the radiodensity. Moreover, we propose the first end-to-end deep 3D matting network and implement a solid 3D medical image matting benchmark. Its efficient counterparts are also proposed to achieve a good performance-computation balance. Furthermore, there is no high-quality annotated dataset related to 3D matting, slowing down the development of data-driven deep-learning-based methods. To address this issue, we construct the first 3D medical matting dataset. The validity of the dataset was verified through clinicians' assessments and downstream experiments. The dataset and codes will be released to encourage further research.
Affiliation(s)
- Lin Wang
  College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, China; Monash Medical AI Group, Monash University, Clayton, Australia; Beijing Airdoc Technology Co., Ltd., Beijing, China
- Xiufen Ye
  College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin, China
- Donghao Zhang
  Monash Medical AI Group, Monash University, Clayton, Australia
- Wanji He
  Beijing Airdoc Technology Co., Ltd., Beijing, China
- Lie Ju
  Monash Medical AI Group, Monash University, Clayton, Australia; Beijing Airdoc Technology Co., Ltd., Beijing, China
- Yi Luo
  Chongqing Hospital of Traditional Chinese Medicine, Chongqing, China
- Huan Luo
  Chongqing Renji Hospital of Chinese Academy of Sciences, Chongqing, China
- Xin Wang
  Beijing Airdoc Technology Co., Ltd., Beijing, China
- Wei Feng
  Monash Medical AI Group, Monash University, Clayton, Australia; Beijing Airdoc Technology Co., Ltd., Beijing, China
- Kaimin Song
  Beijing Airdoc Technology Co., Ltd., Beijing, China
- Xin Zhao
  Beijing Airdoc Technology Co., Ltd., Beijing, China
- Zongyuan Ge
  Monash Medical AI Group, Monash University, Clayton, Australia; Beijing Airdoc Technology Co., Ltd., Beijing, China
|
16
|
Zhong X, Zhang H, Li G, Ji D. Do you need sharpened details? Asking MMDC-Net: Multi-layer multi-scale dilated convolution network for retinal vessel segmentation. Comput Biol Med 2022; 150:106198. [PMID: 37859292 DOI: 10.1016/j.compbiomed.2022.106198] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 09/19/2022] [Accepted: 10/09/2022] [Indexed: 11/24/2022]
Abstract
Convolutional neural networks (CNN), especially numerous U-shaped models, have achieved great progress in retinal vessel segmentation. However, a great quantity of global information in fundus images has not been fully explored, and the class imbalance between background and blood vessels remains serious. To alleviate these issues, we design a novel multi-layer multi-scale dilated convolution network (MMDC-Net) based on U-Net. We propose an MMDC module to capture sufficient global information under diverse receptive fields through a cascaded mode. Then, we place a new multi-layer fusion (MLF) module behind the decoder, which can not only fuse complementary features but also filter out noisy information. This enables MMDC-Net to capture blood vessel details after continuous up-sampling. Finally, we employ a recall loss to resolve the class imbalance problem. Extensive experiments have been conducted on diverse fundus color image datasets, including STARE, CHASE_DB1, DRIVE, and HRF. HRF has a large resolution of 3504 × 2336, whereas the others have a small resolution of slightly more than 512 × 512. Qualitative and quantitative results verify the superiority of MMDC-Net. Notably, satisfactory accuracy and sensitivity are acquired by our model, and some key blood vessel details are sharpened. In addition, a large number of further validations and discussions prove the effectiveness and generalization of the proposed MMDC-Net.
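The dilated convolutions that give such a module its multi-scale receptive fields can be sketched in plain NumPy (an illustration of the operator only, not the authors' implementation): a k x k kernel applied with dilation d samples the input "with holes", covering an effective field of k + (k-1)(d-1) pixels while keeping the same number of weights.

```python
import numpy as np

# Naive (valid-padding) 2D convolution with an integer dilation rate d.
def dilated_conv2d(img, kernel, d):
    k = kernel.shape[0]
    eff = k + (k - 1) * (d - 1)            # effective receptive field
    h = img.shape[0] - eff + 1
    w = img.shape[1] - eff + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[i:i + eff:d, j:j + eff:d]   # dilated sampling
            out[i, j] = (patch * kernel).sum()
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
k3 = np.ones((3, 3))
assert dilated_conv2d(img, k3, 1).shape == (6, 6)   # plain 3x3 conv
assert dilated_conv2d(img, k3, 2).shape == (4, 4)   # same kernel, 5x5 field
```

Cascading d = 1, 2, 4 over the same 3 x 3 kernel grows the field from 3 to 5 to 9 pixels, which is how a cascaded multi-scale module gathers context cheaply.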
Affiliation(s)
- Xiang Zhong
  School of Software, East China Jiaotong University, China
- Hongbin Zhang
  School of Software, East China Jiaotong University, China
- Guangli Li
  School of Information Engineering, East China Jiaotong University, China
- Donghong Ji
  School of Cyber Science and Engineering, Wuhan University, China
|
17
|
Li Y, Zhang Y, Cui W, Lei B, Kuang X, Zhang T. Dual Encoder-Based Dynamic-Channel Graph Convolutional Network With Edge Enhancement for Retinal Vessel Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1975-1989. [PMID: 35167444 DOI: 10.1109/tmi.2022.3151666] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Retinal vessel segmentation with deep learning technology is a crucial auxiliary method for clinicians to diagnose fundus diseases. However, the deep learning approaches inevitably lose the edge information, which contains spatial features of vessels while performing down-sampling, leading to the limited segmentation performance of fine blood vessels. Furthermore, the existing methods ignore the dynamic topological correlations among feature maps in the deep learning framework, resulting in the inefficient capture of the channel characterization. To address these limitations, we propose a novel dual encoder-based dynamic-channel graph convolutional network with edge enhancement (DE-DCGCN-EE) for retinal vessel segmentation. Specifically, we first design an edge detection-based dual encoder to preserve the edge of vessels in down-sampling. Secondly, we investigate a dynamic-channel graph convolutional network to map the image channels to the topological space and synthesize the features of each channel on the topological map, which solves the limitation of insufficient channel information utilization. Finally, we study an edge enhancement block, aiming to fuse the edge and spatial features in the dual encoder, which is beneficial to improve the accuracy of fine blood vessel segmentation. Competitive experimental results on five retinal image datasets validate the efficacy of the proposed DE-DCGCN-EE, which achieves more remarkable segmentation results against the other state-of-the-art methods, indicating its potential clinical application.
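A plain graph-convolution layer over channel nodes can be sketched in NumPy to make the "map channels to a topological space" idea concrete. This is my illustration only: the paper's dynamic-channel variant additionally learns the adjacency from feature similarity, which is not reproduced here, and all sizes are invented.

```python
import numpy as np

# One symmetric-normalised GCN layer: X' = ReLU(D^{-1/2} A_hat D^{-1/2} X W).
def gcn_layer(X, A, W):
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)  # ReLU

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 8))        # 4 feature channels, 8-d descriptor each
A = np.array([[0, 1, 0, 0],        # toy channel-topology adjacency
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = rng.normal(size=(8, 16))       # learnable projection
out = gcn_layer(X, A, W)           # (4, 16) re-encoded channel features
```

Each channel's new feature mixes information from its topological neighbours, which is the mechanism the abstract credits for better channel utilization.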
|
18
|
Zhang H, Zhong X, Li Z, Chen Y, Zhu Z, Lv J, Li C, Zhou Y, Li G. TiM-Net: Transformer in M-Net for Retinal Vessel Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:9016401. [PMID: 35859930 PMCID: PMC9293566 DOI: 10.1155/2022/9016401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/04/2022] [Accepted: 06/21/2022] [Indexed: 11/17/2022]
Abstract
The retinal image is a crucial window for the clinical observation of cardiovascular, cerebrovascular, and other correlated diseases, and retinal vessel segmentation is of great benefit to clinical diagnosis. Recently, the convolutional neural network (CNN) has become a dominant method in the retinal vessel segmentation field, especially U-shaped CNN models. However, the conventional encoder in a CNN is vulnerable to noisy interference, and the long-range relationships in fundus images have not been fully utilized. In this paper, we propose a novel model called Transformer in M-Net (TiM-Net), based on M-Net, diverse attention mechanisms, and weighted side output layers, to efficaciously perform retinal vessel segmentation. First, to alleviate the effects of noise, a dual-attention mechanism based on channel and spatial attention is designed. Then the self-attention mechanism from the Transformer is introduced into the skip connections to re-encode features and model long-range relationships explicitly. Finally, a weighted SideOut layer is proposed for better utilization of the features from each side layer. Extensive experiments are conducted on three public datasets to show the effectiveness and robustness of our TiM-Net compared with state-of-the-art baselines. Both quantitative and qualitative results prove its clinical practicality. Moreover, variants of TiM-Net also achieve competitive performance, demonstrating its scalability and generalization ability. The code of our model is available at https://github.com/ZX-ECJTU/TiM-Net.
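The self-attention mechanism inserted into the skip connections can be sketched as scaled dot-product attention in NumPy (an illustration of the mechanism only, not the TiM-Net code; the token and feature sizes are invented).

```python
import numpy as np

# Single-head scaled dot-product self-attention: every spatial token is
# re-encoded as a weighted mixture of all tokens, modelling long-range
# relationships explicitly.
def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])            # pairwise affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over tokens
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 32))                         # 16 tokens, 32-d each
Wq, Wk, Wv = (rng.normal(size=(32, 32)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)                   # (16, 32) re-encoded
```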
Affiliation(s)
- Hongbin Zhang
  School of Software, East China Jiaotong University, Nanchang, China
- Xiang Zhong
  School of Software, East China Jiaotong University, Nanchang, China
- Zhijie Li
  School of Software, East China Jiaotong University, Nanchang, China
- Yanan Chen
  School of International, East China Jiaotong University, Nanchang, China
- Zhiliang Zhu
  School of Software, East China Jiaotong University, Nanchang, China
- Jingqin Lv
  School of Software, East China Jiaotong University, Nanchang, China
- Chuanxiu Li
  School of Information Engineering, East China Jiaotong University, Nanchang, China
- Ying Zhou
  Medical School, Nanchang University, Nanchang, China
- Guangli Li
  School of Information Engineering, East China Jiaotong University, Nanchang, China
|
19
|
Fundus Retinal Vessels Image Segmentation Method Based on Improved U-Net. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.03.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
20
|
DAVS-NET: Dense Aggregation Vessel Segmentation Network for retinal vasculature detection in fundus images. PLoS One 2022; 16:e0261698. [PMID: 34972109 PMCID: PMC8719769 DOI: 10.1371/journal.pone.0261698] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 12/07/2021] [Indexed: 12/26/2022] Open
Abstract
In this era, deep learning-based medical image analysis has become a reliable aid to medical practitioners in diagnosing various retinal diseases such as hypertension, diabetic retinopathy (DR), arteriosclerosis, glaucoma, and macular edema. Among these retinal diseases, DR can lead to vision loss in diabetic patients by causing swelling of the retinal blood vessels or even the creation of new vessels. This creation of new vessels and swelling can be analyzed as a biomarker for the screening and analysis of DR. Deep learning-based semantic segmentation of these vessels can be an effective tool to detect changes in the retinal vasculature for diagnostic purposes. This segmentation task is challenging because of low-quality retinal images with differing image acquisition conditions and intensity variations. Existing retinal blood vessel segmentation methods require a large number of trainable parameters for training their networks. This paper introduces a novel Dense Aggregation Vessel Segmentation Network (DAVS-Net), which can achieve high segmentation performance with only a few trainable parameters. For faster convergence, this network uses an encoder-decoder framework in which edge information is transferred from the first layers of the encoder to the last layer of the decoder. The performance of the proposed network is evaluated on the publicly available retinal blood vessel datasets DRIVE, CHASE_DB1, and STARE. The proposed method achieves state-of-the-art segmentation accuracy with a small number of trainable parameters.
|
21
|
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2021; 26:2204-2215. [PMID: 34727041 DOI: 10.1109/jbhi.2021.3124514] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Precise quantification of tree-like structures from biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important in understanding normal function and pathologic processes in biology. Some handcrafted methods have been proposed for this purpose in recent years. However, each is designed only for a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can be applied to many different applications, based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. Then the MFRS is designed by extending each ray of the RS to multiple parallel rays, which extract a set of feature sequences. A Gaussian kernel is then used to fuse these feature sequences and output one feature sequence. Furthermore, we design a DC-TCN to make the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications for tree-like structures. Experiments on three different applications, including soma shape reconstruction, neuronal shape reconstruction, and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
|
22
|
Ding L, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. Weakly-Supervised Vessel Detection in Ultra-Widefield Fundus Photography via Iterative Multi-Modal Registration and Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2748-2758. [PMID: 32991281 PMCID: PMC8513803 DOI: 10.1109/tmi.2020.3027665] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
We propose a deep-learning based annotation-efficient framework for vessel detection in ultra-widefield (UWF) fundus photography (FP) that does not require de novo labeled UWF FP vessel maps. Our approach utilizes concurrently captured UWF fluorescein angiography (FA) images, for which effective deep learning approaches have recently become available, and iterates between a multi-modal registration step and a weakly-supervised learning step. In the registration step, the UWF FA vessel maps detected with a pre-trained deep neural network (DNN) are registered with the UWF FP via parametric chamfer alignment. The warped vessel maps can be used as the tentative training data but inevitably contain incorrect (noisy) labels due to the differences between FA and FP modalities and the errors in the registration. In the learning step, a robust learning method is proposed to train DNNs with noisy labels. The detected FP vessel maps are used for the registration in the following iteration. The registration and the vessel detection benefit from each other and are progressively improved. Once trained, the UWF FP vessel detection DNN from the proposed approach allows FP vessel detection without requiring concurrently captured UWF FA images. We validate the proposed framework on a new UWF FP dataset, PRIME-FP20, and on existing narrow-field FP datasets. Experimental evaluation, using both pixel-wise metrics and the CAL metrics designed to provide better agreement with human assessment, shows that the proposed approach provides accurate vessel detection, without requiring manually labeled UWF FP training data.
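The chamfer-alignment objective driving the registration step can be sketched on toy point sets (my illustration of the idea, not the paper's parametric implementation): the warp is chosen to minimise the mean distance from each detected vessel pixel to its nearest counterpart in the other map.

```python
import numpy as np

# Toy chamfer score: for every point in a, the distance to the nearest point
# in b, averaged over a. Real pipelines precompute a distance transform of b
# instead of this O(|a||b|) brute force.
def chamfer(points_a, points_b):
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return d.min(axis=1).mean()

a = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # toy vessel pixels
b_aligned = a + 0.1    # nearly registered copy
b_shifted = a + 5.0    # badly registered copy
assert chamfer(a, b_aligned) < chamfer(a, b_shifted)
```

Lower chamfer scores correspond to better-registered vessel maps, which is why alternating registration with vessel detection can improve both.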
|
23
|
Abdul Rahman A, Biswal B, P GP, Hasan S, Sairam M. Robust segmentation of vascular network using deeply cascaded AReN-UNet. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102953] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
24
|
Tabassum N, Wang J, Ferguson M, Herz J, Dong M, Louveau A, Kipnis J, Acton ST. Image segmentation for neuroscience: lymphatics. JPHYS PHOTONICS 2021. [DOI: 10.1088/2515-7647/ac050e] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
A recent discovery in neuroscience prompts the need for innovation in image analysis. Neuroscientists have discovered the existence of meningeal lymphatic vessels in the brain and have shown their importance in preventing cognitive decline in mouse models of Alzheimer’s disease. With age, lymphatic vessels narrow and poorly drain cerebrospinal fluid, leading to plaque accumulation, a marker for Alzheimer’s disease. The detection of vessel boundaries and width are performed by hand in current practice and thereby suffer from high error rates and potential observer bias. The existing vessel segmentation methods are dependent on user-defined initialization, which is time-consuming and difficult to achieve in practice due to high amounts of background clutter and noise. This work proposes a level set segmentation method featuring hierarchical matting, LyMPhi, to predetermine foreground and background regions. The level set force field is modulated by the foreground information computed by matting, while also constraining the segmentation contour to be smooth. Segmentation output from this method has a higher overall Dice coefficient and boundary F1-score compared to that of competing algorithms. The algorithms are tested on real and synthetic data generated by our novel shape deformation based approach. LyMPhi is also shown to be more stable under different initial conditions as compared to existing level set segmentation methods. Finally, statistical analysis on manual segmentation is performed to prove the variation and disagreement between three annotators.
|
25
|
Ramos-Soto O, Rodríguez-Esparza E, Balderas-Mata SE, Oliva D, Hassanien AE, Meleppat RK, Zawadzki RJ. An efficient retinal blood vessel segmentation in eye fundus images by using optimized top-hat and homomorphic filtering. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 201:105949. [PMID: 33567382 DOI: 10.1016/j.cmpb.2021.105949] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Accepted: 01/18/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic segmentation of retinal blood vessels makes a major contribution to the CADx of various ophthalmic and cardiovascular diseases. A procedure to segment thin and thick retinal vessels is essential for the medical analysis and diagnosis of related diseases. In this article, a novel methodology for robust vessel segmentation is proposed that handles the existing challenges presented in the literature. METHODS The proposed methodology consists of three stages: pre-processing, main processing, and post-processing. The first stage consists of applying filters for image smoothing. The main processing stage is divided into two configurations, the first to segment thick vessels through the new optimized top-hat, homomorphic filtering, and a median filter. The second configuration is used to segment thin vessels using the proposed optimized top-hat, homomorphic filtering, a matched filter, and segmentation using the MCET-HHO multilevel algorithm. Finally, morphological image operations are carried out in the post-processing stage. RESULTS The proposed approach was assessed on two publicly available databases (DRIVE and STARE) through three performance metrics: specificity, sensitivity, and accuracy. Average values of 0.9860, 0.7578, and 0.9667, respectively, were achieved for the DRIVE dataset, and 0.9836, 0.7474, and 0.9580 for the STARE dataset. CONCLUSIONS The numerical results obtained by the proposed technique achieve competitive average values with up-to-date techniques. The proposed approach outperforms all leading unsupervised methods discussed in terms of specificity and accuracy. In addition, it outperforms most of the state-of-the-art supervised methods without the computational cost associated with those algorithms. Detailed visual analysis has shown that a more precise segmentation of thin vessels was possible with the proposed approach when compared with other procedures.
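The classical white top-hat operator this pipeline optimises can be sketched in pure NumPy (my illustration, not the authors' optimized version): top-hat = image minus its morphological opening, which keeps bright structures thinner than the structuring element, i.e. vessels.

```python
import numpy as np

# Grayscale erosion/dilation with a flat square structuring element,
# implemented via sliding-window min/max over an edge-padded image.
def grey_erode(img, size):
    p = size // 2
    padded = np.pad(img, p, mode="edge")
    return np.min(
        [padded[i:i + img.shape[0], j:j + img.shape[1]]
         for i in range(size) for j in range(size)], axis=0)

def grey_dilate(img, size):
    p = size // 2
    padded = np.pad(img, p, mode="edge")
    return np.max(
        [padded[i:i + img.shape[0], j:j + img.shape[1]]
         for i in range(size) for j in range(size)], axis=0)

def white_tophat(img, size):
    opening = grey_dilate(grey_erode(img, size), size)  # remove thin bright parts
    return img - opening                                # keep only what was removed

img = np.zeros((9, 9))
img[4, :] = 1.0                  # a one-pixel-wide bright "vessel"
th = white_tophat(img, 3)
```

A 3 x 3 opening erases the one-pixel line entirely, so the top-hat response is the line itself while flat background cancels to zero.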
Affiliation(s)
- Oscar Ramos-Soto
  División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico
- Erick Rodríguez-Esparza
  División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico; DeustoTech, Faculty of Engineering, University of Deusto, Av. Universidades, 24, 48007 Bilbao, Spain
- Sandra E Balderas-Mata
  División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico
- Diego Oliva
  División de Electrónica y Computación, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, C.P. 44430, Guadalajara, Jal., Mexico; IN3 - Computer Science Dept., Universitat Oberta de Catalunya, Castelldefels, Spain
- Ratheesh K Meleppat
  UC Davis Eyepod Imaging Laboratory, Dept. of Cell Biology and Human Anatomy, University of California Davis, Davis, CA 95616, USA; Dept. of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, USA
- Robert J Zawadzki
  UC Davis Eyepod Imaging Laboratory, Dept. of Cell Biology and Human Anatomy, University of California Davis, Davis, CA 95616, USA; Dept. of Ophthalmology & Vision Science, University of California Davis, Sacramento, CA, USA
|
26
|
Maharjan A, Alsadoon A, Prasad PWC, AlSallami N, Rashid TA, Alrubaie A, Haddad S. A novel solution of using mixed reality in bowel and oral and maxillofacial surgical telepresence: 3D mean value cloning algorithm. Int J Med Robot 2021; 17:e2224. [PMID: 33426753 DOI: 10.1002/rcs.2224] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Revised: 09/01/2020] [Accepted: 09/01/2020] [Indexed: 11/07/2022]
Abstract
BACKGROUND AND AIM Most of the mixed reality models used in surgical telepresence suffer from discrepancies in the boundary area and spatial-temporal inconsistency due to illumination variation in the video frames. The aim of this work is to propose a new solution that helps produce a composite video by merging the augmented video of the surgery site with the virtual hand of the remote expert surgeon. The purpose of the proposed solution is to decrease the processing time and enhance the accuracy of the merged video by decreasing the overlay and visualization error and removing occlusion and artefacts. METHODOLOGY The proposed system enhances the mean-value cloning algorithm to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm incorporates three-dimensional mean-value coordinates and an improvised mean-value interpolant into the image cloning process, which helps to reduce the sawtooth, smudging, and discolouration artefacts around the blending region. RESULTS The accuracy of the proposed solution in terms of overlay error is improved from 1.01 to 0.80 mm, whereas the accuracy in terms of visualization error is improved from 98.8% to 99.4%. The processing time is reduced from 0.211 s to 0.173 s. The processing time and the accuracy of the proposed solution are enhanced compared to the state-of-the-art solution. CONCLUSION Our solution helps make the object of interest consistent with the light intensity of the target image by adding the space distance, which helps maintain spatial consistency in the final merged video.
Affiliation(s)
- Arjina Maharjan
  School of Computing and Mathematics, Charles Sturt University (CSU), Sydney, Australia
- Abeer Alsadoon
  School of Computing and Mathematics, Charles Sturt University (CSU), Sydney, Australia; School of Computer Data and Mathematical Sciences, University of Western Sydney (UWS), Sydney, Australia; School of Information Technology, Southern Cross University (SCU), Sydney, Australia; Information Technology Department, Asia Pacific International College (APIC), Sydney, Australia
- P W C Prasad
  School of Computing and Mathematics, Charles Sturt University (CSU), Sydney, Australia
- Nada AlSallami
  Computer Science Department, Worcester State University, Massachusetts, USA
- Tarik A Rashid
  Computer Science and Engineering, University of Kurdistan Hewler, Erbil, KRG, Iraq
- Ahmad Alrubaie
  Faculty of Medicine, University of New South Wales, Sydney, Australia
- Sami Haddad
  Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, Australia; Department of Oral and Maxillofacial Services, Central Coast Area Health, Australia
|
27
|
Bilal A, Sun G, Mazhar S. Survey on recent developments in automatic detection of diabetic retinopathy. J Fr Ophtalmol 2021; 44:420-440. [PMID: 33526268 DOI: 10.1016/j.jfo.2020.08.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 08/24/2020] [Indexed: 12/13/2022]
Abstract
Diabetic retinopathy (DR) is a disease facilitated by the rapid spread of diabetes worldwide. DR can blind diabetic individuals. Early detection of DR is essential to restoring vision and providing timely treatment. DR can be detected manually by an ophthalmologist, who examines the retinal and fundus images to analyze the macula, morphological changes in blood vessels, hemorrhages, exudates, and/or microaneurysms. This is a time-consuming, costly, and challenging task. An automated system can easily perform this function by using artificial intelligence, especially in screening for early DR. Recently, much state-of-the-art research relevant to the identification of DR has been reported. This article describes the current methods of detecting non-proliferative diabetic retinopathy, exudates, hemorrhage, and microaneurysms. In addition, the authors point out future directions for overcoming current challenges in the field of DR research.
Affiliation(s)
- A Bilal
  Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- G Sun
  Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
- S Mazhar
  Faculty of Information Technology, Beijing University of Technology, Chaoyang District, Beijing 100124, China
|
28
|
Wang B, Shi H, Cui E, Zhao H, Yang D, Zhu J, Dou S. A robust and efficient framework for tubular structure segmentation in chest CT images. Technol Health Care 2021; 29:655-665. [PMID: 33427700 DOI: 10.3233/thc-202431] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Tubular structure segmentation in chest CT images can reduce false positives (FPs) dramatically and improve the performance of nodules malignancy levels classification. OBJECTIVE In this study, we present a framework that can segment the pulmonary tubular structure regions robustly and efficiently. METHODS Firstly, we formulate a global tubular structure identification model based on Frangi filter. The model can recognize irregular vascular structures including bifurcation, small vessel, and junction, robustly and sensitively in 2D images. In addition, to segment the vessels from JVN, we design a local tubular structure identification model with a sliding window. Finally, we propose a multi-view voxel discriminating scheme on the basis of the previous two models. This scheme reduces the computational complexity of obtaining high entropy spatial tubular structure information. RESULTS Experimental results have shown that the proposed framework achieves TPR of 85.79%, FPR of 24.83%, and ACC of 84.47% with the average elapsed time of 162.9 seconds. CONCLUSIONS The framework provides an automated approach for effectively segmenting tubular structure from the chest CT images.
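A bare-bones 2D Frangi-style vesselness measure can sketch what a Hessian-based tubular-structure filter computes (my illustration of the classical measure this framework builds on; real implementations smooth with Gaussians over several scales, which is omitted here, and the parameter values are my choices). A tube-like pixel has one strongly negative Hessian eigenvalue and one near-zero eigenvalue.

```python
import numpy as np

def vesselness(img, beta=0.5, c=0.2):
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # closed-form eigenvalues of the 2x2 Hessian at every pixel
    tmp = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    l1 = (gxx + gyy - tmp) / 2.0
    l2 = (gxx + gyy + tmp) / 2.0
    swap = np.abs(l1) > np.abs(l2)          # enforce |l1| <= |l2|
    l1[swap], l2[swap] = l2[swap], l1[swap]
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)  # blobness vs. line-ness ratio
    s = np.sqrt(l1 ** 2 + l2 ** 2)          # second-order structure strength
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                         # keep bright-on-dark structures only
    return v

img = np.zeros((15, 15))
img[7, :] = 1.0                  # a one-pixel-wide bright "tube"
v = vesselness(img)
```

The response is high along the synthetic tube and exactly zero on flat background, which is the property that makes the filter sensitive to vessels yet quiet elsewhere.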
Affiliation(s)
- Bin Wang
- Embedded Technology Laboratory, School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang, Liaoning, China
- Han Shi
- Embedded Technology Laboratory, School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang, Liaoning, China
- Enuo Cui
- Embedded Technology Laboratory, School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China; School of Information Science and Engineering, Shenyang University, Shenyang, Liaoning, China
- Hai Zhao
- Embedded Technology Laboratory, School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang, Liaoning, China
- Dongxiang Yang
- Affiliated Hospital of Liaoning University of Traditional Chinese Medicine, Shenyang, Liaoning, China
- Jian Zhu
- Embedded Technology Laboratory, School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Shengchang Dou
- Embedded Technology Laboratory, School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
29
A Hybrid Unsupervised Approach for Retinal Vessel Segmentation. Biomed Res Int 2020; 2020:8365783. [PMID: 33381585] [PMCID: PMC7749777] [DOI: 10.1155/2020/8365783] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8]
Abstract
Retinal vessel segmentation (RVS) is a significant source of useful information for the monitoring, identification, early treatment, and surgical planning of ophthalmic disorders. Common disorders such as stroke, diabetic retinopathy (DR), and cardiac disease often change the normal structure of the retinal vascular network. Much research has been devoted to building automatic RVS systems, but the problem remains open. In this article, a framework is proposed for RVS with fast execution and competitive results. An initial binary image is obtained by applying MISODATA to the preprocessed image. For vessel structure enhancement, B-COSFIRE filters are applied, followed by thresholding, to obtain a second binary image. These two binary images are combined by a logical AND operation. The result is then fused with the thresholded B-COSFIRE-enhanced image to obtain the vessel location map (VLM). The methodology is verified on four publicly accessible benchmark datasets: DRIVE, STARE, HRF, and CHASE_DB1. The obtained results are compared with existing competing methods.
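The thresholding step can be illustrated with the classic (unmodified) ISODATA rule; the paper's MISODATA variant and the B-COSFIRE filters themselves are more involved, so the `enhanced` input below is only a placeholder for a filter response:

```python
import numpy as np

def isodata_threshold(image, tol=1e-4, max_iter=200):
    """Plain ISODATA: move t to the midpoint of the two class means until stable.
    (The paper uses a modified variant, MISODATA; this is the textbook rule.)"""
    t = float(image.mean())
    for _ in range(max_iter):
        lo, hi = image[image <= t], image[image > t]
        if lo.size == 0 or hi.size == 0:
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

def combine_binary_maps(preprocessed, enhanced):
    """AND-combine the two thresholded binary images, mirroring the framework's
    combination step; `enhanced` stands in for a B-COSFIRE response."""
    b1 = preprocessed > isodata_threshold(preprocessed)
    b2 = enhanced > isodata_threshold(enhanced)
    return b1 & b2
```

The AND keeps only pixels both branches agree on, which is why the paper then fuses the result back with the thresholded enhanced image to recover vessels one branch missed.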
30
Yang G, Lv T, Shen Y, Li S, Yang J, Chen Y, Shu H, Luo L, Coatrieux JL. Vessel Structure Extraction using Constrained Minimal Path Propagation. Artif Intell Med 2020; 105:101846. [PMID: 32505425] [DOI: 10.1016/j.artmed.2020.101846] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2]
Abstract
The minimal path method is widely recognized as an efficient tool for extracting vascular structures in medical imaging. In a previous paper, a method termed minimal path propagation with backtracking (MPP-BT) was derived to deal with curve-like structures such as vessel centerlines. A robust approach termed constrained minimal path propagation (CMPP) is proposed here to extend this work. The proposed method uses a second minimal path propagation procedure to extract the complete vessel lumen after the centerlines have been found. Moreover, a process named local MPP-BT is applied to handle missing structures caused by so-called closed-loop problems. The approach is fast and unsupervised, requiring only one roughly placed start point to obtain the entire vascular structure. A variety of datasets, including 2D cardiac angiography, 2D retinal images, and 3D kidney CT angiography, are used for validation. A quantitative evaluation, together with a comparison to recently reported methods, is performed on retinal images for which a ground truth is available. The proposed method achieves specificity (Sp) and sensitivity (Se) values of 0.9750 and 0.6591, respectively. The evaluation is also extended to 3D synthetic vascular datasets, where both Sp and Se exceed 0.99. Parameter settings and computational cost are analyzed in the paper.
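The primitive underlying MPP-BT is minimal-path extraction on a cost image (low cost inside vessels), with the path recovered by backtracking. A bare-bones Dijkstra version on a 4-connected grid conveys the idea; the propagation ordering, backtracking heuristics, and lumen extraction of CMPP go well beyond this sketch:

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    """Dijkstra on a 4-connected grid: accumulated cost is the path 'action';
    backtracking through the predecessor map recovers the centreline."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    # Backtrack from the end point to the start point.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With a vesselness-derived cost (e.g. the reciprocal of a filter response), the cheapest path naturally snaps to the vessel centreline between two seed points.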
Affiliation(s)
- Guanyu Yang
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
- Tianling Lv
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
- Yunpeng Shen
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China
- Shuo Li
- Department of Medical Imaging, Western University, London, ON, Canada; Digital Image Group of London, London, ON, Canada
- Jian Yang
- Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, China
- Yang Chen
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
- Huazhong Shu
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
- Limin Luo
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
- Jean-Louis Coatrieux
- Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France
31
NFN+: A novel network followed network for retinal vessel segmentation. Neural Netw 2020; 126:153-162. [DOI: 10.1016/j.neunet.2020.02.018] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8]
32
Ding L, Bawany MH, Kuriyan AE, Ramchandran RS, Wykoff CC, Sharma G. A Novel Deep Learning Pipeline for Retinal Vessel Detection in Fluorescein Angiography. IEEE Trans Image Process 2020; 29. [PMID: 32396087] [PMCID: PMC7648732] [DOI: 10.1109/tip.2020.2991530] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8]
Abstract
While recent advances in deep learning have significantly advanced the state of the art for vessel detection in color fundus (CF) images, success in detecting vessels in fluorescein angiography (FA) has been stymied by the lack of labeled ground-truth datasets. We propose a novel pipeline to detect retinal vessels in FA images using deep neural networks (DNNs) that reduces the effort required to generate labeled ground-truth data by combining two key components: cross-modality transfer and human-in-the-loop learning. The cross-modality transfer exploits concurrently captured CF and FA fundus images. Binary vessel maps are first detected from CF images with a pre-trained neural network and then geometrically registered with, and transferred to, the FA images via robust parametric chamfer alignment against a preliminary FA vessel detection obtained with an unsupervised technique. Using the transferred vessels as initial ground-truth labels for deep learning, the human-in-the-loop approach progressively improves the quality of the labeling by iterating between deep learning and labeling. The approach significantly reduces manual labeling effort while increasing annotator engagement. We highlight several important considerations for the proposed methodology and validate the performance on three datasets. Experimental results demonstrate that the proposed pipeline significantly reduces annotation effort and that the resulting deep learning methods outperform prior FA vessel detection methods by a significant margin. A new public dataset, RECOVERY-FA19, is introduced that includes high-resolution ultra-widefield images and accurately labeled ground-truth binary vessel maps.
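Chamfer alignment scores a candidate transform by the mean distance from each transferred CF vessel pixel to its nearest FA vessel pixel. A toy translation-only version conveys the scoring; the paper optimises a robust parametric transform, not a grid search, and would use a distance transform rather than brute force:

```python
import numpy as np

def chamfer_score(src_pts, dst_pts):
    """Mean nearest-neighbour distance from src points (transferred vessels)
    to dst points (preliminary FA detection). Brute-force pairwise distances."""
    d = np.linalg.norm(src_pts[:, None, :] - dst_pts[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def best_translation(src_pts, dst_pts, candidates):
    """Pick the (dy, dx) shift minimising the chamfer score - a translation-only
    stand-in for robust parametric chamfer alignment."""
    return min(candidates,
               key=lambda s: chamfer_score(src_pts + np.asarray(s, float), dst_pts))
```

A perfectly aligned transform drives the score to zero; in practice a robust loss caps the contribution of outlier pixels so spurious detections do not dominate.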
33
Liu B, Zhang X, Yang L, Zhang J. Three-dimensional organ extraction method for color volume image based on the closed-form solution strategy. Quant Imaging Med Surg 2020; 10:862-870. [PMID: 32355650] [DOI: 10.21037/qims.2020.03.21] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2]
Abstract
With the rapid development of computer technology, surgical training and the digitalized teaching of human body morphology are gaining prominence in medical education. Accurate, true organ models are essential digital material for these computer-assisted systems. However, no direct three-dimensional (3D) true organ model acquisition method currently exists, so direct extraction of the organ models of interest from the existing Visible Human Project (VHP) image sets is urgently needed. In this paper, a closed-form solution-based volume matting method is proposed. Using a small number of foreground and background scribbles, target 3D regions can be extracted by computing the closed-form solution. An upper-triangular storage strategy and the preconditioned conjugate-gradient (PCG) method further promote robustness. Four image datasets (two male and two female Visible Human sets) from the United States National Library of Medicine, covering brain, eye, lung, heart, liver, kidney, spine, arm, vastus, and foot slices, were used to extract 3D volume organ models. The experimental results show that the extracted 3D organs are acceptable and satisfactory. This method may provide technical support for medical and other scientific research fields.
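The closed-form strategy reduces scribble-guided extraction to a single sparse linear system. As a much-simplified analogue (harmonic interpolation on a pixel grid with a dense solver, rather than the image-dependent matting Laplacian with PCG), with hypothetical scribble coordinates:

```python
import numpy as np

def propagate_scribbles(shape, fg, bg):
    """Fix scribbled pixels (1 = foreground, 0 = background), require every
    other pixel to equal the mean of its 4-neighbours (discrete Laplace
    equation), and solve the resulting linear system in one shot."""
    h, w = shape
    n = h * w
    idx = lambda y, x: y * w + x
    A = np.zeros((n, n))
    b = np.zeros(n)
    fixed = {**{p: 1.0 for p in fg}, **{p: 0.0 for p in bg}}
    for y in range(h):
        for x in range(w):
            i = idx(y, x)
            if (y, x) in fixed:        # Dirichlet condition at scribbles
                A[i, i] = 1.0
                b[i] = fixed[(y, x)]
                continue
            nbrs = [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            A[i, i] = len(nbrs)        # degree * value - sum(neighbours) = 0
            for ny, nx in nbrs:
                A[i, idx(ny, nx)] = -1.0
    return np.linalg.solve(A, b).reshape(h, w)
```

Thresholding the solved alpha map at 0.5 yields the extracted region; at VHP resolution the matrix is far too large for a dense solve, which is exactly why the paper resorts to upper-triangular storage and PCG.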
Affiliation(s)
- Bin Liu
- International School of Information Science & Engineering (DUT-RUISE), Dalian University of Technology, Dalian 116024, China; Key Lab of Ubiquitous Network and Service Software of Liaoning Province, Dalian University of Technology, Dalian 116024, China
- Xiaohui Zhang
- International School of Information Science & Engineering (DUT-RUISE), Dalian University of Technology, Dalian 116024, China
- Liang Yang
- The Second Hospital of Dalian Medical University, Dalian Medical University, Dalian 116044, China
- Jianxin Zhang
- Key Lab of Advanced Design and Intelligent Computing, Ministry of Education, Dalian University, Dalian 116622, China; School of Computer Science and Engineering, Dalian Minzu University, Dalian 116600, China
34
Cherukuri V, G VKB, Bala R, Monga V. Deep Retinal Image Segmentation with Regularization Under Geometric Priors. IEEE Trans Image Process 2019; 29:2552-2567. [PMID: 31613766] [DOI: 10.1109/tip.2019.2946078] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5]
Abstract
Vessel segmentation of retinal images is a key diagnostic capability in ophthalmology. The problem faces several challenges, including low contrast, variable vessel size and thickness, and the presence of interfering pathology such as microaneurysms and hemorrhages. Early approaches employed hand-crafted filters to capture vessel structures, followed by morphological post-processing. More recently, deep learning techniques have been employed with significantly enhanced segmentation accuracy. We propose a novel domain-enriched deep network that consists of two components: 1) a representation network that learns geometric features specific to retinal images, and 2) a custom-designed, computationally efficient residual task network that uses the features from the representation layer to perform pixel-level segmentation. The representation and task networks are jointly learned for any given training set. To obtain physically meaningful and practically effective representation filters, we propose two new constraints inspired by the expected prior structure of these filters: 1) an orientation constraint that promotes geometric diversity of curvilinear features, and 2) a data-adaptive noise regularizer that penalizes false positives. Multi-scale extensions are developed to enable accurate detection of thin vessels. Experiments on three challenging benchmark databases under a variety of training scenarios show that the proposed prior-guided deep network outperforms state-of-the-art alternatives, as measured by common evaluation metrics, while being more economical in network size and inference time.
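The orientation constraint can be pictured as penalising overlap between the representation filters' preferred directions. A toy version with hypothetical Gaussian line filters (not the paper's learned filters or its exact regulariser) shows the intended effect: the penalty shrinks as orientations spread out:

```python
import numpy as np

def line_filter(theta, size=9, width=1.0):
    """Unit-norm, zero-mean filter responding to a line through the centre at
    angle theta (a hand-built stand-in for a learned curvilinear filter)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    # Signed distance of each pixel to the line with direction (cos t, sin t).
    dist = x * np.sin(theta) - y * np.cos(theta)
    f = np.exp(-dist**2 / (2 * width**2))
    f -= f.mean()                     # zero mean, like a ridge detector
    return f / np.linalg.norm(f)

def diversity_penalty(thetas):
    """Sum of squared inner products between distinct filters: small when the
    orientations are geometrically diverse, maximal when they coincide."""
    filters = [line_filter(t).ravel() for t in thetas]
    return sum((filters[i] @ filters[j]) ** 2
               for i in range(len(filters))
               for j in range(i + 1, len(filters)))
```

Adding such a term to the training loss pushes the filter bank to cover many orientations instead of collapsing onto one dominant direction, which is the stated motivation for the constraint.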
35
Vessel-Net: Retinal Vessel Segmentation Under Multi-path Supervision. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32239-7_30] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0]
36
Retinal Blood Vessel Segmentation: A Semi-supervised Approach. Pattern Recognition and Image Analysis 2019. [DOI: 10.1007/978-3-030-31321-0_9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
37
A Semi-supervised Approach to Segment Retinal Blood Vessels in Color Fundus Photographs. Artif Intell Med 2019. [DOI: 10.1007/978-3-030-21642-9_44] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]