1
Guo X, Hu J, Lu T, Li G, Xiao R. A novel vessel enhancement method based on Hessian matrix eigenvalues using multilayer perceptron. Biomed Mater Eng 2025;36:83-97. PMID: 39973240. DOI: 10.1177/09592989241296431.
Abstract
BACKGROUND Vessel segmentation is a critical aspect of medical image processing, often involving vessel enhancement as a preprocessing step. Existing vessel enhancement methods based on the eigenvalues of the Hessian matrix face challenges such as inconsistent parameter settings and suboptimal enhancement across different datasets. OBJECTIVE This paper introduces a novel vessel enhancement algorithm that overcomes the limitations of traditional methods by using a multilayer perceptron to fit a vessel enhancement filter function from the eigenvalues of the Hessian matrix. The primary goal is to simplify parameter tuning while improving the effectiveness and generalizability of vessel enhancement. METHODS The proposed algorithm uses the eigenvalues of the Hessian matrix as input for training the multilayer perceptron-based vessel enhancement filter function. The diameter of the largest blood vessel in the dataset is the only parameter to be set. RESULTS Experiments were conducted on the public DRIVE, STARE, and IRCAD datasets. In addition, methods for obtaining optimal parameters for the traditional Frangi and Jerman filters are introduced and quantitatively compared with the novel approach. Performance metrics such as AUROC, AUPRC, and DSC show that the proposed algorithm outperforms traditional filters in enhancing vessel features. CONCLUSION The findings highlight the superiority of the proposed vessel enhancement algorithm over traditional methods. By simplifying parameter settings and improving enhancement quality, the algorithm offers a promising solution for vessel enhancement in medical image analysis applications.
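For context, the hand-crafted Hessian-eigenvalue filters (Frangi, Jerman) that this paper's MLP replaces all follow the same pattern: compute the Hessian at a given scale, take its eigenvalues, and map them to a vesselness score. A minimal single-scale 2D Frangi-style sketch is shown below; this is the classical baseline only, not the paper's method, and the parameter defaults (`beta`, `c`) are the conventional Frangi values, not values from this paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi_vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
    """Classical single-scale 2D Frangi vesselness from Hessian eigenvalues.

    Illustrative baseline only; the paper above replaces the hand-crafted
    mapping from eigenvalues to vesselness with a trained MLP.
    """
    smoothed = gaussian_filter(np.asarray(image, dtype=float), sigma)

    # Hessian entries via finite differences of the Gaussian-smoothed image.
    gy, gx = np.gradient(smoothed)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)

    # Eigenvalues of the 2x2 symmetric Hessian, sorted so |l1| <= |l2|.
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    l1, l2 = tr / 2.0 - disc, tr / 2.0 + disc
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)

    # Frangi measures: blobness Rb and second-order structure strength S.
    rb = np.abs(l1) / (np.abs(l2) + 1e-10)
    s = np.sqrt(l1 ** 2 + l2 ** 2)
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    # Bright vessels on a dark background require l2 < 0.
    return np.where(l2 > 0, 0.0, v)
```

Multi-scale versions evaluate this over a range of `sigma` values and keep the per-pixel maximum; the paper's "largest vessel diameter" parameter plays the role of the upper end of that scale range.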
Affiliation(s)
- Xiaoyu Guo: School of Computer Science and Technology, Zhoukou Normal University, Zhoukou, China
- Jiajun Hu: The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Tong Lu: Visual 3D Medical Science and Technology Development Co. Ltd, Beijing, China
- Guoyin Li: School of Computer Science and Technology, Zhoukou Normal University, Zhoukou, China
- Ruoxiu Xiao: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China; Changzhou Weizhuo Shengda Medical Technology Development Co. Ltd, Changzhou, China
2
Awasthi A, Le N, Deng Z, Agrawal R, Wu CC, Van Nguyen H. Bridging human and machine intelligence: Reverse-engineering radiologist intentions for clinical trust and adoption. Comput Struct Biotechnol J 2024;24:711-723. PMID: 39660015. PMCID: PMC11629193. DOI: 10.1016/j.csbj.2024.11.012.
Abstract
In the rapidly evolving landscape of medical imaging, the integration of artificial intelligence (AI) with clinical expertise offers unprecedented opportunities to enhance diagnostic precision and accuracy. Yet the "black box" nature of AI models often limits their integration into clinical practice, where transparency and interpretability are essential. This paper presents a novel system that leverages a Large Multimodal Model (LMM) to bridge the gap between AI predictions and the cognitive processes of radiologists. The system consists of two core modules: Temporally Grounded Intention Detection (TGID) and Region Extraction (RE). The TGID module predicts the radiologist's intentions by analyzing eye-gaze fixation heatmap videos and the corresponding radiology reports, and the RE module extracts regions of interest that align with these intentions, mirroring the radiologist's diagnostic focus. This approach introduces a new task, radiologist intention detection, and is the first application of Dense Video Captioning (DVC) in the medical domain. By making AI systems more interpretable and aligned with radiologists' cognitive processes, the proposed system aims to enhance trust, improve diagnostic accuracy, and support medical education. It also holds potential for automated error correction, guiding junior radiologists, and fostering more effective training and feedback mechanisms. This work sets a precedent for future research in AI-driven healthcare, offering a pathway towards transparent, trustworthy, and human-centered AI systems. We evaluated the model using natural language generation (NLG), time-related, and vision-based metrics, demonstrating superior performance in generating temporally grounded intentions on the REFLACX and EGD-CXR datasets. The model also showed strong predictive accuracy in overlap scores for medical abnormalities and effective region extraction with high Intersection over Union (IoU), especially in complex cases such as cardiomegaly and edema. These results highlight the system's potential to enhance diagnostic accuracy and support continuous learning in radiology.
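The IoU score this entry reports for region extraction is the standard overlap ratio between binary masks. A generic sketch (not the authors' evaluation code):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for binary masks (nonzero = region of interest)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return np.logical_and(pred, target).sum() / union
```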
Affiliation(s)
- Akash Awasthi: Department of Electrical and Computer Engineering, University of Houston, United States
- Ngan Le: Department of Computer Science & Computer Engineering, University of Arkansas, United States
- Zhigang Deng: Department of Computer Science, University of Houston, Houston, TX, United States
- Rishi Agrawal: Department of Thoracic Imaging, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Carol C. Wu: Department of Thoracic Imaging, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Hien Van Nguyen: Department of Electrical and Computer Engineering, University of Houston, United States
3
Mou L, Lin J, Zhao Y, Liu Y, Ma S, Zhang J, Lv W, Zhou T, Liu J, Frangi AF, Zhao Y. COSTA: A multi-center TOF-MRA dataset and a style self-consistency network for cerebrovascular segmentation. IEEE Trans Med Imaging 2024;43:4442-4456. PMID: 39012728. DOI: 10.1109/tmi.2024.3424976.
Abstract
Time-of-flight magnetic resonance angiography (TOF-MRA) is the least invasive, ionizing-radiation-free approach for cerebrovascular imaging, but variations in imaging artifacts across clinical centers and imaging vendors result in inter-site and inter-vendor heterogeneity, making accurate and robust cerebrovascular segmentation challenging. Moreover, the limited availability and quality of annotated data pose further challenges for segmentation methods to generalize well to unseen datasets. In this paper, we construct the largest and most diverse TOF-MRA dataset (COSTA), from 8 individual imaging centers, with all volumes manually annotated. We then propose a novel network for cerebrovascular segmentation, named CESAR, designed to tackle feature-granularity and image-style heterogeneity issues. Specifically, a coarse-to-fine architecture refines cerebrovascular segmentation in an iterative manner; an automatic feature selection module selectively fuses global long-range dependencies and local contextual information of cerebrovascular structures; and a style self-consistency loss explicitly aligns the diverse styles of TOF-MRA images to a standardized one. Extensive experimental results on the COSTA dataset demonstrate the effectiveness of our CESAR network against state-of-the-art methods. We have made 6 subsets of COSTA, together with the source code, available online to promote relevant research in the community.
4
Yagis E, Aslani S, Jain Y, Zhou Y, Rahmani S, Brunet J, Bellier A, Werlein C, Ackermann M, Jonigk D, Tafforeau P, Lee PD, Walsh CL. Deep learning for 3D vascular segmentation in hierarchical phase contrast tomography: a case study on kidney. Sci Rep 2024;14:27258. PMID: 39516256. PMCID: PMC11549215. DOI: 10.1038/s41598-024-77582-5.
Abstract
Automated blood vessel segmentation is critical for biomedical image analysis, as vessel morphology changes are associated with numerous pathologies. Still, precise segmentation is difficult due to the complexity of vascular structures, anatomical variations across patients, the scarcity of annotated public datasets, and image quality. Our goal is to provide a foundation on the topic and identify a robust baseline model for vascular segmentation using a new imaging modality, Hierarchical Phase-Contrast Tomography (HiP-CT). We begin with an extensive review of current machine-learning approaches for vascular segmentation across various organs. Our work introduces a meticulously curated training dataset, verified by double annotators, consisting of vascular data from three kidneys imaged with HiP-CT as part of the Human Organ Atlas Project. HiP-CT, pioneered at the European Synchrotron Radiation Facility in 2020, revolutionizes 3D organ imaging by offering a resolution of around 20 μm/voxel and enabling highly detailed localised zooms of up to 1-2 μm/voxel without physical sectioning. We leverage the nnU-Net framework to evaluate model performance on this high-resolution dataset, using both known and novel samples, and implement metrics tailored for vascular structures. Our comprehensive review and empirical analysis on HiP-CT data set a new standard for evaluating machine learning models in high-resolution organ imaging. Our three experiments yielded Dice similarity coefficient (DSC) scores of 0.9523, 0.9410, and 0.8585, respectively. Nevertheless, DSC primarily assesses voxel-to-voxel concordance, overlooking several crucial characteristics of vessels, and should not be the sole metric for judging the performance of vascular segmentation. Our results show that while segmentations yielded reasonably high scores, such as centerline DSC ranging from 0.82 to 0.88, certain errors persisted. Specifically, large vessels that collapsed due to the lack of hydrostatic pressure (HiP-CT is an ex vivo technique) were segmented poorly. Moreover, decreased connectivity in finer vessels and higher segmentation errors at vessel boundaries were observed. Such errors, particularly in significant vessels, obstruct understanding of the structures by interrupting vascular tree connectivity. Our study establishes a benchmark across various evaluation metrics for vascular segmentation of HiP-CT imaging data, an imaging technology with the potential to substantively shift our understanding of human vascular networks.
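The voxel-wise DSC this entry cautions about is the standard overlap measure 2|A∩B| / (|A| + |B|). A minimal generic sketch (not the authors' evaluation pipeline; their centerline DSC is a different, skeleton-based variant):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps makes two empty masks score 1.0 instead of dividing by zero.
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

DSC relates monotonically to IoU (DSC = 2·IoU / (1 + IoU)), so both share the weakness the abstract notes: neither penalizes broken vessel connectivity.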
Affiliation(s)
- Ekin Yagis: Department of Mechanical Engineering, University College London, London, UK
- Shahab Aslani: Department of Mechanical Engineering, University College London, London, UK; Centre for Medical Image Computing, University College London, London, UK
- Yashvardhan Jain: Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, USA
- Yang Zhou: Department of Mechanical Engineering, University College London, London, UK
- Shahrokh Rahmani: Department of Mechanical Engineering, University College London, London, UK; National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, UK
- Joseph Brunet: Department of Mechanical Engineering, University College London, London, UK; European Synchrotron Radiation Facility, Grenoble, France
- Christopher Werlein: Institute of Pathology, Hannover Medical School, Carl-Neuberg-Straße 1, 30625, Hannover, Germany
- Maximilian Ackermann: Institute of Functional and Clinical Anatomy, University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
- Danny Jonigk: Institute of Pathology, RWTH Aachen University, Pauwelsstrasse 30, 52074, Aachen, Germany
- Paul Tafforeau: European Synchrotron Radiation Facility, Grenoble, France
- Peter D Lee: Department of Mechanical Engineering, University College London, London, UK
- Claire L Walsh: Department of Mechanical Engineering, University College London, London, UK
5
Yang L, Yao S, Chen P, Shen M, Fu S, Xing J, Xue Y, Chen X, Wen X, Zhao Y, Li W, Ma H, Li S, Tuchin VV, Zhao Q. Unpaired fundus image enhancement based on constrained generative adversarial networks. J Biophotonics 2024:e202400168. PMID: 38962821. DOI: 10.1002/jbio.202400168.
Abstract
Fundus photography (FP) is a crucial technique for diagnosing the progression of ocular and systemic diseases in clinical studies, with wide applications in early clinical screening and diagnosis. However, nonuniform illumination and imbalanced intensity arising from various causes often severely degrade the quality of fundus images, which brings challenges for automated screening, analysis, and diagnosis of diseases. To resolve this problem, we developed strongly constrained generative adversarial networks (SCGAN). The results demonstrate that SCGAN significantly enhanced the quality of various datasets while more effectively retaining tissue and vascular information under various experimental conditions. Furthermore, the clinical effectiveness and robustness of the model were validated by its improved performance in vascular segmentation as well as disease diagnosis. Our study provides a new comprehensive approach for FP and has the potential to advance artificial intelligence-assisted ophthalmic examination.
Affiliation(s)
- Luyao Yang: School of Pen-Tung Sah Institute of Micro-Nano Science and Technology, State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory, School of Public Health, Xiamen University, Xiamen, China
- Shenglan Yao: School of Pen-Tung Sah Institute of Micro-Nano Science and Technology, State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory, School of Public Health, Xiamen University, Xiamen, China
- Pengyu Chen: School of Pen-Tung Sah Institute of Micro-Nano Science and Technology, State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory, School of Public Health, Xiamen University, Xiamen, China
- Mei Shen: Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Suzhong Fu: School of Pen-Tung Sah Institute of Micro-Nano Science and Technology, State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory, School of Public Health, Xiamen University, Xiamen, China
- Jiwei Xing: School of Pen-Tung Sah Institute of Micro-Nano Science and Technology, State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory, School of Public Health, Xiamen University, Xiamen, China
- Yuxin Xue: School of Pen-Tung Sah Institute of Micro-Nano Science and Technology, State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory, School of Public Health, Xiamen University, Xiamen, China
- Xin Chen: Department of Orthopedics and Traumatology of Traditional Chinese Medicine, Xiamen Third Hospital, Xiamen, China
- Xiaofei Wen: Department of Interventional Radiology, The First Affiliated Hospital of Xiamen University, Xiamen, China
- Yang Zhao: School of Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen, China
- Wei Li: Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Heng Ma: Department of Physiology and Pathophysiology, School of Basic Medical Sciences, Fourth Military Medical University, Xian, China
- Shiying Li: Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Valery V Tuchin: Institute of Physics and Science Medical Center, Saratov State University, Saratov, Russia
- Qingliang Zhao: School of Pen-Tung Sah Institute of Micro-Nano Science and Technology, State Key Laboratory of Vaccines for Infectious Diseases, Xiang An Biomedicine Laboratory, School of Public Health, Xiamen University, Xiamen, China; Shenzhen Research Institute of Xiamen University, Shenzhen, China
6
Wang Y, Li H. A novel single-sample retinal vessel segmentation method based on grey relational analysis. Sensors (Basel) 2024;24:4326. PMID: 39001106. PMCID: PMC11244310. DOI: 10.3390/s24134326.
Abstract
Accurate segmentation of retinal vessels is of great significance for computer-aided diagnosis and treatment of many diseases. Because retinal vessel samples are limited and labeled samples are scarce, and because grey theory excels at problems of "few data, poor information", this paper proposes a novel grey relational-based method for retinal vessel segmentation. First, a noise-adaptive discrimination filtering algorithm based on grey relational analysis (NADF-GRA) is designed to enhance the image. Second, a threshold segmentation model based on grey relational analysis (TS-GRA) is designed to segment the enhanced vessel image. Finally, a post-processing stage involving hole filling and removal of isolated pixels is applied to obtain the final segmentation output. The performance of the proposed method is evaluated with multiple metrics on the publicly available DRIVE, STARE, and HRF retinal datasets. Experimental analysis showed an average accuracy of 96.03% and specificity of 98.51% on the DRIVE dataset, and 95.46% and 97.85% on the STARE dataset. Precision, F1-score, and Jaccard index on the HRF dataset all demonstrated high performance. The proposed method is superior to current mainstream methods.
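The grey relational analysis underlying NADF-GRA and TS-GRA rests on Deng's relational coefficient, (Δmin + ρ·Δmax) / (Δi(k) + ρ·Δmax), averaged over positions k into a grade per sequence. The sketch below illustrates that generic formula only; it is not the paper's filtering or thresholding model, and the default distinguishing factor ρ = 0.5 is the conventional choice:

```python
import numpy as np

def grey_relational_grade(reference, candidates, rho=0.5):
    """Deng's grey relational grade of candidate sequences vs. a reference.

    Generic GRA formula only; NADF-GRA and TS-GRA in the paper are more
    elaborate models built on this idea.
    """
    reference = np.asarray(reference, dtype=float)    # shape (k,)
    candidates = np.asarray(candidates, dtype=float)  # shape (n, k)
    delta = np.abs(candidates - reference)            # pointwise deviations
    dmin, dmax = delta.min(), delta.max()
    if dmax == 0.0:                                   # all sequences identical
        return np.ones(candidates.shape[0])
    # Relational coefficient with distinguishing factor rho in (0, 1].
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)                         # grade per candidate
```

A candidate identical to the reference receives grade 1.0; grades fall toward 0 as the sequences diverge, which is what makes the measure usable for ranking pixel neighborhoods against an ideal pattern.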
Affiliation(s)
- Yating Wang: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Hongjun Li: School of Information Science and Technology, Nantong University, Nantong 226019, China
7
Yang C, Zhang H, Chi D, Li Y, Xiao Q, Bai Y, Li Z, Li H, Li H. Contour attention network for cerebrovascular segmentation from TOF-MRA volumetric images. Med Phys 2024;51:2020-2031. PMID: 37672343. DOI: 10.1002/mp.16720.
Abstract
BACKGROUND Cerebrovascular segmentation is a crucial step in the computer-assisted diagnosis of cerebrovascular pathologies. However, accurate extraction of cerebral vessels from time-of-flight magnetic resonance angiography (TOF-MRA) data is still challenging due to their complex topology and slender shape. PURPOSE Existing deep learning-based approaches pay more attention to the skeleton and ignore the contour, which limits the segmentation performance of the cerebrovascular structure. We aim to weight the contour of brain vessels in shallow features when concatenating them with deep features. This helps to obtain more accurate cerebrovascular details and narrows the semantic gap between multilevel features. METHODS This work proposes a novel framework for priority extraction of contours in cerebrovascular structures. We first design a neighborhood-based algorithm to generate the ground truth of the cerebrovascular contour from the original annotations, which introduces useful shape information for the segmentation network. Moreover, we propose an encoder-dual decoder-based contour attention network (CA-Net), which consists of a dilated asymmetric convolution block (DACB) and a contour attention module (CAM). The ancillary decoder uses the DACB to obtain cerebrovascular contour features under the supervision of contour annotations. The CAM transforms these features into a spatial attention map that increases the weight of contour voxels in the main decoder to better restore vessel contour details. RESULTS CA-Net is thoroughly validated on two publicly available datasets, and the experimental results demonstrate that our network outperforms the competitors for cerebrovascular segmentation, achieving average Dice similarity coefficients (DSC) of 68.15% and 99.92% on natural and synthetic datasets, respectively. Our method segments cerebrovascular structures with better completeness. CONCLUSIONS We propose a new framework comprising contour annotation generation and a cerebrovascular segmentation network that better captures tiny vessels and improves vessel connectivity.
Affiliation(s)
- Chaozhi Yang: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, China
- Dianwei Chi: School of Artificial Intelligence, Yantai Institute of Technology, Yantai, China
- Yachuan Li: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, China
- Qian Xiao: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, China
- Yun Bai: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, China
- Zongmin Li: College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, China; Shengli College of China University of Petroleum, Dongying, China
- Hongyi Li: Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Science, Beijing, China
- Hua Li: Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
8
Zhang C, Zhao M, Xie Y, Ding R, Ma M, Guo K, Jiang H, Xi W, Xia L. TL-MSE2-Net: Transfer learning based nested model for cerebrovascular segmentation with aneurysms. Comput Biol Med 2023;167:107609. PMID: 37883854. DOI: 10.1016/j.compbiomed.2023.107609.
Abstract
Cerebrovascular (i.e., cerebral vessel) segmentation is essential for diagnosing and treating brain diseases. Convolutional neural network models, such as U-Net, are commonly used for this purpose. Unfortunately, such models may not be entirely satisfactory for cerebrovascular segmentation with tumors due to the following issues: (1) the relatively small number of clinical datasets from patients obtained through different modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), leads to inadequate training and a lack of transferability in the modeling; (2) insufficient feature extraction results from limited attention to both convolution sizes and cerebral vessel edges. Inspired by the similar cerebral vessel features shared by normal subjects and patients, we propose a transfer learning strategy based on a pre-trained nested model, called TL-MSE2-Net, which uses one of the publicly available datasets for cerebrovascular segmentation with aneurysms. To address issue (1), our transfer learning strategy leverages a model pre-trained on a large number of datasets from normal subjects, providing a potential solution to the lack of sufficient clinical datasets. To tackle issue (2), we structure the pre-trained model on 3D U-Net, comprising three blocks: ResMul, DeRes, and REAM. The ResMul and DeRes blocks enhance feature extraction by using multiple convolution sizes to capture multiscale features, and the REAM block increases the weight of the voxels on the edges of the given 3D volume. We evaluated the proposed model on one small private clinical dataset and two publicly available datasets. The experimental results demonstrated that our MSE2-Net framework achieved average Dice scores of 70.81% and 89.08% on the two publicly available datasets, outperforming other state-of-the-art methods. Ablation studies were also conducted to validate the effectiveness of each block. The proposed TL-MSE2-Net yielded better results than MSE2-Net on the small private clinical dataset, with increases of 5.52%, 3.37%, 6.71%, and 0.85% in Dice score, sensitivity, Jaccard index, and precision, respectively.
Affiliation(s)
- Chaoran Zhang: Laboratory of Neural Computing and Intelligent Perception (NCIP), Capital Normal University, Beijing, 100048, China
- Ming Zhao: Department of Neurosurgery, First Medical Center, Chinese PLA General Hospital, Beijing, 100853, China
- Yixuan Xie: Laboratory of Neural Computing and Intelligent Perception (NCIP), Capital Normal University, Beijing, 100048, China
- Rui Ding: Laboratory of Neural Computing and Intelligent Perception (NCIP), Capital Normal University, Beijing, 100048, China
- Ming Ma: Department of Computer Science, Winona State University, Winona, MN, 55987, USA
- Kaiwen Guo: Laboratory of Neural Computing and Intelligent Perception (NCIP), Capital Normal University, Beijing, 100048, China
- Hongzhen Jiang: Department of Neurosurgery, First Medical Center, Chinese PLA General Hospital, Beijing, 100853, China
- Wei Xi: Department of Radiology, Fourth Medical Center, Chinese PLA General Hospital, Beijing, 100048, China
- Likun Xia: Laboratory of Neural Computing and Intelligent Perception (NCIP), Capital Normal University, Beijing, 100048, China
9
Lin L, Peng L, He H, Cheng P, Wu J, Wong KKY, Tang X. YoloCurvSeg: You only label one noisy skeleton for vessel-style curvilinear structure segmentation. Med Image Anal 2023;90:102937. PMID: 37672901. DOI: 10.1016/j.media.2023.102937.
Abstract
Weakly-supervised learning (WSL) has been proposed to ease the trade-off between data annotation cost and model performance by employing sparsely-grained (i.e., point-, box-, or scribble-wise) supervision, and has shown promising performance, particularly in image segmentation. However, it remains a very challenging task due to the limited supervision, especially when only a small number of labeled samples are available. Additionally, almost all existing WSL segmentation methods are designed for star-convex structures, which are very different from curvilinear structures such as vessels and nerves. In this paper, we propose a novel sparsely annotated segmentation framework for curvilinear structures, named YoloCurvSeg. An essential component of YoloCurvSeg is image synthesis. Specifically, a background generator delivers image backgrounds that closely match the real distributions by inpainting dilated skeletons. The extracted backgrounds are then combined, via a multilayer patch-wise contrastive learning synthesizer, with randomly emulated curves produced by a Space Colonization Algorithm-based foreground generator. In this way, a synthetic dataset with both images and curve segmentation labels is obtained at the cost of only one or a few noisy skeleton annotations. Finally, a segmenter is trained with the generated dataset and, optionally, an unlabeled dataset. The proposed YoloCurvSeg is evaluated on four publicly available datasets (OCTA500, CORN, DRIVE and CHASEDB1), and the results show that YoloCurvSeg outperforms state-of-the-art WSL segmentation methods by large margins. With only one noisy skeleton annotation (respectively 0.14%, 0.03%, 1.40%, and 0.65% of the full annotation), YoloCurvSeg achieves more than 97% of the fully-supervised performance on each dataset. Code and datasets will be released at https://github.com/llmir/YoloCurvSeg.
Affiliation(s)
- Li Lin: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China; Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China
- Linkai Peng: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China
- Huaqing He: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China; Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China
- Pujin Cheng: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China; Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China
- Jiewei Wu: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China
- Kenneth K Y Wong: Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- Xiaoying Tang: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China; Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China
10
Li P, Qiu Z, Zhan Y, Chen H, Yuan S. Multi-scale bottleneck residual network for retinal vessel segmentation. J Med Syst 2023;47:102. PMID: 37776409. DOI: 10.1007/s10916-023-01992-7.
Abstract
Precise segmentation of retinal vessels is crucial for the prevention and diagnosis of ophthalmic diseases. In recent years, deep learning has shown outstanding performance in retinal vessel segmentation. Many scholars have studied retinal vessel segmentation on color fundus images, but research on Scanning Laser Ophthalmoscopy (SLO) images remains scarce. In addition, existing SLO image segmentation methods still struggle to balance accuracy and model parameters. This paper proposes an SLO image segmentation model based on a lightweight U-Net architecture, called MBRNet, which addresses these problems through a Multi-scale Bottleneck Residual (MBR) module and an attention mechanism. Concretely, the MBR module expands the receptive field of the model at relatively low computational cost and retains more detailed information, while the Attention Gate (AG) module suppresses noise so that the network can concentrate on vascular characteristics. Experimental results on two public SLO datasets demonstrate that, compared with existing methods, MBRNet achieves better segmentation performance with relatively few parameters.
Affiliation(s)
- Peipei Li
- School of Computer Science and Technology, Hainan University, Haikou, 570228, China
- Zhao Qiu
- School of Computer Science and Technology, Hainan University, Haikou, 570228, China
- Yuefu Zhan
- Affiliated Maternal and Child Health Hospital (Children's Hospital) of Hainan Medical University/Hainan Women and Children's Medical Center, Haikou, 570312, China
- Huajing Chen
- Hainan Provincial Public Security Department, Haikou, 570203, China
- Sheng Yuan
- School of Computer Science and Technology, Hainan University, Haikou, 570228, China
11
Huang X, Deng Z, Li D, Yuan X, Fu Y. MISSFormer: An Effective Transformer for 2D Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1484-1494. [PMID: 37015444 DOI: 10.1109/tmi.2022.3230943] [Citation(s) in RCA: 49] [Impact Index Per Article: 24.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Transformer-based methods have recently become popular in vision tasks because of their capability to model global dependencies. However, relying on global dependencies alone limits network performance, owing to the lack of local context and of global-local correlations across multi-scale features. In this paper, we present MISSFormer, a Medical Image Segmentation tranSFormer. MISSFormer is a hierarchical encoder-decoder network with two appealing designs: 1) the feed-forward network in the transformer blocks of the U-shaped encoder-decoder structure is redesigned as ReMix-FFN, which explores global dependencies and local context for better feature discrimination by re-integrating them; 2) a ReMixed Transformer Context Bridge is proposed to extract the correlations of global dependencies and local context in the multi-scale features generated by our hierarchical transformer encoder. MISSFormer shows a solid capacity to capture discriminative dependencies and context in medical image segmentation. Experiments on multi-organ, cardiac, and retinal vessel segmentation tasks demonstrate the superiority, effectiveness, and robustness of MISSFormer. Notably, MISSFormer trained from scratch even outperforms state-of-the-art methods pre-trained on ImageNet, and the core designs can be generalized to other visual segmentation tasks. The code has been released on GitHub: https://github.com/ZhifangDeng/MISSFormer.
12
Sun K, Chen Y, Chao Y, Geng J, Chen Y. A retinal vessel segmentation method based improved U-Net model. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
13
Challoob M, Gao Y, Busch A, Nikzad M. Separable Paravector Orientation Tensors for Enhancing Retinal Vessels. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:880-893. [PMID: 36331638 DOI: 10.1109/tmi.2022.3219436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Robust detection of retinal vessels remains an unsolved research problem, particularly in handling the intrinsic real-world challenges of highly imbalanced contrast between thick and thin vessels, inhomogeneous background regions, uneven illumination, and the complex geometries of crossings and bifurcations. This paper presents a new separable paravector orientation tensor that addresses these difficulties by making the enhancement of retinal vessels dependent on a nonlinear scale representation, invariant to changes in contrast and lighting, responsive to symmetric patterns, and fitted with elliptical cross-sections. The proposed method is built on projecting vessels as a 3D paravector-valued function rotated in an alpha-quarter domain, providing geometrical, structural, symmetric, and energetic features. We introduce an innovative symmetrical inhibitory scheme that incorporates paravector features to produce a set of directional, contrast-independent, elongated-like patterns reconstructing the vessel tree in orientation tensors. By fitting constrained elliptical volumes via eigensystem analysis, the final vessel tree is produced with a strong and uniform response preserving various vessel features. Validation of the proposed method on clinically relevant retinal images, with high-quality results, shows its excellent performance compared with state-of-the-art benchmarks and second human observers.
14
Xu GX, Ren CX. SPNet: A novel deep neural network for retinal vessel segmentation based on shared decoder and pyramid-like loss. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
15
3D vessel-like structure segmentation in medical images by an edge-reinforced network. Med Image Anal 2022; 82:102581. [DOI: 10.1016/j.media.2022.102581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 05/04/2022] [Accepted: 08/11/2022] [Indexed: 11/15/2022]
16
Yu X, Ge C, Aziz MZ, Li M, Shum PP, Liu L, Mo J. CGNet-assisted Automatic Vessel Segmentation for Optical Coherence Tomography Angiography. JOURNAL OF BIOPHOTONICS 2022; 15:e202200067. [PMID: 35704010 DOI: 10.1002/jbio.202200067] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 05/01/2022] [Accepted: 05/24/2022] [Indexed: 06/15/2023]
Abstract
Automatic optical coherence tomography angiography (OCTA) vessel segmentation is of great significance to retinal disease diagnoses. Due to the complex vascular structure, however, various existing factors make the segmentation task challenging. This paper reports a novel end-to-end three-stage channel and position attention (CPA) module integrated graph reasoning convolutional neural network (CGNet) for retinal OCTA vessel segmentation. Specifically, in the coarse stage, both CPA and graph reasoning network (GRN) modules are integrated in between a U-shaped neural network encoder and decoder to acquire vessel confidence maps. After being directed into a fine stage, such confidence maps are concatenated with the original image and the generated fine image map as a 3-channel image to refine retinal micro-vasculatures. Finally, both the fine and refined images are fused at the refining stage as the segmentation results. Experiments with different public datasets are conducted to verify the efficacy of the proposed CGNet. Results show that by employing the end-to-end training scheme and the integrated CPA and GRN modules, CGNet achieves 94.29% and 85.62% in area under the ROC curve (AUC) for the two different datasets, outperforming the state-of-the-art existing methods with both improved operability and reduced complexity in different cases. Code is available at https://github.com/GE-123-cpu/CGnet-for-vessel-segmentation.
Affiliation(s)
- Xiaojun Yu
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Shenzhen Research Institute of Northwestern Polytechnical University, Shenzhen, Guangdong, China
- Chenkun Ge
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Mingshuai Li
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Perry Ping Shum
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Linbo Liu
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
- Jianhua Mo
- School of Electronics and Information Engineering, Soochow University, Suzhou, China
17
Semi-supervised region-connectivity-based cerebrovascular segmentation for time-of-flight magnetic resonance angiography image. Comput Biol Med 2022; 149:105972. [DOI: 10.1016/j.compbiomed.2022.105972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Revised: 07/24/2022] [Accepted: 08/13/2022] [Indexed: 11/18/2022]
18
A Hybrid Fusion Method Combining Spatial Image Filtering with Parallel Channel Network for Retinal Vessel Segmentation. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2022. [DOI: 10.1007/s13369-022-07311-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
19
Guo S. CSGNet: Cascade semantic guided net for retinal vessel segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
20
Generative adversarial network based cerebrovascular segmentation for time-of-flight magnetic resonance angiography image. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.11.075] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
21
Yang D, Zhao H, Han T. Learning feature-rich integrated comprehensive context networks for automated fundus retinal vessel analysis. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
22
Li Y, Ren T, Li J, Li X, Li A. Multi-perspective label based deep learning framework for cerebral vasculature segmentation in whole-brain fluorescence images. BIOMEDICAL OPTICS EXPRESS 2022; 13:3657-3671. [PMID: 35781963 PMCID: PMC9208593 DOI: 10.1364/boe.458111] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 04/23/2022] [Accepted: 05/22/2022] [Indexed: 06/15/2023]
Abstract
The popularity of fluorescent labelling and mesoscopic optical imaging techniques enables the acquisition of whole mammalian brain vasculature images at capillary resolution. Segmentation of the cerebrovascular network is essential for analyzing the cerebrovascular structure and revealing the pathogenesis of brain diseases. Existing deep learning methods use a single type of annotated label with the same pixel weight to train the neural network and segment vessels. Due to the variation in the shape, density, and brightness of vessels in whole-brain fluorescence images, it is difficult for a neural network trained with a single type of label to segment all vessels accurately. To address this problem, we propose a deep learning cerebral vasculature segmentation framework based on multi-perspective labels. First, the pixels in the central region of thick vessels and the skeleton region of vessels are extracted separately using morphological operations based on the binary annotated labels, generating two different labels. Then, we design a three-stage 3D convolutional neural network containing three sub-networks, namely a thick-vessel enhancement network, a vessel skeleton enhancement network, and a multi-channel fusion segmentation network. The first two sub-networks are trained by the two labels generated in the previous step, respectively, and pre-segment the vessels. The third sub-network is responsible for fusing the pre-segmented results to precisely segment the vessels. We validated our method on two mouse cerebral vascular datasets generated by different fluorescence imaging modalities. The results show that our method outperforms the state-of-the-art methods and can be applied to segment the vasculature in large-scale volumes.
Affiliation(s)
- Yuxin Li
- Shaanxi Key Laboratory of Network Computing and Security Technology, School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, 710048, China
- Tong Ren
- Shaanxi Key Laboratory of Network Computing and Security Technology, School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, 710048, China
- Junhuai Li
- Shaanxi Key Laboratory of Network Computing and Security Technology, School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, 710048, China
- Xiangning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, China
- HUST-Suzhou Institute for Brainsmatics, Suzhou, 215123, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, 430074, China
- HUST-Suzhou Institute for Brainsmatics, Suzhou, 215123, China
23
Shen X, Xu J, Jia H, Fan P, Dong F, Yu B, Ren S. Self-attentional microvessel segmentation via squeeze-excitation transformer Unet. Comput Med Imaging Graph 2022; 97:102055. [DOI: 10.1016/j.compmedimag.2022.102055] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 02/17/2022] [Accepted: 03/12/2022] [Indexed: 11/27/2022]
24
Lin G, Bai H, Zhao J, Yun Z, Chen Y, Pang S, Feng Q. Improving sensitivity and connectivity of retinal vessel segmentation via error discrimination network. Med Phys 2022; 49:4494-4507. [PMID: 35338781 DOI: 10.1002/mp.15627] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Revised: 03/04/2022] [Accepted: 03/08/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Automated retinal vessel segmentation is crucial to the early diagnosis and treatment of ophthalmological diseases. Many deep learning-based methods have shown exceptional success in this task. However, current approaches are still inadequate in challenging vessels (e.g., thin vessels) and rarely focus on the connectivity of vessel segmentation. METHODS We propose using an error discrimination network (D) to distinguish whether the vessel pixel predictions of the segmentation network (S) are correct, and S is trained to obtain fewer error predictions of D. Our method is similar to, but not the same as, the generative adversarial network (GAN). Three types of vessel samples and corresponding error masks are used to train D, as follows: (1) vessel ground truth; (2) vessel segmented by S; (3) artificial thin vessel error samples that further improve the sensitivity of D to wrong small vessels. As an auxiliary loss function of S, D strengthens the supervision of difficult vessels. Optionally, we can use the errors predicted by D to correct the segmentation result of S. RESULTS Compared with state-of-the-art methods, our method achieves the highest scores in sensitivity (86.19%, 86.26%, and 86.53%) and G-Mean (91.94%, 91.30%, and 92.76%) on three public datasets, namely, STARE, DRIVE, and HRF. Our method also maintains a competitive level in other metrics. On the STARE dataset, the F1-score and AUC of our method rank second and first, respectively, reaching 84.51% and 98.97%. The top scores of the three topology-relevant metrics (Conn, Inf, and Cor) demonstrate that the vessels extracted by our method have excellent connectivity. We also validate the effectiveness of error discrimination supervision and artificial error sample training through ablation experiments. CONCLUSIONS The proposed method provides an accurate and robust solution for difficult vessel segmentation.
Affiliation(s)
- Guoye Lin
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Hanhua Bai
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Jie Zhao
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- School of Medical Information Engineering, Guangdong Pharmaceutical University, Guangzhou, Guangdong, China
- Zhaoqiang Yun
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Yangfan Chen
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Shumao Pang
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Qianjin Feng
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
25
Fundus Retinal Vessels Image Segmentation Method Based on Improved U-Net. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.03.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
26
MSC-Net: Multitask Learning Network for Retinal Vessel Segmentation and Centerline Extraction. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app12010403] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Automatic segmentation and centerline extraction of blood vessels from retinal fundus images is an essential step to measure the state of retinal blood vessels and achieve the goal of auxiliary diagnosis. Combining the information of blood vessel segments and centerline can help improve the continuity of results and performance. However, previous studies have usually treated these two tasks as separate research topics. Therefore, we propose a novel multitask learning network (MSC-Net) for retinal vessel segmentation and centerline extraction. The network uses a multibranch design to combine information between two tasks. Channel and atrous spatial fusion block (CAS-FB) is designed to fuse and correct the features of different branches and different scales. The clDice loss function is also used to constrain the topological continuity of blood vessel segments and centerline. Experimental results on different fundus blood vessel datasets (DRIVE, STARE, and CHASE) show that our method can obtain better segmentation and centerline extraction results at different scales and has better topological continuity than state-of-the-art methods.
27
Lin J, Mou L, Yan Q, Ma S, Yue X, Zhou S, Lin Z, Zhang J, Liu J, Zhao Y. Automated Segmentation of Trigeminal Nerve and Cerebrovasculature in MR-Angiography Images by Deep Learning. Front Neurosci 2021; 15:744967. [PMID: 34955711 PMCID: PMC8702731 DOI: 10.3389/fnins.2021.744967] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Accepted: 11/17/2021] [Indexed: 11/29/2022] Open
Abstract
Trigeminal neuralgia, a rare chronic pain disorder, is characterized by paroxysmal and severe pain in the distribution of the trigeminal nerve. It is generally accepted that compression of the trigeminal root entry zone by vascular structures is the major cause of primary trigeminal neuralgia, and vascular decompression is the preferred choice in neurosurgical treatment. Therefore, accurate preoperative modeling, segmentation, and visualization of the trigeminal nerve and its surrounding cerebrovasculature are important for surgical planning. In this paper, we propose an automated method to segment the trigeminal nerve and its surrounding cerebrovasculature in the root entry zone, and to further reconstruct and visualize these anatomical structures in three-dimensional (3D) Magnetic Resonance Angiography (MRA). The proposed method contains a two-stage neural network. First, a preliminary confidence map of the different anatomical structures is produced by a coarse segmentation stage. Second, a refinement segmentation stage refines and optimizes the coarse segmentation map. To model the spatial and morphological relationship between the trigeminal nerve and cerebrovascular structures, the proposed network detects the trigeminal nerve, cerebrovasculature, and brainstem simultaneously. The method has been evaluated on a dataset of 50 MRA volumes, and the experimental results show state-of-the-art performance, with an average Dice similarity coefficient, Hausdorff distance, and average surface distance error of 0.8645, 0.2414, and 0.4296, respectively, on multi-tissue segmentation.
Affiliation(s)
- Jinghui Lin
- Department of Neurosurgery, Ningbo First Hospital, Ningbo, China
- Lei Mou
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- University of Chinese Academy of Sciences, Beijing, China
- Qifeng Yan
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Shaodong Ma
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Xingyu Yue
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Shengjun Zhou
- Department of Neurosurgery, Ningbo First Hospital, Ningbo, China
- Zhiqing Lin
- Department of Neurosurgery, Ningbo First Hospital, Ningbo, China
- Jiong Zhang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yitian Zhao
- The Affiliated People's Hospital of Ningbo University, Ningbo, China
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
28
Ma Y, Liu J, Liu Y, Fu H, Hu Y, Cheng J, Qi H, Wu Y, Zhang J, Zhao Y. Structure and Illumination Constrained GAN for Medical Image Enhancement. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3955-3967. [PMID: 34339369 DOI: 10.1109/tmi.2021.3101937] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The development of medical imaging techniques has greatly supported clinical decision making. However, poor imaging quality, such as non-uniform illumination or imbalanced intensity, brings challenges for automated screening, analysis and diagnosis of diseases. Previously, bi-directional GANs (e.g., CycleGAN), have been proposed to improve the quality of input images without the requirement of paired images. However, these methods focus on global appearance, without imposing constraints on structure or illumination, which are essential features for medical image interpretation. In this paper, we propose a novel and versatile bi-directional GAN, named Structure and illumination constrained GAN (StillGAN), for medical image quality enhancement. Our StillGAN treats low- and high-quality images as two distinct domains, and introduces local structure and illumination constraints for learning both overall characteristics and local details. Extensive experiments on three medical image datasets (e.g., corneal confocal microscopy, retinal color fundus and endoscopy images) demonstrate that our method performs better than both conventional methods and other deep learning-based methods. In addition, we have investigated the impact of the proposed method on different medical image analysis and clinical tasks such as nerve segmentation, tortuosity grading, fovea localization and disease classification.
29
Yang Q, Ma B, Cui H, Ma J. AMF-NET: Attention-aware Multi-scale Fusion Network for Retinal Vessel Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3277-3280. [PMID: 34891940 DOI: 10.1109/embc46164.2021.9630756] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Automatic retinal vessel segmentation in fundus images can assist effective and efficient diagnosis of retinal disease. Microstructure estimation of capillaries is a long-standing challenge. To tackle this problem, we propose the attention-aware multi-scale fusion network (AMF-Net). Our network uses dense convolutions to perceive microscopic capillaries. Additionally, multi-scale features are extracted and fused with adaptive weights by a channel attention module to improve segmentation performance. Finally, spatial attention is introduced via position attention modules to capture long-distance feature dependencies. The proposed model is evaluated on two public datasets, DRIVE and CHASE_DB1. Extensive experiments demonstrate that our model outperforms existing methods, and ablation studies validate the effectiveness of the proposed components.
30
Xia L, Xie Y, Wang Q, Zhang H, He C, Yang X, Lin J, Song R, Liu J, Zhao Y. A nested parallel multiscale convolution for cerebrovascular segmentation. Med Phys 2021; 48:7971-7983. [PMID: 34719042 DOI: 10.1002/mp.15280] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 09/12/2021] [Accepted: 09/26/2021] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Cerebrovascular segmentation in magnetic resonance imaging (MRI) plays an important role in the diagnosis and treatment of cerebrovascular diseases. Many segmentation frameworks based on convolutional neural networks (CNNs) or U-Net-like structures have been proposed for cerebrovascular segmentation. Unfortunately, the segmentation results are still unsatisfactory, particularly for small/thin cerebrovascular structures, for the following reasons: (1) lack of attention to multiscale features in the encoder, caused by convolutions with a single kernel size; (2) insufficient extraction of shallow and deep-seated features, caused by the depth limitation of the transmission path between encoder and decoder; (3) insufficient utilization of the extracted features in the decoder, caused by limited attention to multiscale features. METHODS Inspired by U-Net++, we propose a novel 3D U-Net-like framework termed Usception for small cerebrovascular segmentation. It includes three blocks: a Reduction block, a Gap block, and a Deep block, which aim to: (1) improve feature extraction ability by grouping different convolution sizes; (2) increase the number of multiscale features in different layers by grouping paths of different depths between encoder and decoder; (3) maximize the ability of the decoder to recover multiscale features from the Reduction and Gap blocks by using convolutions with different kernel sizes. RESULTS The proposed framework is evaluated on three public and in-house clinical magnetic resonance angiography (MRA) data sets. The experimental results show that our framework reaches average Dice scores of 69.29%, 87.40%, and 77.77% on the three data sets, outperforming existing state-of-the-art methods. We also validate the effectiveness of each block through ablation experiments.
CONCLUSIONS By means of the combination of Inception-ResNet and dimension-expanded U-Net++, the proposed framework has demonstrated its capability to maximize multiscale feature extraction, thus achieving competitive segmentation results for small cerebrovascular.
Affiliation(s)
- Likun Xia
- College of Information Engineering, Capital Normal University, Beijing, China
- International Science and Technology Cooperation Base of Electronic System Reliability and Mathematical Interdisciplinary, Capital Normal University, Beijing, China
- Laboratory of Neural Computing and Intelligent Perception, Capital Normal University, Beijing, China
- Beijing Advanced Innovation Center for Imaging Theory and Technology, Capital Normal University, Beijing, China
- Yixuan Xie
- College of Information Engineering, Capital Normal University, Beijing, China
- Laboratory of Neural Computing and Intelligent Perception, Capital Normal University, Beijing, China
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Qiwang Wang
- College of Information Engineering, Capital Normal University, Beijing, China
- Hao Zhang
- College of Information Engineering, Capital Normal University, Beijing, China
- Laboratory of Neural Computing and Intelligent Perception, Capital Normal University, Beijing, China
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Cheng He
- College of Information Engineering, Capital Normal University, Beijing, China
- Laboratory of Neural Computing and Intelligent Perception, Capital Normal University, Beijing, China
- Xiaonan Yang
- College of Information Engineering, Capital Normal University, Beijing, China
- Laboratory of Neural Computing and Intelligent Perception, Capital Normal University, Beijing, China
- Jinghui Lin
- Department of Neurosurgery, Ningbo First Hospital, Ningbo, China
- Ran Song
- School of Control Science and Engineering, Shandong University, Jinan, China
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
| |
Collapse
31
Ding J, Zhang Z, Tang J, Guo F. A Multichannel Deep Neural Network for Retina Vessel Segmentation via a Fusion Mechanism. Front Bioeng Biotechnol 2021; 9:697915. [PMID: 34490220] [PMCID: PMC8417313] [DOI: 10.3389/fbioe.2021.697915]
Abstract
Changes in fundus blood vessels reflect the occurrence of eye diseases, and from this we can detect other physical diseases that cause fundus lesions, such as diabetes and hypertension complications. However, existing computational methods lack highly efficient and precise segmentation of vascular ends and thin retinal vessels. It is important to construct a reliable and quantitative automatic diagnostic method to improve diagnostic efficiency. In this study, we propose a multichannel deep neural network for retinal vessel segmentation. First, we apply U-net on original and thin (or thick) vessels for multi-objective optimization, purposively training on thick and thin vessels. Then, we design a specific fusion mechanism that combines three kinds of prediction probability maps into a final binary segmentation map. Experiments show that our method effectively improves the segmentation of thin blood vessels and vascular ends, and it outperforms many current excellent vessel segmentation methods on three public datasets. In particular, it achieves the best F1-scores of 0.8247 on the DRIVE dataset and 0.8239 on the STARE dataset. These findings have potential for application in automated retinal image analysis and may provide a new, general, high-performance computing framework for image segmentation.
Affiliation(s)
- Jiaqi Ding
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Zehua Zhang
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Jijun Tang
- School of Computer Science and Technology, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Fei Guo
- School of Computer Science and Engineering, Central South University, Changsha, China
32
Chen L, Tang C, Huang ZH, Xu M, Lei Z. Contrast enhancement and speckle suppression in OCT images based on a selective weighted variational enhancement model and an SP-FOOPDE algorithm. J Opt Soc Am A Opt Image Sci Vis 2021; 38:973-984. [PMID: 34263753] [DOI: 10.1364/josaa.422047]
Abstract
Simultaneous contrast enhancement and speckle suppression in optical coherence tomography (OCT) are of great significance to medical diagnosis. In this paper, we propose a selective weighted variational enhancement (SWVE) model to enhance the structural parts of OCT images, and then present a shape-preserving fourth-order-oriented partial differential equations (SP-FOOPDE) algorithm to suppress speckle noise. To be specific, in the SWVE model, we first introduce the fast and robust fuzzy c-means clustering (FRFCM) algorithm to generate masks based on the gray-level histograms of the reconstructed OCT images and utilize the masks to distinguish the structural parts from the background. Then the retinex-based weighted variational model, combined with gamma correction, is adopted to enhance the structural parts by multiplying the estimated reflectance with the adjusted illumination. In the despeckling process, we present an SP-FOOPDE algorithm with the fidelity term modified by the shearlet transform to strike a good balance between noise suppression and structural preservation. Experimental results show that the proposed method performs well in contrast enhancement and speckle suppression, with better quality metrics of the MSE, PSNR, CNR, ENL, EKI, and ν and better noise immunity than related methods. Moreover, the application to segmentation preprocessing shows that the retinal structure of the OCT images processed by the proposed method can be completely segmented.
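The gamma-correction step used above to adjust the estimated illumination can be illustrated with a minimal sketch; the function name and the default gamma value are hypothetical, not taken from the paper:

```python
import numpy as np

def gamma_correct(image, gamma=0.6):
    """Gamma correction on an image normalized to [0, 1].

    gamma < 1 lifts dark (low-illumination) regions, gamma > 1
    suppresses them; the default here is purely illustrative.
    """
    img = np.clip(np.asarray(image, dtype=np.float64), 0.0, 1.0)
    return img ** gamma
```

With gamma below 1, dark pixels are lifted proportionally more than bright ones, compressing the dynamic range toward brighter values while leaving 0 and 1 fixed.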
33
Dharmawan DA. Assessing fairness in performance evaluation of publicly available retinal blood vessel segmentation algorithms. J Med Eng Technol 2021; 45:351-360. [PMID: 33843422] [DOI: 10.1080/03091902.2021.1906342]
Abstract
In the literature, various algorithms have been proposed for automatically extracting blood vessels from retinal images. In general, they are developed and evaluated using several publicly available datasets such as the DRIVE and STARE datasets. For performance evaluation, several metrics such as Sensitivity, Specificity, and Accuracy have been widely used. However, not all methods in the literature have been fairly evaluated and compared among their counterparts. In particular, for some publicly available algorithms, the performance is measured only for the area inside the field of view (FOV) of each retinal image, while the rest use the complete image for the performance evaluation. Therefore, comparing the performance of methods in the latter group with those in the former group may lead to misleading justification. This study aims to assess fairness in the performance evaluation of various publicly available retinal blood vessel segmentation algorithms. The conducted study yields several meaningful results: (i) a guideline to assess fairness in performance evaluation of retinal vessel segmentation algorithms, (ii) a more proper performance comparison of retinal vessel segmentation algorithms in the literature, and (iii) a suggestion regarding the use of performance evaluation metrics that will not lead to misleading comparison and justification.
34
Ma Y, Hao H, Xie J, Fu H, Zhang J, Yang J, Wang Z, Liu J, Zheng Y, Zhao Y. ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model. IEEE Trans Med Imaging 2021; 40:928-939. [PMID: 33284751] [DOI: 10.1109/tmi.2020.3042802]
Abstract
Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging technique that has been increasingly used to image the retinal vasculature at capillary-level resolution. However, automated segmentation of retinal vessels in OCTA has been under-studied due to various challenges such as low capillary visibility and high vessel complexity, despite its significance in understanding many vision-related diseases. In addition, there is no publicly available OCTA dataset with manually graded vessels for training and validation of segmentation algorithms. To address these issues, for the first time in the field of retinal image analysis we construct a dedicated Retinal OCTA SEgmentation dataset (ROSE), which consists of 229 OCTA images with vessel annotations at either centerline level or pixel level. This dataset with the source code has been released for public access to assist researchers in the community in undertaking research in related topics. Secondly, we introduce a novel split-based coarse-to-fine vessel segmentation network for OCTA images (OCTA-Net), with the ability to detect thick and thin vessels separately. In the OCTA-Net, a split-based coarse segmentation module is first utilized to produce a preliminary confidence map of vessels, and a split-based refined segmentation module is then used to optimize the shape/contour of the retinal microvasculature. We perform a thorough evaluation of the state-of-the-art vessel segmentation models and our OCTA-Net on the constructed ROSE dataset. The experimental results demonstrate that our OCTA-Net yields better vessel segmentation performance in OCTA than both traditional and other deep learning methods. In addition, we provide a fractal dimension analysis on the segmented microvasculature, and the statistical analysis demonstrates significant differences between the healthy control and Alzheimer's disease groups. This consolidates the view that analysis of the retinal microvasculature may offer a new scheme to study various neurodegenerative diseases.
35
Deshpande A, Jamilpour N, Jiang B, Michel P, Eskandari A, Kidwell C, Wintermark M, Laksari K. Automatic segmentation, feature extraction and comparison of healthy and stroke cerebral vasculature. Neuroimage Clin 2021; 30:102573. [PMID: 33578323] [PMCID: PMC7875826] [DOI: 10.1016/j.nicl.2021.102573]
Abstract
Accurate segmentation of cerebral vasculature and a quantitative assessment of its morphology is critical to various diagnostic and therapeutic purposes and is pertinent to studying brain health and disease. However, this is still a challenging task due to the complexity of the vascular imaging data. We propose an automated method for cerebral vascular segmentation without the need of any manual intervention as well as a method to skeletonize the binary segmented map to extract vascular geometric features and characterize vessel structure. We combine a Hessian-based probabilistic vessel-enhancing filtering with an active-contour-based technique to segment magnetic resonance and computed tomography angiograms (MRA and CTA) and subsequently extract the vessel centerlines and diameters to calculate the geometrical properties of the vasculature. Our method was validated using a 3D phantom of the Circle-of-Willis region, demonstrating 84% mean Dice similarity coefficient (DSC) and 85% mean Pearson's correlation coefficient (PCC) with minimal modified Hausdorff distance (MHD) error (3 surface pixels at most), and showed superior performance compared to existing segmentation algorithms upon quantitative comparison using DSC, PCC and MHD. We subsequently applied our algorithm to a dataset of 40 subjects, including 1) MRA scans of healthy subjects (n = 10, age = 30 ± 9), 2) MRA scans of stroke patients (n = 10, age = 51 ± 15), 3) CTA scans of healthy subjects (n = 10, age = 62 ± 12), and 4) CTA scans of stroke patients (n = 10, age = 68 ± 11), and obtained a quantitative comparison between the stroke and normal vasculature for both imaging modalities. The vascular network in stroke patients compared to age-adjusted healthy subjects was found to have a significantly (p < 0.05) higher tortuosity (3.24 ± 0.88 rad/cm vs. 7.17 ± 1.61 rad/cm for MRA, and 4.36 ± 1.32 rad/cm vs. 7.80 ± 0.92 rad/cm for CTA), higher fractal dimension (1.36 ± 0.28 vs. 1.71 ± 0.14 for MRA, and 1.56 ± 0.05 vs. 1.69 ± 0.20 for CTA), lower total length (3.46 ± 0.99 m vs. 2.20 ± 0.67 m for CTA), lower total volume (61.80 ± 18.79 ml vs. 34.43 ± 22.9 ml for CTA), lower average diameter (2.4 ± 0.21 mm vs. 2.18 ± 0.07 mm for CTA), and lower average branch length (4.81 ± 1.97 mm vs. 8.68 ± 2.03 mm for MRA), respectively. We additionally studied the change in vascular features with respect to aging and imaging modality. While we observed differences between features as a result of aging, statistical analysis did not show any significant differences, whereas we found that the number of branches was significantly different (p < 0.05) between the two imaging modalities (201 ± 73 for MRA vs. 189 ± 69 for CTA). Our segmentation and feature extraction algorithm can be applied on any imaging modality and can be used in the future to automatically obtain the 3D segmented vasculature for diagnosis and treatment planning as well as to study morphological changes due to stroke and other cerebrovascular diseases (CVD) in the clinic.
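Hessian-based vessel-enhancing filtering of the kind mentioned above is typically built on a Frangi-style vesselness measure. A minimal single-scale 2D sketch follows; the parameter defaults `beta` and `c` are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi_vesselness_2d(image, sigma=2.0, beta=0.5, c=0.5):
    """Single-scale 2D Frangi-style vesselness for dark tubular structures.

    beta and c are illustrative defaults, not values from the paper.
    """
    img = np.asarray(image, dtype=np.float64)
    # Scale-normalized second-order Gaussian derivatives form the Hessian.
    hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Closed-form eigenvalues of the symmetric 2x2 Hessian.
    disc = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy**2)
    mu1 = 0.5 * (hxx + hyy + disc)
    mu2 = 0.5 * (hxx + hyy - disc)
    # Order so that |lam1| <= |lam2|, as in Frangi's formulation.
    swap = np.abs(mu1) > np.abs(mu2)
    lam1 = np.where(swap, mu2, mu1)
    lam2 = np.where(swap, mu1, mu2)
    rb2 = (lam1 / (np.abs(lam2) + 1e-12)) ** 2   # blobness ratio, squared
    s2 = lam1**2 + lam2**2                        # structureness
    v = np.exp(-rb2 / (2.0 * beta**2)) * (1.0 - np.exp(-s2 / (2.0 * c**2)))
    # Dark vessels on a bright background give lam2 > 0.
    return np.where(lam2 > 0, v, 0.0)
```

A dark one-pixel line on a bright background responds strongly along the line and negligibly in flat regions; a practical filter would take the maximum response over several sigmas spanning the expected vessel diameters.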
Affiliation(s)
- Aditi Deshpande
- Department of Biomedical Engineering, University of Arizona, United States
- Nima Jamilpour
- Department of Biomedical Engineering, University of Arizona, United States
- Bin Jiang
- Department of Radiology, Stanford University, United States
- Patrik Michel
- Department of Neurology, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland
- Ashraf Eskandari
- Department of Neurology, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland
- Chelsea Kidwell
- Department of Neurology, University of Arizona, United States
- Max Wintermark
- Department of Radiology, Stanford University, United States
- Kaveh Laksari
- Department of Biomedical Engineering, University of Arizona, United States; Department of Aerospace and Mechanical Engineering, University of Arizona, United States
36
Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2020; 24:3384-3396. [DOI: 10.1109/jbhi.2020.3002985]
37
Zhang D, Yang G, Zhao S, Zhang Y, Ghista D, Zhang H, Li S. Direct Quantification of Coronary Artery Stenosis Through Hierarchical Attentive Multi-View Learning. IEEE Trans Med Imaging 2020; 39:4322-4334. [PMID: 32804646] [DOI: 10.1109/tmi.2020.3017275]
Abstract
Quantification of coronary artery stenosis on X-ray angiography (XRA) images is of great importance during the intraoperative treatment of coronary artery disease. It serves to quantify the coronary artery stenosis by estimating the clinical morphological indices, which are essential in clinical decision making. However, stenosis quantification is still a challenging task due to the overlapping, diversity and small-size region of the stenosis in the XRA images. While efforts have been devoted to stenosis quantification through low-level features, these methods have difficulty in learning the real mapping from these features to the stenosis indices. These methods are still cumbersome and unreliable for the intraoperative procedures due to their two-phase quantification, which depends on the results of segmentation or reconstruction of the coronary artery. In this work, we propose a hierarchical attentive multi-view learning model (HEAL) to achieve a direct quantification of coronary artery stenosis, without the intermediate segmentation or reconstruction. We have designed a multi-view learning model to learn more complementary information of the stenosis from different views. For this purpose, an intra-view hierarchical attentive block is proposed to learn the discriminative information of stenosis. Additionally, a stenosis representation learning module is developed to extract the multi-scale features from the keyframe perspective for considering the clinical workflow. Finally, the morphological indices are directly estimated based on the multi-view feature embedding. Extensive experiments on a clinical multi-manufacturer dataset consisting of 228 subjects show the superiority of our HEAL against nine competing methods, including direct quantification methods and multi-view learning methods. The experimental results demonstrate the better clinical agreement between the ground truth and the prediction, which endows our proposed method with a great potential for the efficient intraoperative treatment of coronary artery disease.
38
Mou L, Zhao Y, Fu H, Liu Y, Cheng J, Zheng Y, Su P, Yang J, Chen L, Frangi AF, Akiba M, Liu J. CS2-Net: Deep learning segmentation of curvilinear structures in medical imaging. Med Image Anal 2020; 67:101874. [PMID: 33166771] [DOI: 10.1016/j.media.2020.101874]
Abstract
Automated detection of curvilinear structures, e.g., blood vessels or nerve fibres, from medical and biomedical images is a crucial early step in automatic image interpretation associated with the management of many diseases. Precise measurement of the morphological changes of these curvilinear organ structures informs clinicians for understanding the mechanism, diagnosis, and treatment of, e.g., cardiovascular, kidney, eye, lung, and neurological conditions. In this work, we propose a generic and unified convolutional neural network for the segmentation of curvilinear structures and illustrate it in several 2D/3D medical imaging modalities. We introduce a new curvilinear structure segmentation network (CS2-Net), which includes a self-attention mechanism in the encoder and decoder to learn rich hierarchical representations of curvilinear structures. Two types of attention modules - spatial attention and channel attention - are utilized to enhance the inter-class discrimination and intra-class responsiveness, and to further integrate local features with their global dependencies and normalization, adaptively. Furthermore, to facilitate the segmentation of curvilinear structures in medical images, we employ a 1×3 and a 3×1 convolutional kernel to capture boundary features. In addition, we extend the 2D attention mechanism to 3D to enhance the network's ability to aggregate depth information across different layers/slices. The proposed curvilinear structure segmentation network is thoroughly validated using both 2D and 3D images across six different imaging modalities. Experimental results across nine datasets show the proposed method generally outperforms other state-of-the-art algorithms in various metrics.
Affiliation(s)
- Lei Mou
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, UK
- Jun Cheng
- UBTech Research, UBTech Robotics Corp Ltd, Shenzhen, China
- Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, UK; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Pan Su
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Jianlong Yang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Li Chen
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, China
- Alejandro F Frangi
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing and School of Medicine, University of Leeds, Leeds, UK; Leeds Institute of Cardiovascular and Metabolic Medicine, School of Medicine, University of Leeds, Leeds, UK; Medical Imaging Research Centre (MIRC), University Hospital Gasthuisberg, Cardiovascular Sciences and Electrical Engineering Departments, KU Leuven, Leuven, Belgium
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China; Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
39
Zhao Y, Zhang J, Pereira E, Zheng Y, Su P, Xie J, Zhao Y, Shi Y, Qi H, Liu J, Liu Y. Automated Tortuosity Analysis of Nerve Fibers in Corneal Confocal Microscopy. IEEE Trans Med Imaging 2020; 39:2725-2737. [PMID: 32078542] [DOI: 10.1109/tmi.2020.2974499]
Abstract
Precise characterization and analysis of corneal nerve fiber tortuosity are of great importance in facilitating examination and diagnosis of many eye-related diseases. In this paper we propose a fully automated method for image-level tortuosity estimation, comprising image enhancement, exponential curvature estimation, and tortuosity level classification. The image enhancement component is based on an extended Retinex model, which not only corrects imbalanced illumination and improves image contrast, but also models noise explicitly to aid removal of imaging noise. Afterwards, we take advantage of exponential curvature estimation in the 3D space of positions and orientations to measure curvature directly from the enhanced images, rather than relying on the explicit segmentation and skeletonization steps of a conventional pipeline, which usually accumulate pre-processing errors. The proposed method has been applied over two corneal nerve microscopy datasets for the estimation of a tortuosity level for each image. The experimental results show that it performs better than several selected state-of-the-art methods. Furthermore, we have performed manual tortuosity-level grading of 403 corneal nerve microscopy images, and this dataset has been released for public access to facilitate other researchers in the community in carrying out further research on the same and related topics.
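As a simpler classical proxy for the tortuosity estimation discussed above (not the paper's exponential-curvature measure in the space of positions and orientations), the arc-over-chord ratio of a sampled fibre centreline can be computed as:

```python
import numpy as np

def arc_chord_tortuosity(points):
    """Arc-over-chord tortuosity of a sampled fibre centreline.

    Returns 1.0 for a straight fibre and grows as the fibre winds.
    This is a classical proxy, not the exponential-curvature measure
    used in the paper.
    """
    pts = np.asarray(points, dtype=np.float64)
    segments = np.diff(pts, axis=0)                  # consecutive steps
    arc = np.sum(np.linalg.norm(segments, axis=1))   # total path length
    chord = np.linalg.norm(pts[-1] - pts[0])         # endpoint distance
    return arc / chord
```

For example, a straight polyline gives exactly 1.0, while a zigzag through (0, 0), (1, 1), (2, 0) gives √2 ≈ 1.41.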
40
Tan B, Sim R, Chua J, Wong DWK, Yao X, Garhöfer G, Schmidl D, Werkmeister RM, Schmetterer L. Approaches to quantify optical coherence tomography angiography metrics. Ann Transl Med 2020; 8:1205. [PMID: 33241054] [PMCID: PMC7576021] [DOI: 10.21037/atm-20-3246]
Abstract
Optical coherence tomography (OCT) has revolutionized the field of ophthalmology in the last three decades. As an OCT extension, OCT angiography (OCTA) utilizes a fast OCT system to detect motion contrast in ocular tissue and provides a three-dimensional representation of the ocular vasculature in a non-invasive, dye-free manner. The first OCT machine equipped with OCTA function was approved by the U.S. Food and Drug Administration in 2016, and it is now widely applied in clinics. To date, numerous methods have been developed to aid OCTA interpretation and quantification. In this review, we focused on the workflow of OCTA-based interpretation, beginning with the generation of the OCTA images using signal decorrelation, which we divided into intensity-based, phase-based and phasor-based methods. We further discussed methods used to address image artifacts that are commonly observed in clinical settings, the algorithms for image enhancement and binarization, and OCTA metrics extraction. We believe a better grasp of these technical aspects of OCTA will enhance the understanding of the technology and its potential application in disease diagnosis and management. Moreover, future studies will also explore the use of ocular OCTA as a window to link ocular vasculature to the function of other organs such as the kidney and brain.
Affiliation(s)
- Bingyao Tan
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Ralene Sim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Jacqueline Chua
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore
- Damon W. K. Wong
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Xinwen Yao
- Institute for Health Technologies, Nanyang Technological University, Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore
- Gerhard Garhöfer
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- Doreen Schmidl
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria
- René M. Werkmeister
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Leopold Schmetterer
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE) Program, Nanyang Technological University, Singapore, Singapore; Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore, Singapore; Department of Ophthalmology, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
41
Yang J, Dong X, Hu Y, Peng Q, Tao G, Ou Y, Cai H, Yang X. Fully Automatic Arteriovenous Segmentation in Retinal Images via Topology-Aware Generative Adversarial Networks. Interdiscip Sci 2020; 12:323-334. [DOI: 10.1007/s12539-020-00385-5]
42
Tian Y, Hu Y, Ma Y, Hao H, Mou L, Yang J, Zhao Y, Liu J. Multi-scale U-net with Edge Guidance for Multimodal Retinal Image Deformable Registration. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1360-1363. [PMID: 33018241] [DOI: 10.1109/embc44109.2020.9175613]
Abstract
Registration of multimodal retinal images, such as the registration between color fundus images and optical coherence tomography (OCT) images, is of great importance in facilitating the diagnosis and treatment of many eye diseases. However, it is difficult to obtain ground truth, and most existing algorithms perform rigid registration without considering optical distortion. In this paper, we present an unsupervised learning method for deformable registration between the two images. To solve the registration problem, the architecture achieves a multi-level receptive field and takes both contour and local detail into account. To measure the edge difference caused by different distortions at the optical center and edge, an edge similarity (ES) loss term is proposed, so the loss function is composed of local cross-correlation, edge similarity, and a diffusion regularizer on the spatial gradients of the deformation matrix. Thus, we propose a multi-scale input layer, a U-net with a dilated convolution structure, squeeze-and-excitation (SE) blocks, and spatial transformer layers. Quantitative experiments show that the proposed framework performs best compared with several conventional and deep learning-based methods, and that our ES loss and structure, combined with the U-net and multi-scale layers, achieve competitive results for normal and abnormal images.
43
Mao X, Zhao Y, Chen B, Ma Y, Gu Z, Gu S, Yang J, Cheng J, Liu J. Deep Learning with Skip Connection Attention for Choroid Layer Segmentation in OCT Images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1641-1645. [PMID: 33018310] [DOI: 10.1109/embc44109.2020.9175631]
Abstract
Since the thickness and shape of the choroid layer are indicators for the diagnosis of several ophthalmic diseases, choroid layer segmentation is an important task, and one that presents many challenges. In this paper, in view of the lack of context information due to ambiguous boundaries, and the consequent inconsistent predictions for targets of the same category, a novel Skip Connection Attention (SCA) module integrated into a U-shape architecture is proposed to improve the precision of choroid layer segmentation in Optical Coherence Tomography (OCT) images. The main function of the SCA module is to capture the global context at the highest level to provide the decoder with stage-by-stage guidance, extracting more context information and generating more consistent predictions for same-class targets. By integrating the SCA module into the U-Net and CE-Net, we show that the module improves the accuracy of choroid layer segmentation.
44
Yan Q, Chen B, Hu Y, Cheng J, Gong Y, Yang J, Liu J, Zhao Y. Speckle reduction of OCT via super resolution reconstruction and its application on retinal layer segmentation. Artif Intell Med 2020; 106:101871. [DOI: 10.1016/j.artmed.2020.101871]
45
Hao D, Ding S, Qiu L, Lv Y, Fei B, Zhu Y, Qin B. Sequential vessel segmentation via deep channel attention network. Neural Netw 2020; 128:172-187. [PMID: 32447262] [DOI: 10.1016/j.neunet.2020.05.005]
Abstract
Accurately segmenting contrast-filled vessels from X-ray coronary angiography (XCA) image sequences is an essential step for the diagnosis and therapy of coronary artery disease. However, developing automatic vessel segmentation is particularly challenging due to overlapping structures, low contrast, and the presence of complex and dynamic background artifacts in XCA images. This paper develops a novel encoder-decoder deep network architecture that exploits several contextual frames of 2D+t sequential images in a sliding window centered at the current frame to segment 2D vessel masks from that frame. The architecture is equipped with temporal-spatial feature extraction in the encoder stage, feature fusion in the skip connection layers, and a channel attention mechanism in the decoder stage. In the encoder stage, a series of 3D convolutional layers hierarchically extract temporal-spatial features. Skip connection layers subsequently fuse the temporal-spatial feature maps and deliver them to the corresponding decoder stages. To efficiently discriminate vessel features from the complex and noisy backgrounds in the XCA images, the decoder stage uses channel attention blocks to refine the intermediate feature maps from the skip connection layers, then decodes the refined features in 2D to produce the segmented vessel masks. Furthermore, a Dice loss function is used to train the proposed deep network in order to tackle the class imbalance in the XCA data caused by the wide distribution of complex background artifacts. Extensive experiments comparing our method with other state-of-the-art algorithms demonstrate its superior performance in terms of quantitative metrics and visual validation. To facilitate reproducible research in the XCA community, we publicly release our dataset and source code at https://github.com/Binjie-Qin/SVS-net.
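The Dice loss mentioned in the abstract counters class imbalance by optimizing overlap rather than per-pixel accuracy. As a minimal sketch (not the authors' implementation; function name and flattened-list inputs are illustrative assumptions), a soft Dice loss for binary vessel masks can be written as:

```python
def soft_dice_loss(pred, target, eps=1e-6):
    # pred: predicted vessel probabilities in [0, 1], flattened to a list
    # target: binary ground-truth mask, same length
    # Soft Dice loss = 1 - 2|P ∩ T| / (|P| + |T|), a differentiable
    # surrogate of 1 - DSC; eps avoids division by zero on empty masks.
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)
```

Because the loss depends only on the overlap ratio, the abundant background pixels do not dominate the gradient the way they do with an unweighted per-pixel loss.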
Affiliation(s)
- Dongdong Hao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Song Ding
- Department of Cardiology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China
- Linwei Qiu
- School of Astronautics, Beihang University, Beijing 100191, China
- Yisong Lv
- School of Continuing Education, Shanghai Jiao Tong University, Shanghai 200240, China
- Baowei Fei
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, University of Texas at Dallas, Richardson, TX 75080, USA
- Yueqi Zhu
- Department of Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai Jiao Tong University, 600 Yi Shan Road, Shanghai 200233, China
- Binjie Qin
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
|
46
|
Mou L, Chen L, Cheng J, Gu Z, Zhao Y, Liu J. Dense Dilated Network With Probability Regularized Walk for Vessel Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1392-1403. [PMID: 31675323 DOI: 10.1109/tmi.2019.2950051] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The detection of retinal vessels is of great importance in the diagnosis and treatment of many ocular diseases. Many methods have been proposed for vessel detection, but most of them neglect the connectivity of the vessels, which plays an important role in diagnosis. In this paper, we propose a novel method for retinal vessel detection. The proposed method includes a dense dilated network that produces an initial detection of the vessels, and a probability regularized walk algorithm that addresses fractures in the initial detection. The dense dilated network integrates newly proposed dense dilated feature extraction blocks into an encoder-decoder structure to extract and accumulate features at different scales. A multi-scale Dice loss function is adopted to train the network. To improve the connectivity of the segmented vessels, we also introduce a probability regularized walk algorithm to connect the broken vessels. The proposed method has been applied to three public data sets: DRIVE, STARE, and CHASE_DB1. The results show that the proposed method outperforms state-of-the-art methods in accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve.
|
47
|
Zhang J, Qiao Y, Sarabi MS, Khansari MM, Gahm JK, Kashani AH, Shi Y. 3D Shape Modeling and Analysis of Retinal Microvasculature in OCT-Angiography Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1335-1346. [PMID: 31647423 PMCID: PMC7174137 DOI: 10.1109/tmi.2019.2948867] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
3D optical coherence tomography angiography (OCT-A) is a novel and non-invasive imaging modality for analyzing retinal diseases. Studies of the microvasculature in 2D en face projection images have been widely conducted, but comprehensive 3D analysis of OCT-A images, with their rich depth-resolved microvascular information, is rarely considered. In this paper, we propose a robust, effective, and automatic 3D shape modeling framework that provides a high-quality 3D vessel representation and preserves valuable 3D geometric and topological information for vessel analysis. Effective vessel enhancement and extraction steps, based on curvelet denoising and optimally oriented flux (OOF) filtering, are first designed to produce 3D microvascular networks. Afterwards, a novel 3D data representation of the OCT-A microvasculature is reconstructed via advanced mesh reconstruction techniques. Based on the 3D surfaces, shape analysis is performed to extract a novel shape-based microvascular area distortion feature via Laplace-Beltrami eigen-projection. The extracted feature is integrated into a graph-cut segmentation system to separate large vessels from small capillaries for more precise shape analysis. The proposed framework is validated on a dedicated repeated-scan dataset comprising 260 volume images and shows high repeatability. Statistical analysis using the surface area biomarker is performed on small capillaries to avoid the tailing artifact from large vessels. It shows significant differences between DR stages on 100 subjects in an OCTA-DR dataset. The proposed shape modeling and analysis framework opens the possibility of investigating OCT-A microvasculature from a new perspective.
Affiliation(s)
- Jiong Zhang
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; USC Roski Eye Institute, Keck School of Medicine of University of Southern California, Los Angeles, CA 90033, USA
- Yuchuan Qiao
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Mona Sharifi Sarabi
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Maziyar M. Khansari
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA; USC Roski Eye Institute, Keck School of Medicine of University of Southern California, Los Angeles, CA 90033, USA
- Jin K. Gahm
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Amir H. Kashani
- USC Roski Eye Institute, Keck School of Medicine of University of Southern California, Los Angeles, CA 90033, USA
|
48
|
Cerebrovascular segmentation from TOF-MRA using model- and data-driven method via sparse labels. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.092] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
49
|
Zhao Y, Xie J, Zhang H, Zheng Y, Zhao Y, Qi H, Zhao Y, Su P, Liu J, Liu Y. Retinal Vascular Network Topology Reconstruction and Artery/Vein Classification via Dominant Set Clustering. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:341-356. [PMID: 31283498 DOI: 10.1109/tmi.2019.2926492] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The estimation of vascular network topology in complex networks is important in understanding the relationship between vascular changes and a wide spectrum of diseases. Automatic classification of the retinal vascular trees into arteries and veins is of direct assistance to the ophthalmologist in terms of diagnosis and treatment of eye disease. However, it is challenging due to their projective ambiguity and subtle changes in appearance, contrast, and geometry in the imaging process. In this paper, we propose a novel method that is capable of making the artery/vein (A/V) distinction in retinal color fundus images based on vascular network topological properties. To this end, we adapt the concept of dominant set clustering and formalize the retinal blood vessel topology estimation and the A/V classification as a pairwise clustering problem. The graph is constructed through image segmentation, skeletonization, and identification of significant nodes. The edge weight is defined as the inverse Euclidean distance between its two end points in the feature space of intensity, orientation, curvature, diameter, and entropy. The reconstructed vascular network is classified into arteries and veins based on their intensity and morphology. The proposed approach has been applied to five public databases, namely INSPIRE, IOSTAR, VICAVR, DRIVE, and WIDE, and achieved high accuracies of 95.1%, 94.2%, 93.8%, 91.1%, and 91.0%, respectively. Furthermore, we have made manual annotations of the blood vessel topologies for INSPIRE, IOSTAR, VICAVR, and DRIVE datasets, and these annotations are released for public access so as to facilitate researchers in the community.
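The edge weight described above is the inverse Euclidean distance between two node endpoints in the feature space of intensity, orientation, curvature, diameter, and entropy, so that similar nodes receive large weights for the pairwise clustering. A minimal sketch of this weight (the function name and the small epsilon guard are illustrative assumptions, not the authors' code):

```python
import math

def edge_weight(f_u, f_v, eps=1e-9):
    # f_u, f_v: feature vectors of the two endpoint nodes, e.g.
    # (intensity, orientation, curvature, diameter, entropy).
    # Weight = 1 / Euclidean distance; eps guards against identical features.
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(f_u, f_v)))
    return 1.0 / (dist + eps)
```

With this definition, nearby points in feature space form strongly weighted edges, which is what dominant set clustering exploits to group vessel segments belonging to the same tree.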
|
50
|
Hao H, Zhao Y, Fu H, Shang Q, Li F, Zhang X, Liu J. Anterior Chamber Angles Classification in Anterior Segment OCT Images via Multi-Scale Regions Convolutional Neural Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:849-852. [PMID: 31946028 DOI: 10.1109/embc.2019.8857615] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Angle-closure glaucoma is one of the major causes of blindness in Asia. In this paper, we present a new approach for classifying anterior chamber angles as open, narrowed, or closed in anterior segment optical coherence tomography (AS-OCT), by learning from manual gonioscopy annotations, so as to further assist the assessment of angle-closure glaucoma. The proposed framework first localizes the anterior chamber angle region automatically, which is the primary structural image cue for clinically identifying glaucoma. Then, three scales of cropped chamber angle images are fed into our Multi-Scale Regions Convolutional Neural Networks (MSRCNN) architecture, in which three parallel convolutional neural networks extract feature representations. Finally, the representations are stacked and fed into a fully-connected layer for glaucoma type classification. The proposed method is evaluated on a dataset of 9728 anterior chamber angle images, and the experimental results show that it outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy.
|