1.
Li Y, Zhang R, Dong L, Shi X, Zhou W, Wu H, Li H, Yu C, Wei W. Predicting systemic diseases in fundus images: systematic review of setting, reporting, bias, and models' clinical availability in deep learning studies. Eye (Lond) 2024; 38:1246-1251. PMID: 38238576; PMCID: PMC11076532; DOI: 10.1038/s41433-023-02914-0.
Abstract
BACKGROUND Analyzing fundus images with deep learning techniques is a promising approach to screening for systemic diseases. However, the quality of the rapidly increasing number of studies is variable, and systematic evaluation has been lacking. OBJECTIVE To systematically review all articles that aimed to predict systemic parameters and conditions from fundus images using deep learning, assess their performance, and provide suggestions that would enable translation into clinical practice. METHODS Two major electronic databases (MEDLINE and EMBASE) were searched up to August 22, 2023, with the keywords 'deep learning' and 'fundus'. Studies using deep learning and fundus images to predict systemic parameters were included and assessed in four aspects: study characteristics, transparent reporting, risk of bias, and clinical availability. Transparent reporting was assessed with the TRIPOD statement, and risk of bias with PROBAST. RESULTS 4969 articles were identified through the systematic search, and thirty-one were included in the review. A variety of vascular and non-vascular conditions and parameters can be predicted from fundus images, including diabetes and related diseases (19%), sex (22%), and age (19%). Most studies focused on developed countries. According to TRIPOD, reporting was insufficient regarding sample size determination and missing-data handling, and full access to datasets and code was also under-reported. According to PROBAST, 1/31 (3.2%) studies were classified as having a low overall risk of bias, whereas 30/31 (96.8%) were classified as high risk. 5/31 (16.1%) studies used prospective external validation cohorts, and only two (6.4%) described model calibration. The number of publications per year increased significantly from 2018 to 2023; however, only two models (6.5%) were deployed on a device, and none has been applied in clinical practice.
CONCLUSION Deep learning on fundus images has shown great potential for predicting systemic conditions. Further work is needed to improve both methodology and clinical applicability.
Affiliation(s)
- Yitong Li, Ruiheng Zhang, Li Dong, Xuhan Shi, Wenda Zhou, Haotian Wu, Heyan Li, Chuyao Yu, Wenbin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
2.
Chen C, Chen Y, Li X, Ning H, Xiao R. Linear semantic transformation for semi-supervised medical image segmentation. Comput Biol Med 2024; 173:108331. PMID: 38522252; DOI: 10.1016/j.compbiomed.2024.108331.
Abstract
Medical image segmentation is a core research focus and a foundation for developing intelligent medical systems. Recently, deep learning has become the standard approach to medical image segmentation and has achieved significant success, advancing reconstruction and surgical planning for disease diagnosis. However, semantic learning is often inefficient owing to the lack of supervision on feature maps, so high-quality segmentation models still rely on numerous, accurate data annotations, and learning robust semantic representations in latent spaces remains a challenge. In this paper, we propose a novel semi-supervised learning framework that learns the vital attributes of medical images, constructing a generalized representation from diverse semantics to perform medical image segmentation. We first build a self-supervised learning component that achieves context recovery by reconstructing the spatial layout and intensity of medical images, which provides semantic supervision for the feature maps. We then combine the semantic-rich feature maps and apply a simple linear semantic transformation to convert them into a segmentation. The proposed framework was tested on five medical segmentation datasets. Quantitative assessments show that our method achieves the highest scores on the IXI (73.78%), ScaF (47.50%), COVID-19-Seg (50.72%), PC-Seg (65.06%), and Brain-MR (72.63%) datasets. Finally, we compared our method with the latest semi-supervised learning methods and obtained DSC values of 77.15% and 75.22%, ranking first on two representative datasets. The experimental results demonstrate not only that the proposed linear semantic transformation applies effectively to medical image segmentation, but also its simplicity and ease of use in pursuing robust segmentation under semi-supervised learning. Our code is available at: https://github.com/QingYunA/Linear-Semantic-Transformation-for-Semi-Supervised-Medical-Image-Segmentation.
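The DSC values reported above are Dice similarity coefficients, the standard overlap measure for segmentation. As a reference, a minimal sketch of the metric on flat binary masks (the masks and values below are illustrative toy data, not from the paper):

```python
def dice(pred, target):
    """Dice similarity coefficient (DSC) between two flat binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))  # overlapping foreground
    total = sum(pred) + sum(target)                   # foreground in each mask
    return 1.0 if total == 0 else 2.0 * inter / total

# Toy masks: 2 overlapping foreground pixels, 3 foreground pixels per mask.
pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 1, 1]
score = dice(pred, target)  # 2*2 / (3+3) = 0.666...
```

DSC ranges from 0 (no overlap) to 1 (perfect overlap), which is why papers report it as a percentage.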
Affiliation(s)
- Cheng Chen, Yunqing Chen, Xiaoheng Li, Huansheng Ning: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Ruoxiu Xiao: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China; Shunde Innovation School, University of Science and Technology Beijing, Foshan, 100024, China
3.
Shi D, Zhou Y, He S, Wagner SK, Huang Y, Keane PA, Ting DS, Zhang L, Zheng Y, He M. Cross-modality Labeling Enables Noninvasive Capillary Quantification as a Sensitive Biomarker for Assessing Cardiovascular Risk. Ophthalmol Sci 2024; 4:100441. PMID: 38420613; PMCID: PMC10899028; DOI: 10.1016/j.xops.2023.100441.
Abstract
Purpose We aim to use fundus fluorescein angiography (FFA) to label the capillaries on color fundus (CF) photographs and train a deep learning model to quantify retinal capillaries noninvasively from CF and apply it to cardiovascular disease (CVD) risk assessment. Design Cross-sectional and longitudinal study. Participants A total of 90732 pairs of CF-FFA images from 3893 participants for segmentation model development, and 49229 participants in the UK Biobank for association analysis. Methods We matched the vessels extracted from FFA and CF, and used vessels from FFA as labels to train a deep learning model (RMHAS-FA) to segment retinal capillaries using CF. We tested the model's accuracy on a manually labeled internal test set (FundusCapi). For external validation, we tested the segmentation model on 7 vessel segmentation datasets, and investigated the clinical value of the segmented vessels in predicting CVD events in the UK Biobank. Main Outcome Measures Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity for segmentation. Hazard ratio (HR; 95% confidence interval [CI]) for Cox regression analysis. Results On the FundusCapi dataset, the segmentation performance was AUC = 0.95, accuracy = 0.94, sensitivity = 0.90, and specificity = 0.93. Smaller vessel skeleton density had a stronger correlation with CVD risk factors and incidence (P < 0.01). Reduced density of small vessel skeletons was strongly associated with an increased risk of CVD incidence and mortality for women (HR [95% CI] = 0.91 [0.84-0.98] and 0.68 [0.54-0.86], respectively). Conclusions Using paired CF-FFA images, we automated the laborious manual labeling process and enabled noninvasive capillary quantification from CF, supporting its potential as a sensitive screening method for identifying individuals at high risk of future CVD events. 
Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
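The segmentation metrics reported on the FundusCapi test set above (accuracy, sensitivity, specificity) are standard confusion-matrix quantities. A minimal illustrative sketch on paired binary pixel labels (the values here are toy data, not the study's results):

```python
def confusion_metrics(pred, target):
    """Accuracy, sensitivity, and specificity from paired binary labels."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    accuracy = (tp + tn) / len(pred)
    sensitivity = tp / (tp + fn)  # true-positive rate on vessel pixels
    specificity = tn / (tn + fp)  # true-negative rate on background pixels
    return accuracy, sensitivity, specificity

# Toy example: 8 pixels of predicted vs. ground-truth vessel labels.
pred   = [1, 1, 1, 0, 0, 0, 1, 0]
target = [1, 1, 0, 0, 0, 1, 1, 0]
acc, sens, spec = confusion_metrics(pred, target)  # 0.75, 0.75, 0.75
```

In vessel segmentation, sensitivity measures how much of the true vasculature is recovered, while specificity measures how little background is mislabeled as vessel.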
Affiliation(s)
- Danli Shi: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yukun Zhou: Centre for Medical Image Computing, University College London, London, UK
- Shuang He: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Siegfried K. Wagner: NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Yu Huang: Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
- Pearse A. Keane: NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Daniel S.W. Ting: Singapore National Eye Center, Singapore Eye Research Institute, and Duke-NUS Medical School, National University of Singapore, Singapore
- Lei Zhang: Faculty of Medicine, Central Clinical School, Monash University, Melbourne, Victoria, Australia
- Yingfeng Zheng: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Mingguang He: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China; Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
4.
Chen J, Li M, Han H, Zhao Z, Chen X. SurgNet: Self-Supervised Pretraining With Semantic Consistency for Vessel and Instrument Segmentation in Surgical Images. IEEE Trans Med Imaging 2024; 43:1513-1525. PMID: 38090838; DOI: 10.1109/tmi.2023.3341948.
Abstract
Blood vessel and surgical instrument segmentation is a fundamental technique for robot-assisted surgical navigation. Despite significant progress in natural image segmentation, vessel and instrument segmentation in surgical images is rarely studied. In this work, we propose a novel self-supervised pretraining method (SurgNet) that effectively learns representative vessel and instrument features from unlabeled surgical images, allowing precise and efficient segmentation of vessels and instruments with only a small amount of labeled data. Specifically, we first construct a region adjacency graph (RAG) based on local semantic consistency in unlabeled surgical images and use it as a self-supervision signal for pseudo-mask segmentation. We then use the pseudo-mask to perform guided masked image modeling (GMIM), learning representations that integrate the structural information of intraoperative objects more effectively. Our pretrained model, paired with various segmentation methods, can perform accurate vessel and instrument segmentation using limited labeled data for fine-tuning. We built an Intraoperative Vessel and Instrument Segmentation (IVIS) dataset, comprising ~3 million unlabeled images and over 4,000 labeled images with manual vessel and instrument annotations, to evaluate the effectiveness of our self-supervised pretraining method. We also evaluated the generalizability of our method on similar tasks using two public datasets. The results demonstrate that our approach outperforms current state-of-the-art (SOTA) self-supervised representation learning methods across various surgical image segmentation tasks.
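A region adjacency graph of the kind mentioned above connects neighbouring regions of an over-segmentation. As a generic illustration of the data structure (not the paper's implementation, and using a made-up label grid), a sketch that links any two region labels sharing a 4-neighbourhood border:

```python
def region_adjacency_graph(labels):
    """Edges between region labels that share a 4-neighbourhood border."""
    h, w = len(labels), len(labels[0])
    edges = set()
    for y in range(h):
        for x in range(w):
            # Only look right and down; the other directions are symmetric.
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y][x] != labels[ny][nx]:
                    edges.add(frozenset((labels[y][x], labels[ny][nx])))
    return edges

# Toy 2x3 label grid with three regions (0, 1, 2).
grid = [[0, 0, 1],
        [2, 2, 1]]
rag_edges = region_adjacency_graph(grid)  # {0-1, 0-2, 1-2}
```

In SurgNet the regions come from unlabeled surgical images and the graph serves as a self-supervision signal; this sketch only shows the graph construction itself.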
5.
Chen Q, Peng J, Zhao S, Liu W. Automatic artery/vein classification methods for retinal blood vessel: A review. Comput Med Imaging Graph 2024; 113:102355. PMID: 38377630; DOI: 10.1016/j.compmedimag.2024.102355.
Abstract
Automatic retinal arteriovenous classification can assist ophthalmologists in early disease diagnosis. Deep learning-based methods and topological graph-based methods have become the main solutions for retinal arteriovenous classification in recent years. This paper reviews automatic retinal arteriovenous classification methods from 2003 to 2022. First, we compare the different methods and provide summary comparison tables. Second, we categorize the public arteriovenous classification datasets and provide tables tracing the development of their annotations. Finally, we examine the challenges of existing evaluation methods and propose a comprehensive evaluation system. Quantitative and qualitative analyses reveal the evolution of research hotspots over time, highlighting the significance of exploring the integration of deep learning with topological information in future research.
Affiliation(s)
- Qihan Chen: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Jianqing Peng: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China; Guangdong Provincial Key Laboratory of Fire Science and Technology, Guangzhou 510006, China
- Shen Zhao: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Wanquan Liu: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
6.
Zhou Y, Xu M, Hu Y, Blumberg SB, Zhao A, Wagner SK, Keane PA, Alexander DC. CF-Loss: Clinically-relevant feature optimised loss function for retinal multi-class vessel segmentation and vascular feature measurement. Med Image Anal 2024; 93:103098. PMID: 38320370; DOI: 10.1016/j.media.2024.103098.
Abstract
Characterising clinically-relevant vascular features, such as vessel density and fractal dimension, can benefit biomarker discovery and disease diagnosis for both ophthalmic and systemic diseases. In this work, we explicitly encode vascular features into an end-to-end loss function for multi-class vessel segmentation, categorising pixels into artery, vein, uncertain pixels, and background. This clinically-relevant feature optimised loss function (CF-Loss) regulates networks to segment accurate multi-class vessel maps that produce precise vascular features. Our experiments first verify that CF-Loss significantly improves both multi-class vessel segmentation and vascular feature estimation, with two standard segmentation networks, on three publicly available datasets. We reveal that pixel-based segmentation performance is not always positively correlated with accuracy of vascular features, thus highlighting the importance of optimising vascular features directly via CF-Loss. Finally, we show that improved vascular features from CF-Loss, as biomarkers, can yield quantitative improvements in the prediction of ischaemic stroke, a real-world clinical downstream task. The code is available at https://github.com/rmaphoh/feature-loss.
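One of the clinically-relevant vascular features named above, fractal dimension, is commonly estimated by box counting over the segmented vessel map. A minimal sketch of the generic estimator (not the paper's code; the diagonal-line example is illustrative and, being a line, has dimension 1):

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the fractal dimension of a 2-D point set by box counting."""
    logs = []
    for s in sizes:
        # Count the distinct s-by-s boxes occupied by at least one point.
        boxes = {(x // s, y // s) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    # Least-squares slope of log N(s) versus log(1/s).
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    num = sum((x - mx) * (y - my) for x, y in logs)
    den = sum((x - mx) ** 2 for x, _ in logs)
    return num / den

# Illustrative point set: a straight diagonal line on a 64x64 grid.
diagonal = [(i, i) for i in range(64)]
fd = box_count_dimension(diagonal, [1, 2, 4, 8, 16])  # ~1.0 for a line
```

A healthy retinal vascular tree typically yields a fractal dimension between 1 and 2, reflecting how densely the branching pattern fills the plane.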
Affiliation(s)
- Yukun Zhou: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- MouCheng Xu: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Yipeng Hu: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, UK
- Stefano B Blumberg: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
- An Zhao: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
- Siegfried K Wagner: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Pearse A Keane: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK; Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Daniel C Alexander: Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK; Department of Computer Science, University College London, London WC1E 6BT, UK
7.
Li M, Huang K, Xu Q, Yang J, Zhang Y, Ji Z, Xie K, Yuan S, Liu Q, Chen Q. OCTA-500: A retinal dataset for optical coherence tomography angiography study. Med Image Anal 2024; 93:103092. PMID: 38325155; DOI: 10.1016/j.media.2024.103092.
Abstract
Optical coherence tomography angiography (OCTA) is a novel imaging modality that has been widely utilized in ophthalmology and neuroscience studies to observe retinal vessels and microvascular systems. However, publicly available OCTA datasets remain scarce. In this paper, we introduce the largest and most comprehensive OCTA dataset to date, dubbed OCTA-500, which contains OCTA imaging under two fields of view (FOVs) from 500 subjects. The dataset provides rich images and annotations, including two modalities (OCT/OCTA volumes), six types of projections, four types of text labels (age/gender/eye/disease), and seven types of segmentation labels (large vessel/capillary/artery/vein/2D FAZ/3D FAZ/retinal layers). We then propose a multi-object segmentation task called CAVF, which integrates capillary, artery, vein, and FAZ segmentation under a unified framework. In addition, we optimize the 3D-to-2D image projection network (IPN) into IPN-V2 to serve as one of the segmentation baselines. Experimental results demonstrate that IPN-V2 achieves an approximately 10% mIoU improvement over IPN on the CAVF task. Finally, we further study the impact of several dataset characteristics: the training set size, the model input (OCT/OCTA, 3D volume/2D projection), the baseline networks, and the diseases. The dataset and code are publicly available at: https://ieee-dataport.org/open-access/octa-500.
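The mIoU figure reported above is the mean intersection-over-union across classes, the usual multi-class segmentation score. A minimal sketch on flat label arrays (the four-class setup and values are illustrative toy data, not OCTA-500 results):

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 6-pixel maps with three classes (e.g. background/artery/vein).
pred   = [0, 0, 1, 1, 2, 2]
target = [0, 1, 1, 1, 2, 0]
miou = mean_iou(pred, target, num_classes=3)  # (1/3 + 2/3 + 1/2) / 3 = 0.5
```

Per-class IoU penalizes both missed pixels and false detections, so mIoU is stricter than plain pixel accuracy when classes are imbalanced.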
Affiliation(s)
- Mingchao Li, Kun Huang, Qiuzhuo Xu, Jiadong Yang, Yuhan Zhang, Zexuan Ji, Qiang Chen: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Keren Xie, Songtao Yuan, Qinghuai Liu: Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210029, China
8.
Hervella ÁS, Ramos L, Rouco J, Novo J, Ortega M. Explainable artificial intelligence for the automated assessment of the retinal vascular tortuosity. Med Biol Eng Comput 2024; 62:865-881. PMID: 38060101; PMCID: PMC10881731; DOI: 10.1007/s11517-023-02978-w.
Abstract
Retinal vascular tortuosity is excessive bending and twisting of the blood vessels in the retina and is associated with numerous health conditions. We propose a novel methodology for the automated assessment of retinal vascular tortuosity from color fundus images. Our methodology takes several anatomical factors into consideration to weigh the importance of each individual blood vessel. First, we use deep neural networks to produce a robust extraction of the different anatomical structures. Then, the weighting coefficients required to integrate the different anatomical factors are adjusted using evolutionary computation. Finally, the proposed methodology also provides visual representations that explain the contribution of each individual blood vessel to the predicted tortuosity, allowing us to understand the decisions of the model. We validate our proposal on a dataset of color fundus images that provides a consensus ground truth as well as the annotations of five clinical experts. Our proposal outperforms previous automated methods and offers performance comparable to that of the clinical experts. Our methodology therefore proves to be a viable alternative for the assessment of retinal vascular tortuosity, which could facilitate the use of this biomarker in clinical practice and medical research.
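A widely used per-vessel tortuosity measure is the arc-to-chord ratio: path length divided by the straight-line distance between the vessel's endpoints. This is a common baseline definition, not necessarily the exact measure used in the paper above; a minimal sketch on polylines of centerline points (the coordinates are illustrative):

```python
import math

def tortuosity(polyline):
    """Arc-chord tortuosity: vessel path length over end-to-end distance."""
    arc = sum(math.dist(a, b) for a, b in zip(polyline, polyline[1:]))
    chord = math.dist(polyline[0], polyline[-1])
    return arc / chord

# Illustrative centerlines: a straight segment and a bent one.
straight = [(0, 0), (1, 0), (2, 0)]
bent     = [(0, 0), (1, 1), (2, 0)]
t_straight = tortuosity(straight)  # 1.0: no excess bending
t_bent = tortuosity(bent)          # sqrt(2) ~ 1.414
```

The ratio is exactly 1 for a perfectly straight vessel and grows with bending, which makes it a convenient normalized feature to aggregate (or weight, as the paper does) across a vascular tree.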
Affiliation(s)
- Álvaro S Hervella, Lucía Ramos, José Rouco, Jorge Novo, Marcos Ortega: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; Grupo VARPA, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
9.
Shi D, He S, Yang J, Zheng Y, He M. One-shot Retinal Artery and Vein Segmentation via Cross-modality Pretraining. Ophthalmol Sci 2024; 4:100363. PMID: 37868792; PMCID: PMC10585631; DOI: 10.1016/j.xops.2023.100363.
Abstract
Purpose To perform one-shot retinal artery and vein segmentation with cross-modality artery-vein (AV) soft-label pretraining. Design Cross-sectional study. Subjects The study included 6479 color fundus photography (CFP) and arterial-venous fundus fluorescein angiography (FFA) pairs from 1964 participants for pretraining, and 6 AV segmentation data sets with various image sources (RITE, HRF, LES-AV, AV-WIDE, PortableAV, and DRSplusAV) for one-shot finetuning and testing. Methods We structurally matched the arterial and venous phases of FFA with CFP; AV soft labels were generated automatically by exploiting the fluorescein intensity difference between the arterial- and venous-phase FFA images, and the soft labels were then used to train a generative adversarial network to generate AV soft segmentations from CFP images. We then finetuned the pretrained model to perform AV segmentation using only one image from each of the AV segmentation data sets and tested on the remainder. To investigate the effect and reliability of one-shot finetuning, we conducted experiments without finetuning, and by finetuning the pretrained model on an iteratively different single image for each data set under the same experimental setting and testing on the remaining images. Main Outcome Measures AV segmentation was assessed by area under the receiver operating characteristic curve (AUC), accuracy, Dice score, sensitivity, and specificity. Results After FFA-AV soft-label pretraining, our method required only one exemplar image from each camera or modality and achieved performance similar to full-data training, with AUC ranging from 0.901 to 0.971, accuracy from 0.959 to 0.980, Dice score from 0.585 to 0.773, sensitivity from 0.574 to 0.763, and specificity from 0.981 to 0.991. Compared with no finetuning, segmentation performance improved after one-shot finetuning. When finetuned on a different image in each data set, the standard deviation of the segmentation results across models ranged from 0.001 to 0.10. Conclusions This study presents the first one-shot approach to retinal artery and vein segmentation. The proposed labeling method is time-saving and efficient, demonstrating a promising direction for retinal-vessel segmentation and enabling the potential for widespread application. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Danli Shi
- Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China
- The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Shuang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Jiancheng Yang
- Swiss Federal Institute of Technology in Lausanne (EPFL), Lausanne, Switzerland
| | - Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Mingguang He
- Centre for Eye and Vision Research (CEVR), Hong Kong SAR, China
- The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
10
Van Eijgen J, Fhima J, Billen Moulin-Romsée MI, Behar JA, Christinaki E, Stalmans I. Leuven-Haifa High-Resolution Fundus Image Dataset for Retinal Blood Vessel Segmentation and Glaucoma Diagnosis. Sci Data 2024; 11:257. [PMID: 38424105 PMCID: PMC10904846 DOI: 10.1038/s41597-024-03086-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2023] [Accepted: 02/21/2024] [Indexed: 03/02/2024] Open
Abstract
The Leuven-Haifa dataset contains 240 disc-centered fundus images of 224 unique patients (75 patients with normal tension glaucoma, 63 with high tension glaucoma, 30 with other eye diseases, and 56 healthy controls) from the University Hospitals of Leuven. The arterioles and venules in these images were annotated by master's students in medicine and corrected by a senior annotator. All senior segmentation corrections are provided, as well as the junior segmentations of the test set. An open-source toolbox for the parametrization of segmentations was developed. Diagnosis, age, sex, vascular parameters, and a quality score are provided as metadata. Potential reuse includes the development or external validation of blood vessel segmentation algorithms, the study of the vasculature in glaucoma, and the development of glaucoma diagnosis algorithms. The dataset is available on the KU Leuven Research Data Repository (RDR).
Affiliation(s)
- Jan Van Eijgen
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000, Leuven, Belgium
- Department of Ophthalmology, University Hospitals UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
- Jonathan Fhima
- Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Department of Applied Mathematics, Technion-IIT, Haifa, Israel
- Joachim A Behar
- Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Eirini Christinaki
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000, Leuven, Belgium
- Ingeborg Stalmans
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000, Leuven, Belgium
- Department of Ophthalmology, University Hospitals UZ Leuven, Herestraat 49, 3000, Leuven, Belgium
11
Prethija G, Katiravan J. EAMR-Net: A multiscale effective spatial and cross-channel attention network for retinal vessel segmentation. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:4742-4761. [PMID: 38549347 DOI: 10.3934/mbe.2024208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/02/2024]
Abstract
Delineation of retinal vessels in fundus images is essential for detecting a range of eye disorders. An automated technique for vessel segmentation can assist clinicians and enhance the efficiency of the diagnostic process. Traditional methods fail to extract multiscale information, to discard irrelevant information, and to delineate thin vessels. In this paper, a novel residual U-Net architecture that incorporates multi-scale feature learning and effective attention is proposed to delineate retinal vessels precisely. Since DropBlock regularization prevents overfitting better than dropout, DropBlock was used in this study. A multi-scale feature learning module was added in place of a skip connection to learn multi-scale features. A novel effective attention block was proposed and integrated with the decoder block to obtain precise spatial and channel information. Experimental findings indicated that the proposed model exhibited outstanding performance in retinal vessel delineation. The sensitivities achieved on the DRIVE, STARE, and CHASE_DB datasets were 0.8293, 0.8151, and 0.8084, respectively.
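DropBlock, mentioned above as a stronger regularizer than dropout, zeroes contiguous regions of a feature map rather than independent pixels, which suits spatially correlated vessel features. A naive NumPy sketch of the idea (illustrative only; real implementations operate on batched tensors during training, and `block_size`/`drop_prob` here are arbitrary):

```python
import numpy as np

def drop_block(x, block_size=3, drop_prob=0.1, rng=None):
    """Naive DropBlock for a 2-D feature map: zero square regions around
    randomly chosen seed pixels, then rescale to preserve the expected sum."""
    rng = np.random.default_rng(rng)
    h, w = x.shape
    # Seed probability chosen so the expected dropped fraction ~ drop_prob.
    gamma = drop_prob / (block_size ** 2)
    seeds = rng.random((h, w)) < gamma
    mask = np.ones_like(x, dtype=float)
    half = block_size // 2
    for i, j in zip(*np.nonzero(seeds)):
        mask[max(0, i - half):i + half + 1, max(0, j - half):j + half + 1] = 0.0
    kept = mask.mean()
    if kept > 0:
        x = x * mask / kept  # rescale like inverted dropout
    return x, mask
```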
Affiliation(s)
- G Prethija
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Jeevaa Katiravan
- Department of Information Technology, Velammal Engineering College, Chennai 600066, India
12
Sun K, Chen Y, Dong F, Wu Q, Geng J, Chen Y. Retinal vessel segmentation method based on RSP-SA Unet network. Med Biol Eng Comput 2024; 62:605-620. [PMID: 37964177 DOI: 10.1007/s11517-023-02960-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2023] [Accepted: 10/28/2023] [Indexed: 11/16/2023]
Abstract
Segmenting retinal vessels plays a significant role in the diagnosis of fundus disorders. However, retinal vessel segmentation methods face two problems. First, fine-grained features of thin blood vessels are difficult to extract. Second, details of blood vessel edges are easily lost. To solve these problems, the Residual SimAM Pyramid-Spatial Attention Unet (RSP-SA Unet) is proposed, in which the encoding, decoding, and upsampling layers of the Unet are improved. Firstly, the proposed RSP structure, which approximates a residual structure combined with SimAM and Pyramid Segmentation Attention (PSA), is applied to the encoding and decoding parts to extract multi-scale spatial information and important cross-dimensional features at a finer level. Secondly, spatial attention (SA) is used in the upsampling layer to perform multi-attention mapping on the input feature map, which enhances the segmentation of small blood vessels with low contrast. Finally, the RSP-SA Unet is verified on the CHASE_DB1, DRIVE, and STARE datasets, on which its segmentation accuracy (ACC) reaches 0.9763, 0.9704, and 0.9724, respectively, and its area under the ROC curve (AUC) reaches 0.9896, 0.9858, and 0.9906, respectively. The overall performance of the RSP-SA Unet is better than that of the comparison methods.
Affiliation(s)
- Kun Sun
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, National Experimental, Harbin University of Science and Technology, Harbin, China
- Yang Chen
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, National Experimental, Harbin University of Science and Technology, Harbin, China
- Fuxuan Dong
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, National Experimental, Harbin University of Science and Technology, Harbin, China
- Qing Wu
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, National Experimental, Harbin University of Science and Technology, Harbin, China
- Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin, China
- Jiameng Geng
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, National Experimental, Harbin University of Science and Technology, Harbin, China
- Yinsheng Chen
- The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentation of Heilongjiang Province, Harbin University of Science and Technology, Harbin, China
- Teaching Demonstration Center for Measurement and Control Technology and Instrumentation, National Experimental, Harbin University of Science and Technology, Harbin, China
13
Li ZZ, Zhao W, Mao Y, Bo D, Chen Q, Kojodjojo P, Zhang F. A machine learning approach to differentiate wide QRS tachycardia: distinguishing ventricular tachycardia from supraventricular tachycardia. J Interv Card Electrophysiol 2024:10.1007/s10840-024-01743-9. [PMID: 38246906 DOI: 10.1007/s10840-024-01743-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Accepted: 01/07/2024] [Indexed: 01/23/2024]
Abstract
BACKGROUND Differential diagnosis of wide QRS tachycardia (WQCT) has been a challenging issue. Published algorithms to distinguish ventricular tachycardia (VT) from supraventricular tachycardia (SVT) have limited diagnostic capabilities. METHODS A total of 278 patients with WQCT from January 2010 to March 2022 were enrolled. The electrophysiological study confirmed SVT in 154 patients and VT in 65. Two hundred nineteen WQCT 12-lead ECGs were randomly divided into development cohort (n = 165) and testing cohort (n = 54) data sets. The development cohort was split into a training group (n = 115) and an internal validation group (n = 50). Forty ECG features extracted from the 219 WQCT ECGs were fed into nine iteratively trained machine learning (ML) algorithms. The novel ML algorithm was also compared with four published algorithms. RESULTS In the development cohort, the Gradient Boosting Machine (GBM) model displayed the maximum area under the curve (AUC) (0.91, 95% confidence interval (CI) 0.81-1.00). In the testing cohort, the GBM model had a higher AUC of 0.97 compared to four validated ECG algorithms, namely the Brugada (0.68), avR (0.62), RWPTII (0.72), and LLA (0.70) algorithms. Accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of the GBM model were 0.94, 0.97, 0.90, 0.94, and 0.95, respectively. CONCLUSIONS A GBM ML model contributes to distinguishing SVT from VT based on surface ECG features. In addition, we were able to identify important indicators for distinguishing WQCT.
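As a rough sketch of this kind of pipeline (not the study's code or data), a gradient-boosting classifier can be trained on tabular ECG-style features and evaluated by AUC; the eight features and the labeling rule below are synthetic stand-ins for the paper's forty ECG features:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 8))                    # hypothetical ECG-derived features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy rule: 1 = "VT", 0 = "SVT"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

With real data, the development/validation/testing split described in the abstract would replace the single hold-out split used here.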
Affiliation(s)
- Zhen-Zhen Li
- Section of Pacing and Electrophysiology, Division of Cardiology, First Affiliated Hospital of Nanjing Medical University, Guangzhou Road 300, Nanjing, 210006, Jiangsu, China
- Department of Cardiology, Nanjing BenQ Medical Center, The Affiliated BenQ Hospital of Nanjing Medical University, Nanjing, 210021, Jiangsu, China
- Wei Zhao
- Section of Pacing and Electrophysiology, Division of Cardiology, First Affiliated Hospital of Nanjing Medical University, Guangzhou Road 300, Nanjing, 210006, Jiangsu, China
- YangMing Mao
- Section of Pacing and Electrophysiology, Division of Cardiology, First Affiliated Hospital of Nanjing Medical University, Guangzhou Road 300, Nanjing, 210006, Jiangsu, China
- Dan Bo
- Section of Pacing and Electrophysiology, Division of Cardiology, First Affiliated Hospital of Nanjing Medical University, Guangzhou Road 300, Nanjing, 210006, Jiangsu, China
- QiuShi Chen
- Section of Pacing and Electrophysiology, Division of Cardiology, First Affiliated Hospital of Nanjing Medical University, Guangzhou Road 300, Nanjing, 210006, Jiangsu, China
- FengXiang Zhang
- Section of Pacing and Electrophysiology, Division of Cardiology, First Affiliated Hospital of Nanjing Medical University, Guangzhou Road 300, Nanjing, 210006, Jiangsu, China
14
Jiang Y, Chen J, Yan W, Zhang Z, Qiao H, Wang M. MAG-Net : Multi-fusion network with grouped attention for retinal vessel segmentation. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:1938-1958. [PMID: 38454669 DOI: 10.3934/mbe.2024086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/09/2024]
Abstract
Retinal vessel segmentation plays a vital role in the clinical diagnosis of ophthalmic diseases. Despite convolutional neural networks (CNNs) excelling in this task, challenges persist, such as restricted receptive fields and information loss from downsampling. To address these issues, we propose a new multi-fusion network with grouped attention (MAG-Net). First, we introduce a hybrid convolutional fusion module in place of the original encoding block to learn more feature information by expanding the receptive field. Additionally, the grouped attention enhancement module uses high-level features to guide low-level features and facilitates detailed information transmission through skip connections. Finally, the multi-scale feature fusion module aggregates features at different scales, effectively reducing information loss during decoder upsampling. To evaluate the performance of the MAG-Net, we conducted experiments on three widely used retinal datasets: DRIVE, CHASE and STARE. Specifically, the MAG-Net achieved segmentation accuracy values of 0.9708, 0.9773 and 0.9743, specificity values of 0.9836, 0.9875 and 0.9906, and Dice coefficients of 0.8576, 0.8069 and 0.8228, respectively. These results demonstrate that our method outperforms existing segmentation methods.
Affiliation(s)
- Yun Jiang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Jie Chen
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Wei Yan
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Zequn Zhang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Hao Qiao
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
- Meiqi Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
15
Song Y, Zou J, Choi KS, Lei B, Qin J. Cell classification with worse-case boosting for intelligent cervical cancer screening. Med Image Anal 2024; 91:103014. [PMID: 37913578 DOI: 10.1016/j.media.2023.103014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2023] [Revised: 10/10/2023] [Accepted: 10/20/2023] [Indexed: 11/03/2023]
Abstract
Cell classification underpins intelligent cervical cancer screening, a cytology examination that effectively decreases both the morbidity and mortality of cervical cancer. This task, however, is rather challenging, mainly due to the difficulty of collecting a training dataset sufficiently representative of the unseen test data, as cells' appearance and shape vary widely across cancerous statuses. This difficulty makes the classifier, though trained properly, often misclassify cells that are underrepresented by the training dataset, eventually leading to a wrong screening result. To address this, we propose a new learning algorithm, called worse-case boosting, so that classifiers can learn effectively from under-representative datasets in cervical cell classification. The key idea is to learn more from worse-case data, for which the classifier has a larger gradient norm than for other training data and which are therefore more likely to be underrepresented, by dynamically assigning such data more training iterations and larger loss weights to boost the generalizability of the classifier on underrepresented data. We achieve this by sampling worse-case data according to the gradient norm information and then enhancing their loss values to update the classifier. We demonstrate the effectiveness of this new learning algorithm on two publicly available cervical cell classification datasets (the two largest ones to the best of our knowledge), and extensive experiments yield positive results (4% accuracy improvement). The source codes are available at: https://github.com/YouyiSong/Worse-Case-Boosting.
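The core idea of worse-case boosting, sampling training examples in proportion to their per-example gradient norm so that hard or underrepresented points are revisited more often, can be sketched on a toy logistic-regression problem. This is a simplified illustration of the sampling principle, not the paper's implementation:

```python
import numpy as np

def worse_case_sgd(X, y, epochs=50, batch=16, lr=0.5, seed=0):
    """Toy logistic regression trained with 'worse-case' sampling:
    examples are drawn with probability proportional to their current
    per-example gradient norm, so hard points are seen more often."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        # Per-example gradient norm for logistic loss: |p - y| * ||x||.
        gnorm = np.abs(p - y) * np.linalg.norm(X, axis=1)
        probs = (gnorm + 1e-8) / (gnorm + 1e-8).sum()
        idx = rng.choice(len(y), size=batch, p=probs)
        pb = 1.0 / (1.0 + np.exp(-X[idx] @ w))
        w -= lr * (X[idx].T @ (pb - y[idx])) / batch
    return w
```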
Affiliation(s)
- Youyi Song
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Zou
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Kup-Sze Choi
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Baiying Lei
- Marshall Laboratory of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen, China
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
16
Yuan G, Zhai Y, Tang J, Zhou X. Selection of HBV key reactivation factors based on maximum information coefficient combined with cosine similarity. Technol Health Care 2024; 32:749-763. [PMID: 37393455 DOI: 10.3233/thc-230161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/03/2023]
Abstract
BACKGROUND Hepatitis B Virus (HBV) reactivation is the most common complication for patients with primary liver cancer (PLC) after radiotherapy. How to reduce HBV reactivation has been a hot topic in the study of postoperative radiotherapy for liver cancer. OBJECTIVE To identify the triggers of HBV reactivation, a feature selection algorithm (MIC-CS) using the maximum information coefficient (MIC) combined with cosine similarity (CS) was proposed to screen the risk factors that may affect HBV reactivation. METHOD Firstly, different factors were coded and the MIC between patients was calculated to acquire the association between different factors and HBV reactivation. Secondly, a cosine similarity algorithm was constructed to calculate the similarity between different factors, thus removing redundant information. Finally, combining the weights of the two, the potential risk factors were sorted and the key factors leading to HBV reactivation were selected. RESULTS The results indicated that HBV baseline, external boundary, TNM, KPS score, VD, AFP, and Child-Pugh could lead to HBV reactivation after radiotherapy. A classification model was constructed for the above factors, with a highest classification accuracy of 84% and an AUC value of 0.71. CONCLUSION Comparing multiple feature selection methods, the results showed that the effect of MIC-CS was significantly better than that of MIM, CMIM, and mRMR, so it has a very broad application prospect.
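A toy version of the MIC-CS idea can be sketched as: score each feature by its relevance to the target minus its redundancy with other features, where redundancy is the maximum absolute cosine similarity to another feature. Note the assumptions here: a simple histogram-based mutual information stands in for MIC, and the linear weighting `alpha` is an illustrative choice, not the paper's exact formulation:

```python
import numpy as np

def binned_mi(x, y, bins=8):
    """Histogram mutual information (a crude stand-in for MIC)."""
    c_xy, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = c_xy / c_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def rank_features(X, y, alpha=0.7):
    """Rank features by alpha * relevance - (1 - alpha) * redundancy."""
    Xc = X - X.mean(axis=0)
    norms = np.linalg.norm(Xc, axis=0)
    cos = (Xc.T @ Xc) / np.outer(norms, norms)
    np.fill_diagonal(cos, 0.0)
    redundancy = np.abs(cos).max(axis=1)
    relevance = np.array([binned_mi(X[:, j], y) for j in range(X.shape[1])])
    score = alpha * relevance - (1 - alpha) * redundancy
    return np.argsort(-score)  # best feature first
```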
Affiliation(s)
- Gaoteng Yuan
- College of Computer and Information, Hohai University, Nanjing, Jiangsu, China
- Yi Zhai
- Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, Shandong, China
- Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan, Shandong, China
- Jiansong Tang
- College of Computer and Information, Hohai University, Nanjing, Jiangsu, China
- Xiaofeng Zhou
- College of Computer and Information, Hohai University, Nanjing, Jiangsu, China
17
Naz H, Nijhawan R, Ahuja NJ, Saba T, Alamri FS, Rehman A. Micro-segmentation of retinal image lesions in diabetic retinopathy using energy-based fuzzy C-Means clustering (EFM-FCM). Microsc Res Tech 2024; 87:78-94. [PMID: 37681440 DOI: 10.1002/jemt.24413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Revised: 08/06/2023] [Accepted: 08/24/2023] [Indexed: 09/09/2023]
Abstract
Diabetic retinopathy (DR) is a prevalent cause of global visual impairment, contributing to approximately 4.8% of blindness cases worldwide as reported by the World Health Organization (WHO). The condition is characterized by pathological abnormalities in the retinal layer, including microaneurysms, vitreous hemorrhages, and exudates. Microscopic analysis of retinal images is crucial in diagnosing and treating DR. This article proposes a novel method for early DR screening using segmentation and unsupervised learning techniques. The approach integrates a neural network energy-based model into the Fuzzy C-Means (FCM) algorithm to enhance the convergence criteria, aiming to improve the accuracy and efficiency of automated DR screening tools. The evaluation includes the primary dataset from the Shiva Netralaya Centre as well as IDRiD and DIARETDB1. The performance of the proposed method is compared against the FCM, EFCM, FLICM, and M-FLICM techniques, using metrics such as accuracy under noiseless and noisy conditions and average execution time. The results show promising performance on both primary and secondary datasets, with accuracy rates of 99.03% on noiseless images and 93.13% on noisy images, and an average execution time of 16.1 s. The proposed method holds significant potential in medical image analysis and could pave the way for future advancements in automated DR diagnosis and management. RESEARCH HIGHLIGHTS: A novel approach integrates a neural network energy-based model into the FCM algorithm to enhance the convergence criteria and the accuracy of automated DR screening tools. By leveraging the microscopic characteristics of retinal images, the proposed method significantly improves the accuracy of lesion segmentation, facilitating early detection and monitoring of DR. The method's performance is evaluated on the primary dataset from the Shiva Netralaya Centre and on IDRiD and DIARETDB1, demonstrating its effectiveness compared with the FCM, EFCM, FLICM, and M-FLICM techniques under both noiseless and noisy conditions.
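For reference, the plain Fuzzy C-Means algorithm that the proposed energy-based variant extends alternates between membership and centroid updates until convergence. A minimal NumPy sketch of standard FCM (without the paper's neural-network energy term):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain FCM: X is (n, d); returns (c, d) centers and (n, c) memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)          # random fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < 1e-6:     # converged
            u = u_new
            break
        u = u_new
    return centers, u
```

In image segmentation, `X` would be per-pixel intensity or feature vectors, and each cluster corresponds to a tissue or lesion class.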
Affiliation(s)
- Huma Naz
- Department of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
- Rahul Nijhawan
- Thapar Institute of Engineering and Technology, Patiala, Punjab, India
- Neelu Jyothi Ahuja
- Department of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
- Tanzila Saba
- Artificial Intelligence and Data Analytics Lab, Prince Sultan University, Riyadh, Saudi Arabia
- Faten S Alamri
- Department of Mathematical Sciences, College of Science, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence and Data Analytics Lab, Prince Sultan University, Riyadh, Saudi Arabia
18
Hu J, Qiu L, Wang H, Zhang J. Semi-supervised point consistency network for retinal artery/vein classification. Comput Biol Med 2024; 168:107633. [PMID: 37992471 DOI: 10.1016/j.compbiomed.2023.107633] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2023] [Revised: 10/02/2023] [Accepted: 10/23/2023] [Indexed: 11/24/2023]
Abstract
Recent deep learning methods based on convolutional neural networks (CNNs) have advanced medical image analysis and expedited automatic retinal artery/vein (A/V) classification. However, these CNN-based approaches face two challenges: (1) the specific tubular structures and subtle variations in appearance, contrast, and geometry tend to be lost as the number of network layers increases; (2) well-labeled data for supervised segmentation of retinal vessels are limited, which may hinder the effectiveness of deep learning methods. To address these issues, we propose a novel semi-supervised point consistency network (SPC-Net) for retinal A/V classification. SPC-Net consists of an A/V classification (AVC) module and a multi-class point consistency (MPC) module. The AVC module adopts an encoder-decoder segmentation network to generate the A/V prediction probability map for supervised learning. The MPC module introduces point set representations to adaptively generate point set classification maps of the arteriovenous skeleton, whose prediction flexibility and consistency (i.e., point consistency) effectively alleviate arteriovenous confusion. In addition, we propose a consistency regularization between the predicted A/V classification probability maps and the point set representation maps for unlabeled data, exploiting the inherent segmentation perturbation of the point consistency to reduce the need for annotated data. We validate our method on two typical public datasets (DRIVE, HRF) and a private dataset (TR280) with different resolutions. Extensive qualitative and quantitative experimental results demonstrate the effectiveness of our proposed method for supervised and semi-supervised learning.
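The general semi-supervised recipe described here, a supervised loss on labeled data plus a consistency penalty between two predictions on unlabeled data, reduces to a single scalar objective. A minimal sketch (the MSE consistency term and the λ weighting are generic choices for illustration, not necessarily the paper's exact formulation):

```python
import numpy as np

def semi_supervised_loss(p_labeled, y, p_unlab_a, p_unlab_b, lam=0.5):
    """Binary cross-entropy on labeled pixels + MSE consistency between two
    predictions (e.g., a dense map and a point-set head) on unlabeled data."""
    eps = 1e-12
    ce = -np.mean(y * np.log(p_labeled + eps)
                  + (1 - y) * np.log(1 - p_labeled + eps))
    consistency = np.mean((p_unlab_a - p_unlab_b) ** 2)
    return ce + lam * consistency
```

During training, the consistency term pushes the two heads toward agreement on unlabeled images, which is where the extra supervision signal comes from.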
Affiliation(s)
- Jingfei Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Linwei Qiu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China; Hefei Innovation Research Institute, Beihang University, Hefei, 230012, Anhui, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, 100083, China
19
Mahapatra S, Agrawal S, Mishro PK, Panda R, Dora L, Pachori RB. A Review on Retinal Blood Vessel Enhancement and Segmentation Techniques for Color Fundus Photography. Crit Rev Biomed Eng 2024; 52:41-69. [PMID: 37938183 DOI: 10.1615/critrevbiomedeng.2023049348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2023]
Abstract
The retinal image is a trusted modality in biomedical image-based diagnosis of many ophthalmologic and cardiovascular diseases. Periodic examination of the retina can help in spotting these abnormalities at an early stage. However, to deal with today's large population, computerized retinal image analysis is preferred over manual inspection. The precise extraction of the retinal vessels is the first and decisive step for clinical applications. Every year, many more articles are added to the literature describing new algorithms for the problem at hand. Most existing review articles, however, are restricted to a fairly small number of approaches, assessment indices, and databases. In this context, a comprehensive review of different vessel extraction methods is inevitable. It includes the development of a first-hand classification of these methods. A bibliometric analysis of these articles is also presented. The benefits and drawbacks of the most commonly used techniques are summarized. The primary challenges, as well as the scope of possible changes, are discussed. To enable a fair comparison, numerous assessment indices are considered. The findings of this survey could provide a new path for researchers for further work in this domain.
Affiliation(s)
- Sakambhari Mahapatra
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Sanjay Agrawal
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Pranaba K Mishro
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Rutuparna Panda
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Lingraj Dora
- Department of Electrical and Electronics Engineering, Veer Surendra Sai University of Technology, Burla, India
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore, India
20
Popovic N, Ždralević M, Vujosevic S, Radunović M, Adžić Zečević A, Rovčanin Dragović I, Vukčević B, Popovic T, Radulović L, Vuković T, Eraković J, Lazović R, Radunović M. Retinal microvascular complexity as a putative biomarker of biological age: a pilot study. Biogerontology 2023; 24:971-985. [PMID: 37572202 DOI: 10.1007/s10522-023-10057-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2023] [Accepted: 07/27/2023] [Indexed: 08/14/2023]
Abstract
Physiological changes associated with aging increase the risk for the development of age-related diseases. This increase is non-specific to the type of age-related disease, although each disease develops through a unique pathophysiologic mechanism. People who age at a faster rate develop age-related diseases earlier in their life. They have an older "biological age" compared to their "chronological age". Early detection of individuals with accelerated aging would allow timely intervention to postpone the onset of age-related diseases. This would increase their life expectancy and their length of good quality life. The goal of this study was to investigate whether retinal microvascular complexity could be used as a biomarker of biological age. Retinal images of 68 participants with ages ranging from 19 to 82 years were collected in an observational cross-sectional study. Twenty of the older participants had age-related diseases such as hypertension, type 2 diabetes, and/or Alzheimer's dementia; the rest were healthy. Retinal images were captured with a hand-held, non-mydriatic fundus camera, and microvascular complexity was quantified using Sholl's analysis, box-counting fractal analysis, and lacunarity analysis. In the healthy subjects, increasing chronological age was associated with lower retinal microvascular complexity as measured by Sholl's analysis. Box-counting fractal dimension was decreased in old patients, and this decrease was 2.1 times faster in participants who had age-related diseases (p = 0.047). Retinal microvascular complexity could be a promising new biomarker of biological age. The data from this study are the first of this kind collected in Montenegro and are freely available for use.
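The box-counting fractal dimension used here measures how the number of occupied boxes N(s) scales with box size s: the dimension is the slope of log N(s) versus log(1/s). A minimal NumPy sketch over dyadic box sizes (illustrative, not the study's tooling), applied to a binary vessel mask:

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a binary image by box counting:
    slope of log N(s) vs log(1/s) over dyadic box sizes s."""
    mask = np.asarray(mask, dtype=bool)
    n = min(mask.shape)
    sizes, counts = [], []
    s = 1
    while s <= n // 2:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())  # occupied s x s boxes
        sizes.append(s)
        s *= 2
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]
```

A filled square yields a dimension near 2 and a single line near 1; retinal vasculature typically falls in between, which is why a decrease can signal vessel rarefaction.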
Affiliation(s)
- Natasa Popovic
- Faculty of Medicine, University of Montenegro, Podgorica, Montenegro
- Maša Ždralević
- Institute for Advanced Studies, University of Montenegro, Podgorica, Montenegro
- Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
- Eye Clinic, IRCCS MultiMedica, Milan, Italy
- Antoaneta Adžić Zečević
- Faculty of Medicine, University of Montenegro, Podgorica, Montenegro
- Clinical Center of Montenegro, Podgorica, Montenegro
- Tomo Popovic
- Faculty for Information Systems and Technologies, University of Donja Gorica, Podgorica, Montenegro
- Ljiljana Radulović
- Faculty of Medicine, University of Montenegro, Podgorica, Montenegro
- Clinical Center of Montenegro, Podgorica, Montenegro
- Ranko Lazović
- Faculty of Medicine, University of Montenegro, Podgorica, Montenegro
- Clinical Center of Montenegro, Podgorica, Montenegro
- Miodrag Radunović
- Faculty of Medicine, University of Montenegro, Podgorica, Montenegro
- Clinical Center of Montenegro, Podgorica, Montenegro
21
Ryu J, Rehman MU, Nizami IF, Chong KT. SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation. Comput Biol Med 2023; 163:107132. [PMID: 37343468 DOI: 10.1016/j.compbiomed.2023.107132]
Abstract
Retinal vessel segmentation is an important task in medical image analysis with a variety of applications in the diagnosis and treatment of retinal diseases. In this paper, we propose SegR-Net, a deep learning framework for robust retinal vessel segmentation. SegR-Net combines feature extraction and embedding, deep feature magnification, feature precision and interference, and dense multiscale feature fusion to generate accurate segmentation masks. The model consists of an encoder module that extracts high-level features from the input images and a decoder module that reconstructs the segmentation masks by combining features from the encoder module. The encoder module consists of a feature extraction and embedding block enhanced by dense multiscale feature fusion, followed by a deep feature magnification (DFM) block that magnifies the retinal vessels. To further improve the quality of the extracted features, a group of two convolutional layers is used after each DFM block. In the decoder module, a feature precision and interference block and a dense multiscale feature fusion (DMFF) block combine features from the encoder module and reconstruct the segmentation mask. Data augmentation and pre-processing techniques are also incorporated to improve the generalization of the trained model. Experimental results on three publicly available fundus image datasets (CHASE_DB1, STARE, and DRIVE) demonstrate that SegR-Net outperforms state-of-the-art models in terms of accuracy, sensitivity, specificity, and F1 score. The proposed framework provides more accurate and more efficient segmentation of retinal blood vessels than state-of-the-art techniques, which is essential for clinical decision-making and diagnosis of various eye diseases.
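The accuracy, sensitivity, specificity, and F1 metrics reported by this and several other entries below are computed pixel-wise from the confusion matrix of a predicted mask against a ground-truth mask. A minimal sketch (the helper name is illustrative, not from any of the papers):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise sensitivity, specificity, accuracy and F1 for binary masks."""
    pred = np.asarray(pred, dtype=bool).ravel()
    truth = np.asarray(truth, dtype=bool).ravel()
    tp = np.sum(pred & truth)    # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    fp = np.sum(pred & ~truth)   # background mistaken for vessel
    fn = np.sum(~pred & truth)   # vessel pixels missed
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / pred.size,
        "f1": 2 * tp / (2 * tp + fp + fn),
    }
```

Because thin vessels contribute few pixels, high accuracy can coexist with modest sensitivity, which is why the papers report all four numbers.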
Affiliation(s)
- Jihyoung Ryu
- Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea
- Mobeen Ur Rehman
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, Republic of Korea
- Imran Fareed Nizami
- Department of Electrical Engineering, Bahria University, Islamabad, Pakistan
- Kil To Chong
- Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea; Advanced Electronics and Information Research Center, Jeonbuk National University, Jeonju 54896, Republic of Korea
22
Freiberg J, Welikala RA, Rovelt J, Owen CG, Rudnicka AR, Kolko M, Barman SA. Automated analysis of vessel morphometry in retinal images from a Danish high street optician setting. PLoS One 2023; 18:e0290278. [PMID: 37616264 PMCID: PMC10449151 DOI: 10.1371/journal.pone.0290278]
Abstract
PURPOSE To evaluate the test performance of the QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) software in detecting retinal features from retinal images captured by health care professionals in a Danish high street optician chain, compared with test performance from other large population studies (i.e., UK Biobank) where retinal images were captured by non-experts. METHOD The dataset FOREVERP (Finding Ophthalmic Risk and Evaluating the Value of Eye exams and their predictive Reliability, Pilot) contains retinal images obtained from a Danish high street optician chain. The QUARTZ algorithm utilizes both image processing and machine learning methods to determine retinal image quality, vessel segmentation, vessel width, vessel classification (arterioles or venules), and optic disc localization. Outcomes were evaluated by metrics including sensitivity, specificity, and accuracy and compared to human expert ground truths. RESULTS QUARTZ's performance was evaluated on a subset of 3,682 images from the FOREVERP database. 80.55% of the FOREVERP images were labelled as being of adequate quality compared to 71.53% of UK Biobank images, with a vessel segmentation sensitivity of 74.64% and specificity of 98.41% (FOREVERP) compared with a sensitivity of 69.12% and specificity of 98.88% (UK Biobank). The mean (± standard deviation) vessel width of the ground truth was 16.21 (4.73) pixels compared to that predicted by QUARTZ of 17.01 (4.49) pixels, resulting in a difference of -0.8 (1.96) pixels. The differences were stable across a range of vessels. The detection rate for optic disc localisation was similar for the two datasets. CONCLUSION QUARTZ showed high performance when evaluated on the FOREVERP dataset, and demonstrated robustness across datasets, providing validity to direct comparisons and pooling of retinal feature measures across data sources.
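The vessel-width agreement reported above (ground truth 16.21 ± 4.73 px vs. predicted 17.01 ± 4.49 px, difference −0.8 ± 1.96 px) is a mean-and-SD-of-paired-differences summary, in the spirit of a Bland-Altman analysis. A generic sketch (the helper name is an assumption, not QUARTZ's API):

```python
import numpy as np

def width_agreement(ground_truth, predicted):
    """Mean and sample SD of paired differences between ground-truth and
    predicted vessel widths (a Bland-Altman-style agreement summary)."""
    d = np.asarray(ground_truth, float) - np.asarray(predicted, float)
    return d.mean(), d.std(ddof=1)
```

A mean difference near zero with a small SD indicates that the automated widths track the expert annotations without systematic bias.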
Affiliation(s)
- Josefine Freiberg
- Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Roshan A. Welikala
- School of Computer Science and Mathematics, Kingston University, Surrey, United Kingdom
- Jens Rovelt
- Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Christopher G. Owen
- Population Health Research Institute, St. George’s, University of London, London, United Kingdom
- Alicja R. Rudnicka
- Population Health Research Institute, St. George’s, University of London, London, United Kingdom
- Miriam Kolko
- Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Department of Ophthalmology, Copenhagen University Hospital, Rigshospitalet, Glostrup, Copenhagen, Denmark
- Sarah A. Barman
- School of Computer Science and Mathematics, Kingston University, Surrey, United Kingdom
23
Abdushkour H, Soomro TA, Ali A, Ali Jandan F, Jelinek H, Memon F, Althobiani F, Mohammed Ghonaim S, Irfan M. Enhancing fine retinal vessel segmentation: Morphological reconstruction and double thresholds filtering strategy. PLoS One 2023; 18:e0288792. [PMID: 37467245 DOI: 10.1371/journal.pone.0288792]
Abstract
Eye diseases such as diabetic retinopathy progress with various changes in the retinal vessels, making the disease difficult to analyze for future treatment. Many computerized algorithms have been implemented for retinal vessel segmentation, but tiny vessels are often dropped, impacting overall algorithm performance. This work combines image processing techniques such as enhancement filters, coherence filters, and binary thresholding to handle the problems of color retinal fundus images and achieve a well-segmented vessel image, with improved performance over existing work. Our technique incorporates morphological operations to address the central light reflex issue. To resolve insufficient and varying contrast, it employs homomorphic methods and Wiener filtering. Coherence filters address the coherence of the retinal vessels, and a double-thresholding technique is then applied with image reconstruction to achieve a correctly segmented vessel image. The technique was evaluated on the STARE and DRIVE datasets, achieving an accuracy of about 0.96 and a sensitivity of 0.81. This performance demonstrates the method's capability for use by ophthalmology experts to diagnose ocular abnormalities and recommend further treatment.
Collapse
Affiliation(s)
- Hesham Abdushkour
- Nautical Science Deptartment, Faculty of Maritime, King Abdul Aziz University, Jeddah, Saudia Arabia
| | - Toufique A Soomro
- Department of Electronic Engineering, Quaid-e-Awam University of Engineering, Science and Technology Larkana Campus, Sukkur, Pakistan
| | - Ahmed Ali
- Eletrical Engineering Department, Sukkur IBA University, Sukkur, Pakistan
| | - Fayyaz Ali Jandan
- Eletrical Engineering Department, Quaid-e-Awam University of Engineering, Science and Technology Larkana Campus, Sukkur, Pakistan
| | - Herbert Jelinek
- Health Engineering Innovation Center and biotechnology Center, Khalifa University, Abu Dhabi, UAE
| | - Farida Memon
- Department of Electronic Engineering, Mehran University, Janshoro, Jamshoro, Pakistan
| | - Faisal Althobiani
- Marine Engineering Department, Faculty of Maritime, King Abdul Aziz University, Jeddah, Saudia Arabia
| | - Saleh Mohammed Ghonaim
- Marine Engineering Department, Faculty of Maritime, King Abdul Aziz University, Jeddah, Saudia Arabia
| | - Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran, Saudi Arabia
| |
Collapse
|
24
|
Retinal image blood vessel classification using hybrid deep learning in cataract diseased fundus images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/12/2023]
|
25
|
Xue CC, Li C, Hu JF, Wei CC, Wang H, Ahemaitijiang K, Zhang Q, Chen DN, Zhang C, Li F, Zhang J, Jonas JB, Wang YX. Retinal vessel caliber and tortuosity and prediction of 5-year incidence of hypertension. J Hypertens 2023; 41:830-837. [PMID: 36883461 DOI: 10.1097/hjh.0000000000003406] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
Abstract
PURPOSE With arterial hypertension as a global risk factor for cerebrovascular and cardiovascular diseases, we examined whether retinal blood vessel caliber and tortuosity assessed by a vessel-constraint network model can predict the incidence of hypertension. METHODS The community-based prospective study included 9230 individuals who were followed for 5 years. Ocular fundus photographs taken at baseline were analyzed by a vessel-constraint network model. RESULTS Within the 5-year follow-up, 1279 (18.8%) and 474 (7.0%) participants out of 6813 individuals free of hypertension at baseline developed hypertension and severe hypertension, respectively. In multivariable analysis, a higher incidence of hypertension was related to a narrower retinal arteriolar diameter ( P < 0.001), wider venular diameter ( P = 0.005), and a smaller arteriole-to-venule diameter ratio ( P < 0.001) at baseline. Individuals with the 5% narrowest arteriole or the 5% widest venule diameter had a 17.1-fold [95% confidence interval (CI):7.9, 37.2] or 2.3-fold (95% CI: 1.4, 3.7) increased risk for developing hypertension, as compared with those with the 5% widest arteriole or the 5% narrowest venule. The area under the receiver operator characteristic curve for predicting the 5-year incidence of hypertension and severe hypertension was 0.791 (95% CI: 0.778, 0.804) and 0.839 (95% CI: 0.821, 0.856), respectively. Although the venular tortuosity was positively associated with the presence of hypertension at baseline ( P = 0.01), neither arteriolar tortuosity nor venular tortuosity was associated with incident hypertension (both P ≥ 0.10). CONCLUSION AND RELEVANCE Narrower retinal arterioles and wider venules indicate an increased risk for incident hypertension within 5 years, while tortuous retinal venules are associated with the presence rather than the incidence of hypertension. 
The automatic assessment of retinal vessel features performed well in identifying individuals at risk of developing hypertension.
Collapse
Affiliation(s)
- Can C Xue
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory
- Department of Ophthalmology, Peking University Third Hospital
| | - Cai Li
- School of Biological Science and Medical Engineering, Beihang University, Beijing
- Hefei Innovation Research Institute, Beihang University, Hefei
| | - Jing F Hu
- School of Biological Science and Medical Engineering, Beihang University, Beijing
- Hefei Innovation Research Institute, Beihang University, Hefei
| | - Chuan C Wei
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory
| | - Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing
- Hefei Innovation Research Institute, Beihang University, Hefei
| | - Kailimujiang Ahemaitijiang
- School of Biological Science and Medical Engineering, Beihang University, Beijing
- Hefei Innovation Research Institute, Beihang University, Hefei
| | - Qi Zhang
- Eye Center, the 2nd Affiliated Hospital, Medical College of Zhejiang University, Hangzhou
| | - Dong N Chen
- Department of Physical Examination, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Chun Zhang
- Department of Ophthalmology, Peking University Third Hospital
| | - Fan Li
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory
| | - Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing
- Hefei Innovation Research Institute, Beihang University, Hefei
| | - Jost B Jonas
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory
- Department of Ophthalmology, Medical Faculty Mannheim, Heidelberg University, Mannheim
- Privatpraxis Prof Jonas und Dr Panda-Jonas, Heidelberg, Germany
- Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
| | - Ya X Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory
| |
Collapse
|
26
|
Kv R, Prasad K, Peralam Yegneswaran P. Segmentation and Classification Approaches of Clinically Relevant Curvilinear Structures: A Review. J Med Syst 2023; 47:40. [PMID: 36971852 PMCID: PMC10042761 DOI: 10.1007/s10916-023-01927-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 02/25/2023] [Indexed: 03/29/2023]
Abstract
Detection of curvilinear structures from microscopic images, which help the clinicians to make an unambiguous diagnosis is assuming paramount importance in recent clinical practice. Appearance and size of dermatophytic hyphae, keratitic fungi, corneal and retinal vessels vary widely making their automated detection cumbersome. Automated deep learning methods, endowed with superior self-learning capacity, have superseded the traditional machine learning methods, especially in complex images with challenging background. Automatic feature learning ability using large input data with better generalization and recognition capability, but devoid of human interference and excessive pre-processing, is highly beneficial in the above context. Varied attempts have been made by researchers to overcome challenges such as thin vessels, bifurcations and obstructive lesions in retinal vessel detection as revealed through several publications reviewed here. Revelations of diabetic neuropathic complications such as tortuosity, changes in the density and angles of the corneal fibers have been successfully sorted in many publications reviewed here. Since artifacts complicate the images and affect the quality of analysis, methods addressing these challenges have been described. Traditional and deep learning methods, that have been adapted and published between 2015 and 2021 covering retinal vessels, corneal nerves and filamentous fungi have been summarized in this review. We find several novel and meritorious ideas and techniques being put to use in the case of retinal vessel segmentation and classification, which by way of cross-domain adaptation can be utilized in the case of corneal and filamentous fungi also, making suitable adaptations to the challenges to be addressed.
Collapse
Affiliation(s)
- Rajitha Kv
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
| | - Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India.
| | - Prakash Peralam Yegneswaran
- Department of Microbiology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
| |
Collapse
|
27
|
Kerimkhan B, Nedzved A, Zhumadillayeva A, Dyussekeyev K, Uskenbayeva G, Sultanova B, Rzayeva L. Automation of flow analysis in scleral vessels based on descriptive-associative algorithms. Sci Rep 2023; 13:4650. [PMID: 36944724 PMCID: PMC10030867 DOI: 10.1038/s41598-023-31866-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 03/20/2023] [Indexed: 03/23/2023] Open
Abstract
Blood flow reflects the eye's health and is disrupted in many diseases. Many pathological processes take place at the cellular level like as microcirculation of blood in vessels, and the processing of medical images is a difficult recognition task. Existing techniques for measuring blood flow are limited due to the complex assumptions, equipment and calculations requirements. In this paper, we propose a method for determining the blood flow characteristics in eye conjunctiva vessels, such as linear and volumetric blood speed and topological characteristics of the vascular net. The method preprocesses the video to improve the conditions of analysis and then builds an integral optical flow for definition of flow dynamical characteristic of eye vessels. These characteristics make it possible to determine changes in blood flow in eye vessels. We show the efficiency of our method in natural eye vessel scenes. The research provides valuable insights to novices with limited experience in the diagnosis and can serve as a valuable tool for experienced medical professionals.
Collapse
Affiliation(s)
- Bekzhan Kerimkhan
- Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana, 010000, Kazakhstan
| | - Alexander Nedzved
- Department of Computer Applications and Systems, Belarusian State University, 220004, Minsk, Belarus
| | - Ainur Zhumadillayeva
- Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana, 010000, Kazakhstan.
| | - Kanagat Dyussekeyev
- Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana, 010000, Kazakhstan
| | - Gulzhan Uskenbayeva
- Faculty of Information Technologies, L.N. Gumilyov Eurasian National University, Astana, 010000, Kazakhstan
| | - Bakhyt Sultanova
- Faculty of Innovative Technologies, Karaganda Technical University, Karaganda, 100000, Kazakhstan
| | - Leila Rzayeva
- Department of Computer Engineering, Astana IT University, Astana, 010000, Kazakhstan
| |
Collapse
|
28
|
End-to-End Automatic Classification of Retinal Vessel Based on Generative Adversarial Networks with Improved U-Net. Diagnostics (Basel) 2023; 13:diagnostics13061148. [PMID: 36980456 PMCID: PMC10047448 DOI: 10.3390/diagnostics13061148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Revised: 03/07/2023] [Accepted: 03/13/2023] [Indexed: 03/19/2023] Open
Abstract
The retinal vessels in the human body are the only ones that can be observed directly by non-invasive imaging techniques. Retinal vessel morphology and structure are the important objects of concern for physicians in the early diagnosis and treatment of related diseases. The classification of retinal vessels has important guiding significance in the basic stage of diagnostic treatment. This paper proposes a novel method based on generative adversarial networks with improved U-Net, which can achieve synchronous automatic segmentation and classification of blood vessels by an end-to-end network. The proposed method avoids the dependency of the segmentation results in the multiple classification tasks. Moreover, the proposed method builds on an accurate classification of arteries and veins while also classifying arteriovenous crossings. The validity of the proposed method is evaluated on the RITE dataset: the accuracy of image comprehensive classification reaches 96.87%. The sensitivity and specificity of arteriovenous classification reach 91.78% and 97.25%. The results verify the effectiveness of the proposed method and show the competitive classification performance.
Collapse
|
29
|
Challoob M, Gao Y, Busch A, Nikzad M. Separable Paravector Orientation Tensors for Enhancing Retinal Vessels. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:880-893. [PMID: 36331638 DOI: 10.1109/tmi.2022.3219436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Robust detection of retinal vessels remains an unsolved research problem, particularly in handling the intrinsic real-world challenges of highly imbalanced contrast between thick vessels and thin ones, inhomogeneous background regions, uneven illumination, and complex geometries of crossing/bifurcations. This paper presents a new separable paravector orientation tensor that addresses these difficulties by characterizing the enhancement of retinal vessels to be dependent on a nonlinear scale representation, invariant to changes in contrast and lighting, responsive for symmetric patterns, and fitted with elliptical cross-sections. The proposed method is built on projecting vessels as a 3D paravector valued function rotated in an alpha quarter domain, providing geometrical, structural, symmetric, and energetic features. We introduce an innovative symmetrical inhibitory scheme that incorporates paravector features for producing a set of directional contrast-independent elongated-like patterns reconstructing vessel tree in orientation tensors. By fitting constraint elliptical volumes via eigensystem analysis, the final vessel tree is produced with a strong and uniform response preserving various vessel features. The validation of proposed method on clinically relevant retinal images with high-quality results, shows its excellent performance compared to the state-of-the-art benchmarks and the second human observers.
Collapse
|
30
|
Liu M, Wang Z, Li H, Wu P, Alsaadi FE, Zeng N. AA-WGAN: Attention augmented Wasserstein generative adversarial network with application to fundus retinal vessel segmentation. Comput Biol Med 2023; 158:106874. [PMID: 37019013 DOI: 10.1016/j.compbiomed.2023.106874] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 03/15/2023] [Accepted: 03/30/2023] [Indexed: 04/03/2023]
Abstract
In this paper, a novel attention augmented Wasserstein generative adversarial network (AA-WGAN) is proposed for fundus retinal vessel segmentation, where a U-shaped network with attention augmented convolution and squeeze-excitation module is designed to serve as the generator. In particular, the complex vascular structures make some tiny vessels hard to segment, while the proposed AA-WGAN can effectively handle such imperfect data property, which is competent in capturing the dependency among pixels in the whole image to highlight the regions of interests via the applied attention augmented convolution. By applying the squeeze-excitation module, the generator is able to pay attention to the important channels of the feature maps, and the useless information can be suppressed as well. In addition, gradient penalty method is adopted in the WGAN backbone to alleviate the phenomenon of generating large amounts of repeated images due to excessive concentration on accuracy. The proposed model is comprehensively evaluated on three datasets DRIVE, STARE, and CHASE_DB1, and the results show that the proposed AA-WGAN is a competitive vessel segmentation model as compared with several other advanced models, which obtains the accuracy of 96.51%, 97.19% and 96.94% on each dataset, respectively. The effectiveness of the applied important components is validated by ablation study, which also endows the proposed AA-WGAN with considerable generalization ability.
Collapse
|
31
|
Wisaeng K. Retinal Blood-Vessel Extraction Using Weighted Kernel Fuzzy C-Means Clustering and Dilation-Based Functions. Diagnostics (Basel) 2023; 13:diagnostics13030342. [PMID: 36766446 PMCID: PMC9914389 DOI: 10.3390/diagnostics13030342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Revised: 01/04/2023] [Accepted: 01/09/2023] [Indexed: 01/19/2023] Open
Abstract
Automated blood-vessel extraction is essential in diagnosing Diabetic Retinopathy (DR) and other eye-related diseases. However, the traditional methods for extracting blood vessels tend to provide low accuracy when dealing with difficult situations, such as extracting both micro and large blood vessels simultaneously with low-intensity images and blood vessels with DR. This paper proposes a complete preprocessing method to enhance original retinal images before transferring the enhanced images to a novel blood-vessel extraction method by a combined three extraction stages. The first stage focuses on the fast extraction of retinal blood vessels using Weighted Kernel Fuzzy C-Means (WKFCM) Clustering to draw the vessel feature from the retinal background. The second stage focuses on the accuracy of full-size images to achieve regional vessel feature recognition of large and micro blood vessels and to minimize false extraction. This stage implements the mathematical dilation operator from a trained model called Dilation-Based Function (DBF). Finally, an optimal parameter threshold is empirically determined in the third stage to remove non-vessel features in the binary image and improve the overall vessel extraction results. According to evaluations of the method via the datasets DRIVE, STARE, and DiaretDB0, the proposed WKFCM-DBF method achieved sensitivities, specificities, and accuracy performances of 98.12%, 98.20%, and 98.16%, 98.42%, 98.80%, and 98.51%, and 98.89%, 98.10%, and 98.09%, respectively.
Collapse
Affiliation(s)
- Kittipol Wisaeng
- Technology and Business Information System Unit, Mahasarakham Business School, Mahasarakham University, Mahasarakham 44150, Thailand
| |
Collapse
|
32
|
Oliveira B, Torres HR, Morais P, Veloso F, Baptista AL, Fonseca JC, Vilaça JL. A multi-task convolutional neural network for classification and segmentation of chronic venous disorders. Sci Rep 2023; 13:761. [PMID: 36641527 PMCID: PMC9840616 DOI: 10.1038/s41598-022-27089-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Accepted: 12/26/2022] [Indexed: 01/16/2023] Open
Abstract
Chronic Venous Disorders (CVD) of the lower limbs are one of the most prevalent medical conditions, affecting 35% of adults in Europe and North America. Due to the exponential growth of the aging population and the worsening of CVD with age, it is expected that the healthcare costs and the resources needed for the treatment of CVD will increase in the coming years. The early diagnosis of CVD is fundamental in treatment planning, while the monitoring of its treatment is fundamental to assess a patient's condition and quantify the evolution of CVD. However, correct diagnosis relies on a qualitative approach through visual recognition of the various venous disorders, being time-consuming and highly dependent on the physician's expertise. In this paper, we propose a novel automatic strategy for the joint segmentation and classification of CVDs. The strategy relies on a multi-task deep learning network, denominated VENet, that simultaneously solves segmentation and classification tasks, exploiting the information of both tasks to increase learning efficiency, ultimately improving their performance. The proposed method was compared against state-of-the-art strategies in a dataset of 1376 CVD images. Experiments showed that the VENet achieved a classification performance of 96.4%, 96.4%, and 97.2% for accuracy, precision, and recall, respectively, and a segmentation performance of 75.4%, 76.7.0%, 76.7% for the Dice coefficient, precision, and recall, respectively. The joint formulation increased the robustness of both tasks when compared to the conventional classification or segmentation strategies, proving its added value, mainly for the segmentation of small lesions.
Collapse
Affiliation(s)
- Bruno Oliveira
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal; LASI - Associate Laboratory of Intelligent Systems, 4800-058, Guimarães, Portugal
- Helena R Torres
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal; LASI - Associate Laboratory of Intelligent Systems, 4800-058, Guimarães, Portugal
- Pedro Morais
- 2Ai - School of Technology, IPCA, Barcelos, Portugal; LASI - Associate Laboratory of Intelligent Systems, 4800-058, Guimarães, Portugal
- Fernando Veloso
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal; LASI - Associate Laboratory of Intelligent Systems, 4800-058, Guimarães, Portugal; Department of Mechanical Engineering, School of Engineering, University of Minho, Guimarães, Portugal
- Jaime C Fonseca
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; LASI - Associate Laboratory of Intelligent Systems, 4800-058, Guimarães, Portugal
- João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal; LASI - Associate Laboratory of Intelligent Systems, 4800-058, Guimarães, Portugal
33
Zou H, Shi S, Yang X, Ma J, Fan Q, Chen X, Wang Y, Zhang M, Song J, Jiang Y, Li L, He X, Jhanji V, Wang S, Song M, Wang Y. Identification of ocular refraction based on deep learning algorithm as a novel retinoscopy method. Biomed Eng Online 2022; 21:87. [PMID: 36528597 PMCID: PMC9758840 DOI: 10.1186/s12938-022-01057-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Accepted: 12/05/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND The evaluation of refraction is indispensable in ophthalmic clinics, generally requiring a refractor or retinoscopy under cycloplegia. Retinal fundus photographs (RFPs) supply a wealth of information related to the human eye and might provide a more convenient and objective approach. Here, we aimed to develop and validate a fusion model-based deep learning system (FMDLS) to identify ocular refraction via RFPs and compare it with cycloplegic refraction. In this population-based comparative study, we retrospectively collected 11,973 RFPs from May 1, 2020 to November 20, 2021. The performance of the regression models for sphere and cylinder was evaluated using mean absolute error (MAE). The accuracy, sensitivity, specificity, area under the receiver operating characteristic curve, and F1-score were used to evaluate the classification model of the cylinder axis. RESULTS Overall, 7873 RFPs were retained for analysis. For sphere and cylinder, the MAE values between the FMDLS and cycloplegic refraction were 0.50 D and 0.31 D, representing improvements of 29.41% and 26.67%, respectively, over the single models. The correlation coefficients (r) were 0.949 and 0.807, respectively. For axis analysis, the accuracy, specificity, sensitivity, and area under the curve value of the classification model were 0.89, 0.941, 0.882, and 0.814, respectively, and the F1-score was 0.88. CONCLUSIONS The FMDLS successfully identified the ocular refraction in sphere, cylinder, and axis, and showed good agreement with the cycloplegic refraction. The RFPs can provide not only comprehensive fundus information but also the refractive state of the eye, highlighting their potential clinical value.
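The mean absolute error used here to score the sphere and cylinder regressions has a simple definition. A minimal sketch with made-up dioptre values (not the study's data):

```python
import numpy as np

def mean_absolute_error(pred, truth):
    """MAE: mean of |prediction - ground truth| over all samples."""
    pred, truth = np.asarray(pred, dtype=float), np.asarray(truth, dtype=float)
    return float(np.mean(np.abs(pred - truth)))

# Hypothetical sphere predictions (dioptres) vs. cycloplegic ground truth.
pred = [-1.25, -2.00, 0.50, -3.75]
truth = [-1.00, -2.50, 0.25, -3.50]
mae = mean_absolute_error(pred, truth)  # (0.25 + 0.50 + 0.25 + 0.25) / 4 = 0.3125 D
```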
Affiliation(s)
- Haohan Zou
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China
- Shenda Shi
- School of Computer Science, School of National Pilot Software Engineering, Beijing University of Posts and Telecommunications, 10 Xitucheng Road, Hai-Dian District, Beijing, 100876, China; HuaHui Jian AI Tech Ltd., Tianjin, China
- Xiaoyan Yang
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China; Tianjin Eye Hospital Optometric Center, Tianjin, China
- Jiaonan Ma
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China
- Qian Fan
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China
- Xuan Chen
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China
- Yibing Wang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China
- Mingdong Zhang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China
- Jiaxin Song
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China
- Yanglin Jiang
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China; Tianjin Eye Hospital Optometric Center, Tianjin, China
- Lihua Li
- Tianjin Eye Hospital Optometric Center, Tianjin, China
- Xin He
- HuaHui Jian AI Tech Ltd., Tianjin, China
- Vishal Jhanji
- UPMC Eye Center, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Shengjin Wang
- HuaHui Jian AI Tech Ltd., Tianjin, China; Department of Electronic Engineering, Tsinghua University, Beijing, China
- Meina Song
- School of Computer Science, School of National Pilot Software Engineering, Beijing University of Posts and Telecommunications, 10 Xitucheng Road, Hai-Dian District, Beijing, 100876, China; HuaHui Jian AI Tech Ltd., Tianjin, China
- Yan Wang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin, 300020, China; Nankai University Eye Institute, Nankai University, Tianjin, China
34
Li H, Tang Z, Nan Y, Yang G. Human treelike tubular structure segmentation: A comprehensive review and future perspectives. Comput Biol Med 2022; 151:106241. [PMID: 36379190 DOI: 10.1016/j.compbiomed.2022.106241] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Revised: 09/16/2022] [Accepted: 10/22/2022] [Indexed: 12/27/2022]
Abstract
Various structures in human physiology follow a treelike morphology, which often expresses complexity at very fine scales. Examples of such structures are intrathoracic airways, retinal blood vessels, and hepatic blood vessels. Large collections of 2D and 3D images have been made available by medical imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), optical coherence tomography (OCT), and ultrasound, in which the spatial arrangement can be observed. Segmentation of these structures in medical imaging is of great importance since the analysis of the structure provides insights into disease diagnosis, treatment planning, and prognosis. Manually labelling extensive data by radiologists is often time-consuming and error-prone. As a result, the development of automated and semi-automated computational models has become a popular research area in medical imaging over the past two decades, and many such models have been proposed to date. In this survey, we aim to provide a comprehensive review of currently publicly available datasets, segmentation algorithms, and evaluation metrics. In addition, current challenges and future research directions are discussed.
Affiliation(s)
- Hao Li
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom; Department of Bioengineering, Faculty of Engineering, Imperial College London, London, United Kingdom
- Zeyu Tang
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom; Department of Bioengineering, Faculty of Engineering, Imperial College London, London, United Kingdom
- Yang Nan
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom
- Guang Yang
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, United Kingdom; Royal Brompton Hospital, London, United Kingdom
35
DuPont M, Hunsicker J, Shirley S, Warriner W, Rowland A, Taylor R, DuPont M, Lagatuz M, Yilmaz T, Foderaro A, Lahm T, Ventetuolo CE, Grant MB. Comparison of Retinal Imaging Techniques in Individuals with Pulmonary Artery Hypertension Using Vessel Generation Analysis. Life (Basel) 2022; 12:1985. [PMID: 36556350 PMCID: PMC9781977 DOI: 10.3390/life12121985] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 11/16/2022] [Accepted: 11/23/2022] [Indexed: 11/29/2022] Open
Abstract
(1) Background: Retinal vascular imaging plays an essential role in diagnosing and managing chronic diseases such as diabetic retinopathy, sickle cell retinopathy, and systemic hypertension. Previously, we have shown that individuals with pulmonary arterial hypertension (PAH), a rare disorder, exhibit unique retinal vascular changes as seen using fluorescein angiography (FA) and that these changes correlate with PAH severity. This study aimed to determine if color fundus (CF) imaging could garner identical retinal information as previously seen using FA images in individuals with PAH. (2) Methods: VESGEN, computer software that provides detailed vascular pattern analysis, was used to compare manual segmentations of FA to CF imaging in PAH subjects (n = 9), followed by deep learning (DL) processing of CF imaging to increase the speed of analysis and facilitate a noninvasive clinical translation. (3) Results: When manual segmentations of FA and CF images were compared using VESGEN analysis, both showed identical tortuosity and vessel area density measures. This remained true even when separating images based on arterial trees only. However, this was not observed with microvessels. DL segmentation, when compared to manual segmentation of CF images, showed similarities in vascular structure as defined by fractal dimension. Similarities were lost for tortuosity and vessel area density when comparing manual CF imaging to DL imaging. (4) Conclusions: Noninvasive imaging such as CF can be used with VESGEN to provide an accurate and safe assessment of retinal vascular changes in individuals with PAH, in addition to providing insight into possible future clinical translational use.
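Two of the metrics compared in this study, vessel area density and tortuosity, have simple standard definitions. The sketch below uses the generic definitions (area fraction of vessel pixels; arc length over chord length of a centreline), which are illustrative assumptions and not necessarily VESGEN's exact implementation:

```python
import numpy as np

def vessel_area_density(mask):
    """Fraction of the image area covered by segmented vessel pixels."""
    mask = np.asarray(mask, dtype=bool)
    return mask.sum() / mask.size

def tortuosity(points):
    """Arc length over chord length of a vessel centreline (1.0 = straight)."""
    p = np.asarray(points, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    chord = np.linalg.norm(p[-1] - p[0])
    return arc / chord

mask = np.zeros((10, 10), dtype=bool)
mask[4:6, :] = True                     # a 2-pixel-wide horizontal "vessel"
vad = vessel_area_density(mask)         # 20 / 100 = 0.2

straight = [(0, 0), (1, 0), (2, 0)]     # straight centreline: tortuosity 1.0
bent = [(0, 0), (1, 1), (2, 0)]         # detour above the chord: tortuosity > 1.0
```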
Affiliation(s)
- Mariana DuPont
- Department of Optometry and Vision Science, School of Optometry, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- John Hunsicker
- Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Simona Shirley
- Department of Political Science and Public Administration, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- William Warriner
- Research Computing, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Annabelle Rowland
- Heersink School of Medicine, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Reddhyia Taylor
- Department of Osteopathic Medicine, The Philadelphia College of Osteopathic Medicine, Philadelphia, PA 19131, USA
- Michael DuPont
- Department of Optometry and Vision Science, School of Optometry, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Mark Lagatuz
- Redline Performance Solutions, Ames Research Center, National Aeronautics and Space Administration, Moffett Field, Mountain View, CA 94043, USA
- Taygan Yilmaz
- Division of Ophthalmology, Department of Surgery, Alpert Medical School of Brown University, Providence, RI 02903, USA
- Andrew Foderaro
- Division of Pulmonary, Critical Care and Sleep Medicine, Department of Medicine, Alpert Medical School of Brown University, Providence, RI 02903, USA
- Tim Lahm
- Department of Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, National Jewish Health, Denver, CO 80206, USA; Department of Medicine, Division of Pulmonary Sciences and Critical Care Medicine, University of Colorado, Aurora, CO 80045, USA; Rocky Mountain Regional VA Medical Center, Aurora, CO 80045, USA
- Corey E. Ventetuolo
- Department of Health Services, Policy and Practice, Brown University School of Public Health, Providence, RI 02903, USA
- Maria B. Grant
- Department of Optometry and Vision Science, School of Optometry, University of Alabama at Birmingham, Birmingham, AL 35294, USA
36
RADCU-Net: residual attention and dual-supervision cascaded U-Net for retinal blood vessel segmentation. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-022-01715-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
37
Yi Y, Guo C, Hu Y, Zhou W, Wang W. BCR-UNet: Bi-directional ConvLSTM residual U-Net for retinal blood vessel segmentation. Front Public Health 2022; 10:1056226. [PMID: 36483248 PMCID: PMC9722738 DOI: 10.3389/fpubh.2022.1056226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Accepted: 11/04/2022] [Indexed: 11/23/2022] Open
Abstract
Background High-precision segmentation of retinal blood vessels from retinal images is a significant step for doctors to diagnose many diseases such as glaucoma and cardiovascular diseases. However, previous U-Net-based segmentation methods often fail to preserve the low-contrast tiny vessels at the peripheral regions of the vasculature. Methods To address this challenge, we propose a novel network model called Bi-directional ConvLSTM Residual U-Net (BCR-UNet), which takes full advantage of U-Net, Dropblock, residual convolution, and Bi-directional ConvLSTM (BConvLSTM). In this proposed BCR-UNet model, we propose a novel Structured Dropout Residual Block (SDRB), instead of the original U-Net convolutional block, to construct our network skeleton and improve the robustness of the network. Furthermore, to improve the discriminative ability of the network and preserve more original semantic information of tiny vessels, we adopt BConvLSTM to integrate the feature maps captured from the first residual block and the last up-convolutional layer in a nonlinear manner. Results and discussion We conduct experiments on four public retinal blood vessel datasets, and the results show that the proposed BCR-UNet can preserve more tiny blood vessels at the low-contrast peripheral regions, even outperforming previous state-of-the-art methods.
Affiliation(s)
- Yugen Yi
- School of Software, Jiangxi Normal University, Nanchang, China
- Changlu Guo
- Yichun Economic and Technological Development Zone, Yichun, China
- Yangtao Hu
- The 908th Hospital of Chinese People's Liberation Army Joint Logistic Support Force, Nanchang, China
- Wei Zhou
- College of Computer Science, Shenyang Aerospace University, Shenyang, China
- Wenle Wang
- School of Software, Jiangxi Normal University, Nanchang, China
38
Xu L, Zhu S, Wen N. Deep reinforcement learning and its applications in medical imaging and radiation therapy: a survey. Phys Med Biol 2022; 67. [PMID: 36270582 DOI: 10.1088/1361-6560/ac9cb3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Accepted: 10/21/2022] [Indexed: 11/07/2022]
Abstract
Reinforcement learning takes a sequential decision-making approach, learning a policy through trial and error based on interaction with the environment. Combining deep learning and reinforcement learning can empower the agent to learn the interactions and the distribution of rewards from state-action pairs to achieve effective and efficient solutions in more complex and dynamic environments. Deep reinforcement learning (DRL) has demonstrated astonishing performance, surpassing human-level performance in game domains and many other simulated environments. This paper introduces the basics of reinforcement learning and reviews various categories of DRL algorithms and DRL models developed for medical image analysis and radiation treatment planning optimization. We also discuss the current challenges of DRL and approaches proposed to make DRL more generalizable and robust in a real-world environment. DRL algorithms, through careful design of the reward function, agent interactions, and environment models, can resolve the challenges of scarce and heterogeneous annotated medical image data, which has been a major obstacle to implementing deep learning models in the clinic. DRL is an active research area with enormous potential to improve deep learning applications in medical imaging and radiation therapy planning.
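The trial-and-error policy learning described above can be illustrated with tabular Q-learning on a toy problem; deep RL replaces the table with a neural network. Everything below (the corridor environment, hyperparameters, and the uniformly random behaviour policy) is an illustrative assumption, not taken from the survey:

```python
import numpy as np

# Minimal tabular Q-learning on a 1-D corridor: states 0..3, goal at state 3,
# reward 1 only on reaching the goal. The behaviour policy is uniformly random
# (pure trial and error); because Q-learning is off-policy, the greedy policy
# read off the learned table still converges to "always move right".
n_states, n_actions = 4, 2               # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9                  # learning rate, discount factor
rng = np.random.default_rng(0)

for _ in range(500):                     # 500 training episodes
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Temporal-difference update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

policy = Q.argmax(axis=1)                # greedy policy per state
```

In this deterministic toy MDP the learned values approach Q*(2, right) = 1, Q*(1, right) = 0.9, Q*(0, right) = 0.81, so the greedy policy moves right from every non-goal state.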
Affiliation(s)
- Lanyu Xu
- Department of Computer Science and Engineering, Oakland University, Rochester, MI, United States of America
- Simeng Zhu
- Department of Radiation Oncology, Henry Ford Health Systems, Detroit, MI, United States of America
- Ning Wen
- Department of Radiology/The Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, People's Republic of China; The Global Institute of Future Technology, Shanghai Jiaotong University, Shanghai, People's Republic of China
39
Zhao W, Zhu R, Zhang J, Mao Y, Chen H, Ju W, Li M, Yang G, Gu K, Wang Z, Liu H, Shi J, Jiang X, Kojodjojo P, Chen M, Zhang F. Machine learning for distinguishing right from left premature ventricular contraction origin using surface electrocardiogram features. Heart Rhythm 2022; 19:1781-1789. [PMID: 35843464 DOI: 10.1016/j.hrthm.2022.07.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Revised: 06/30/2022] [Accepted: 07/11/2022] [Indexed: 12/24/2022]
Abstract
BACKGROUND Precise localization of the site of origin of premature ventricular contractions (PVCs) before ablation can facilitate the planning and execution of the electrophysiological procedure. OBJECTIVE The purpose of this study was to develop a predictive model that can be used to differentiate PVCs between the left ventricular outflow tract and right ventricular outflow tract (RVOT) using surface electrocardiogram characteristics. METHODS A total of 851 patients undergoing radiofrequency ablation of premature ventricular beats from January 2015 to March 2022 were enrolled. Ninety-two patients were excluded. The other 759 patients were enrolled into the development (n = 605), external validation (n = 104), or prospective cohort (n = 50). The development cohort consisted of the training group (n = 423) and the internal validation group (n = 182). Machine learning algorithms were used to construct predictive models for the origin of PVCs using body surface electrocardiogram features. RESULTS In the development cohort, the Random Forest model showed a maximum receiver operating characteristic curve area of 0.96. In the external validation cohort, the Random Forest model surpassed 4 previously reported algorithms in predictive performance (accuracy 94.23%; sensitivity 97.10%; specificity 88.57%). In the prospective cohort, the Random Forest model showed good performance (accuracy 94.00%; sensitivity 85.71%; specificity 97.22%). CONCLUSION The Random Forest algorithm improved the accuracy of distinguishing the origin of PVCs, surpassing 4 previous criteria, and could be used to identify the origin of PVCs before the interventional procedure.
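A Random Forest classifier over tabular ECG-derived features is a standard scikit-learn pattern. The sketch below is a generic illustration, not the study's pipeline: the features are synthetic stand-ins (e.g. per-lead amplitude measurements would be used in practice), and the class shift is fabricated so the toy problem is learnable.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for surface-ECG features (10 features, 400 patients).
X = rng.normal(size=(400, 10))
y = rng.integers(0, 2, size=400)         # 0 = LVOT origin, 1 = RVOT origin (labels assumed)
X[y == 1, :2] += 2.0                     # make class 1 separable on two features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)              # held-out accuracy on the synthetic task
```

Evaluating on a held-out split, as above, mirrors the study's development/validation cohort structure at a much smaller scale.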
Affiliation(s)
- Wei Zhao
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Rui Zhu
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Jian Zhang
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Yangming Mao
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Hongwu Chen
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Weizhu Ju
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Mingfang Li
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Gang Yang
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Kai Gu
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Zidun Wang
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Hailei Liu
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Jiaojiao Shi
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Xiaohong Jiang
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Pipin Kojodjojo
- Department of Cardiology, National University Heart Centre, Singapore
- Minglong Chen
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Fengxiang Zhang
- Section of Pacing and Electrophysiology, Division of Cardiology, the First Affiliated Hospital with Nanjing Medical University, Nanjing, China
40
Khandouzi A, Ariafar A, Mashayekhpour Z, Pazira M, Baleghi Y. Retinal Vessel Segmentation, a Review of Classic and Deep Methods. Ann Biomed Eng 2022; 50:1292-1314. [PMID: 36008569 DOI: 10.1007/s10439-022-03058-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Accepted: 08/15/2022] [Indexed: 11/01/2022]
Abstract
Retinal illnesses such as diabetic retinopathy (DR) are among the main causes of vision loss. In the early recognition of eye diseases, the segmentation of blood vessels in retina images plays an important role. Different symptoms of ocular diseases can be identified by the geometric features of ocular arteries. However, due to the complex construction of the blood vessels and their different thicknesses, segmenting the retina image is a challenging task. A number of algorithms have aided the detection of retinal diseases. This paper presents an overview of papers from 2016 to 2022 that discuss machine learning and deep learning methods for automatic vessel segmentation. The methods are divided into two groups: deep learning-based methods and classic methods. The algorithms, classifiers, pre-processing steps, and specific techniques of each group are described comprehensively. The performances of recent works are compared based on their achieved accuracy on different datasets in comprehensive tables. A survey of the most popular datasets, such as DRIVE, STARE, HRF, and CHASE_DB1, is also given in this paper. Finally, a list of findings from this review is presented in the conclusion section.
Affiliation(s)
- Ali Khandouzi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Ali Ariafar
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Zahra Mashayekhpour
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Milad Pazira
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Yasser Baleghi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
41
FIVES: A Fundus Image Dataset for Artificial Intelligence based Vessel Segmentation. Sci Data 2022; 9:475. [PMID: 35927290 PMCID: PMC9352679 DOI: 10.1038/s41597-022-01564-3] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 07/12/2022] [Indexed: 12/30/2022] Open
Abstract
Retinal vasculature provides an opportunity for direct observation of vessel morphology, which is linked to multiple clinical conditions. However, objective and quantitative interpretation of the retinal vasculature relies on precise vessel segmentation, which is time-consuming and labor-intensive. Artificial intelligence (AI) has demonstrated great promise in retinal vessel segmentation. The development and evaluation of AI-based models require large numbers of annotated retinal images. However, the public datasets that are usable for this task are scarce. In this paper, we collected a color fundus image vessel segmentation (FIVES) dataset. The FIVES dataset consists of 800 high-resolution multi-disease color fundus photographs with pixelwise manual annotation. The annotation process was standardized through crowdsourcing among medical experts. The quality of each image was also evaluated. To the best of our knowledge, this is the largest retinal vessel segmentation dataset to date, and we believe this work will benefit the further development of retinal vessel segmentation.
42
Fractal dimension of retinal vasculature as an image quality metric for automated fundus image analysis systems. Sci Rep 2022; 12:11868. [PMID: 35831401 PMCID: PMC9279448 DOI: 10.1038/s41598-022-16089-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 07/04/2022] [Indexed: 11/21/2022] Open
Abstract
Automated fundus screening is becoming a significant programme of telemedicine in ophthalmology. Instant quality evaluation of uploaded retinal images could reduce unreliable diagnoses. In this work, we propose the fractal dimension of the retinal vasculature as an easy, effective, and explainable indicator of retinal image quality. The pipeline of our approach is as follows: an image pre-processing technique standardizes input retinal images from possibly different sources to a uniform style; then, an improved deep learning-empowered vessel segmentation model is employed to extract retinal vessels from the pre-processed images; finally, a box-counting module measures the fractal dimension of the segmented vessel images. A small fractal threshold (a value between 1.45 and 1.50) indicates insufficient image quality. Our approach has been validated on 30,644 images from four public databases.
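Box counting, the final step of the pipeline above, estimates fractal dimension as the slope of log N(s) against log(1/s), where N(s) is the number of boxes of side s that contain at least one vessel pixel. A minimal sketch (the box sizes and test masks are illustrative choices, not the paper's configuration):

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary image by box counting:
    fit log N(s) against log(1/s) over a range of box sizes s."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Trim so the image tiles exactly into s x s boxes.
        view = mask[:h - h % s, :w - w % s]
        boxes = view.reshape(view.shape[0] // s, s, view.shape[1] // s, s)
        # Count boxes containing at least one foreground pixel.
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square is 2-dimensional, a single line 1-dimensional.
square = np.ones((64, 64), dtype=bool)
d = box_count_dimension(square)
```

A healthy, well-imaged vascular tree yields an intermediate dimension; per the abstract, estimates below roughly 1.45 flag images whose vessels were too poorly resolved to segment.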
43
Abstract
Topological and geometrical analysis of retinal blood vessels could be a cost-effective way to detect various common diseases. Automated vessel segmentation and vascular tree analysis models require powerful generalization capability in clinical applications. In this work, we constructed a novel benchmark, RETA, with 81 labelled vessel masks, aiming to facilitate retinal vessel analysis. A semi-automated coarse-to-fine workflow was proposed for the vessel annotation task. During database construction, we strove to control inter-annotator and intra-annotator variability by means of multi-stage annotation and label disambiguation on self-developed dedicated software. In addition to binary vessel masks, we obtained other types of annotations including artery/vein masks, vascular skeletons, bifurcations, trees, and abnormalities. Subjective and objective quality validations of the annotated vessel masks demonstrated significantly improved quality over the existing open datasets. Our annotation software is also made publicly available, serving the purpose of pixel-level vessel visualization. Researchers can develop vessel segmentation algorithms and evaluate segmentation performance using RETA. Moreover, it might promote the study of cross-modality tubular structure segmentation and analysis.
44
MCPANet: Multiscale Cross-Position Attention Network for Retinal Vessel Image Segmentation. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071357] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/10/2022] Open
Abstract
Accurate medical imaging segmentation of the retinal fundus vasculature is essential to assist physicians in diagnosis and treatment. In recent years, convolutional neural networks (CNNs) have been widely used to classify retinal blood vessel pixels for retinal blood vessel segmentation tasks. However, the convolutional block receptive field is limited, simple multiple superpositions tend to cause information loss, and there are limitations in feature extraction as well as vessel segmentation. To address these problems, this paper proposes a new retinal vessel segmentation network based on U-Net, called the multi-scale cross-position attention network (MCPANet). MCPANet uses multiple scales of input to compensate for image detail information and applies skip connections between encoding and decoding blocks to ensure information transfer while effectively reducing noise. We propose a cross-position attention module to link the positional relationships between pixels and obtain global contextual information, which enables the model to segment not only the fine capillaries but also clear vessel edges. At the same time, multiple-scale pooling operations are used to expand the receptive field and enhance feature extraction. This further reduces pixel classification errors and eases the segmentation difficulty caused by the asymmetry of fundus blood vessel distribution. We trained and validated our proposed model on three publicly available datasets, DRIVE, CHASE, and STARE, obtaining segmentation accuracies of 97.05%, 97.58%, and 97.68%, and Dice scores of 83.15%, 81.48%, and 85.05%, respectively. The results demonstrate that the proposed method achieves better segmentation performance than existing methods.
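The Dice score reported here (and throughout this list) has a standard definition for binary masks: twice the overlap divided by the total foreground of both masks. A minimal sketch with a toy 2x3 mask pair (not the paper's data):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
d = dice_score(pred, target)  # 2 * 2 / (3 + 3) = 0.666...
```

Unlike pixel accuracy, Dice is insensitive to the large vessel-free background, which is why it is preferred for thin-structure segmentation.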
|
45
|
Ye Y, Pan C, Wu Y, Wang S, Xia Y. MFI-Net: Multiscale Feature Interaction Network for Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2022; 26:4551-4562. [PMID: 35696471 DOI: 10.1109/jbhi.2022.3182471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Segmentation of retinal vessels on fundus images plays a critical role in the diagnosis of micro-vascular and ophthalmological diseases. Although extensively studied, this task remains challenging due to many factors, including highly variable vessel width and poor vessel-background contrast. In this paper, we propose a multiscale feature interaction network (MFI-Net) for retinal vessel segmentation: a U-shaped convolutional neural network equipped with a pyramid squeeze-and-excitation (PSE) module, a coarse-to-fine (C2F) module, deep supervision, and feature fusion. We extend the SE operator to multiscale features, resulting in the PSE module, which uses channel attention learned at multiple scales to enhance multiscale features and enables the network to handle vessels of variable width. We further design the C2F module to generate and re-process residual feature maps, aiming to preserve more vessel details during decoding. The proposed MFI-Net has been evaluated against several public models on the DRIVE, STARE, CHASE_DB1, and HRF datasets. Our results suggest that both the PSE and C2F modules are effective in improving the accuracy of MFI-Net, and indicate that our model has superior segmentation performance and generalization ability over existing models on the four public datasets.
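The PSE idea builds on the standard squeeze-and-excitation (SE) channel gate. The NumPy sketch below shows a plain SE gate and a naive "pyramid" variant that averages gates computed at two pooled scales; the weight shapes, scales, and pooling scheme are illustrative assumptions, not the authors' design.

```python
import numpy as np

def se_gate(feat, w1, w2):
    """Squeeze-and-excitation: global-average-pool each channel, pass the
    channel descriptor through a small bottleneck MLP, and sigmoid it
    into per-channel weights in (0, 1)."""
    z = feat.mean(axis=(1, 2))                  # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)                 # excitation bottleneck, ReLU
    return 1.0 / (1.0 + np.exp(-(w2 @ h)))     # per-channel gate

def pyramid_se(feat, w1, w2, scales=(1, 2)):
    """Average SE gates computed on subsampled copies of the input,
    loosely mirroring the 'pyramid' idea of attending at several scales."""
    gates = [se_gate(feat[:, ::s, ::s], w1, w2) for s in scales]
    gate = np.mean(gates, axis=0)
    return feat * gate[:, None, None]           # rescale channels
```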
|
46
|
Stergar J, Lakota K, Perše M, Tomšič M, Milanič M. Hyperspectral evaluation of vasculature in induced peritonitis mouse models. BIOMEDICAL OPTICS EXPRESS 2022; 13:3461-3475. [PMID: 35781958 PMCID: PMC9208583 DOI: 10.1364/boe.460288] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/28/2022] [Accepted: 05/08/2022] [Indexed: 06/15/2023]
Abstract
Imaging of blood vessel structure in combination with functional information about blood oxygenation can be important in characterizing many health conditions in which the growth of new vessels contributes to the overall condition. In this paper, we present a method for extracting comprehensive maps of the vasculature, including tissue and vascular oxygenation, from hyperspectral images, and we show results from a preclinical study of peritonitis in mice. First, we analyze hyperspectral images using the Beer-Lambert exponential attenuation law to obtain maps of hemoglobin species throughout the sample. We then use an automatic segmentation algorithm to extract blood vessels from the hemoglobin map and combine them into a vascular structure-oxygenation map. We apply this methodology to a series of hyperspectral images of the abdominal wall of mice with and without induced peritonitis. Peritonitis is an inflammation of the peritoneum that, if untreated, leads to complications such as peritoneal sclerosis and even death. The characteristic inflammatory response can also be accompanied by changes in vasculature, such as neoangiogenesis. We demonstrate a potential application of the proposed segmentation and processing method by introducing an abnormal tissue fraction metric that quantifies the amount of tissue deviating from the average values of healthy controls. The proposed metric successfully discriminates between healthy control subjects and model subjects with induced peritonitis with high statistical significance.
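The first step, Beer-Lambert unmixing, can be sketched as a per-pixel linear least-squares fit of absorbance against chromophore spectra. In the sketch below the extinction-coefficient matrix `E` is hypothetical; real values would come from tabulated oxy- and deoxy-hemoglobin spectra at the instrument's wavelengths.

```python
import numpy as np

# Hypothetical extinction coefficients at four wavelengths
# (columns: HbO2, Hb) -- placeholders, not tabulated values.
E = np.array([[0.9, 0.3],
              [0.6, 0.8],
              [0.3, 1.0],
              [1.1, 0.4]])

def unmix_hemoglobin(reflectance):
    """Per-pixel Beer-Lambert unmixing: absorbance = -log(reflectance) is
    modeled as a linear mix of chromophore spectra, solved by least squares.
    `reflectance` has shape (..., n_wavelengths); returns (..., 2)."""
    absorbance = -np.log(np.clip(reflectance, 1e-6, 1.0))
    flat = absorbance.reshape(-1, E.shape[0]).T          # (n_wl, n_pixels)
    conc, *_ = np.linalg.lstsq(E, flat, rcond=None)      # (2, n_pixels)
    return conc.T.reshape(*reflectance.shape[:-1], 2)

def oxygen_saturation(conc):
    """Fraction of oxygenated hemoglobin: HbO2 / (HbO2 + Hb)."""
    hbo2, hb = conc[..., 0], conc[..., 1]
    return hbo2 / np.maximum(hbo2 + hb, 1e-9)
```

A hemoglobin map built this way is what a vessel-segmentation step would then operate on.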
Affiliation(s)
- Jošt Stergar: J. Stefan Institute, Jamova cesta 39, 1000 Ljubljana, Slovenia; Faculty of Mathematics and Physics, University of Ljubljana, Jadranska ulica 19, 1000 Ljubljana, Slovenia
- Katja Lakota: FAMNIT, University of Primorska, Glagoljaska 8, 6000 Koper, Slovenia; University Medical Centre, Department of Rheumatology, Vodnikova ulica 62, 1000 Ljubljana, Slovenia
- Martina Perše: Faculty of Medicine, University of Ljubljana, Vrazov trg 2, 1000 Ljubljana, Slovenia
- Matija Tomšič: University Medical Centre, Department of Rheumatology, Vodnikova ulica 62, 1000 Ljubljana, Slovenia; Faculty of Medicine, University of Ljubljana, Vrazov trg 2, 1000 Ljubljana, Slovenia
- Matija Milanič: J. Stefan Institute, Jamova cesta 39, 1000 Ljubljana, Slovenia; Faculty of Mathematics and Physics, University of Ljubljana, Jadranska ulica 19, 1000 Ljubljana, Slovenia
|
47
|
Li M, Bai H, Zhang F, Zhou Y, Lin Q, Zhou Q, Feng Q, Zhang L. Automatic segmentation model of intercondylar fossa based on deep learning: a novel and effective assessment method for the notch volume. BMC Musculoskelet Disord 2022; 23:426. [PMID: 35524293 PMCID: PMC9074347 DOI: 10.1186/s12891-022-05378-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Accepted: 04/28/2022] [Indexed: 11/10/2022] Open
Abstract
Background Notch volume is associated with anterior cruciate ligament (ACL) injury. Manual tracing of the intercondylar notch on MR images is time-consuming and laborious. Deep learning has become a powerful tool for processing medical images. This study aims to develop a deep learning-based MRI segmentation model of the intercondylar fossa to automatically measure notch volume and to explore its correlation with ACL injury. Methods The MRI data of 363 subjects (311 males and 52 females) with ACL injuries incurred during non-contact sports and 232 subjects (147 males and 85 females) with intact ACLs were retrospectively analyzed. Each layer of the intercondylar fossa was manually traced by radiologists on axial MR images, and notch volume was then calculated. We constructed an automatic segmentation system for the intercondylar fossa based on the Res-UNet architecture and used the Dice similarity coefficient (DSC) to compare the performance of segmentation systems built on different networks. Unpaired t-tests were performed to determine differences in notch volume between ACL-injured and intact groups, and between males and females. Results The DSCs of the intercondylar fossa were above 0.90 for all networks, and Res-UNet showed the best performance. Notch volume was significantly lower in the ACL-injured group than in the control group (6.12 ± 1.34 cm3 vs. 6.95 ± 1.75 cm3, p < 0.001). Females had lower notch volume than males (5.41 ± 1.30 cm3 vs. 6.76 ± 1.51 cm3, p < 0.001). Males and females with ACL injuries had smaller notches than those with intact ACLs (p < 0.001 and p < 0.005), and males had larger notches than females regardless of ACL injury (p < 0.001). Conclusion Using a deep neural network to segment the intercondylar fossa automatically provides technical support for the clinical prediction and prevention of ACL injury and re-injury after surgery.
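The Dice similarity coefficient used to compare the networks is straightforward to compute for binary masks; a minimal NumPy version:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

For a 3D volume the same formula applies with the masks stacked slice by slice.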
Affiliation(s)
- Mifang Li: Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Department of Medical Imaging, Longgang Central Hospital of Shenzhen, 6082 Longgang Avenue, Longgang District, Shenzhen, 518116, Guangdong Province, China; Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University, 183 Zhongshan Avenue West, Tianhe District, Guangzhou, 510630, Guangdong Province, China
- Hanhua Bai: Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Department of Biomedical Engineering, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China
- Feiyuan Zhang: Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University, 183 Zhongshan Avenue West, Tianhe District, Guangzhou, 510630, Guangdong Province, China
- Yujia Zhou: Department of Biomedical Engineering, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China
- Qiuyu Lin: Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University, 183 Zhongshan Avenue West, Tianhe District, Guangzhou, 510630, Guangdong Province, China
- Quan Zhou: Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University, 183 Zhongshan Avenue West, Tianhe District, Guangzhou, 510630, Guangdong Province, China
- Qianjin Feng: Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Department of Biomedical Engineering, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China
- Lingyan Zhang: Southern Medical University, 1838 Shatai Road, Baiyun District, Guangzhou, 510515, Guangdong Province, China; Department of Medical Imaging, Longgang Central Hospital of Shenzhen, 6082 Longgang Avenue, Longgang District, Shenzhen, 518116, Guangdong Province, China
|
48
|
Hofer D, Schmidt-Erfurth U, Orlando JI, Goldbach F, Gerendas BS, Seeböck P. Improving foveal avascular zone segmentation in fluorescein angiograms by leveraging manual vessel labels from public color fundus pictures. BIOMEDICAL OPTICS EXPRESS 2022; 13:2566-2580. [PMID: 35774310 PMCID: PMC9203117 DOI: 10.1364/boe.452873] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 03/11/2022] [Accepted: 03/24/2022] [Indexed: 06/15/2023]
Abstract
In clinical routine, ophthalmologists frequently analyze the shape and size of the foveal avascular zone (FAZ) to detect and monitor retinal diseases. In order to extract those parameters, the contours of the FAZ need to be segmented, which is normally achieved by analyzing the retinal vasculature (RV) around the macula in fluorescein angiograms (FA). Computer-aided segmentation methods based on deep learning (DL) can automate this task. However, current approaches for segmenting the FAZ are often tailored to a specific dataset or require manual initialization. Furthermore, they do not take into account the variability and challenges of clinical FA, which are often of low quality and difficult to analyze. In this paper, we propose a DL-based framework to automatically segment the FAZ in challenging FA scans from clinical routine. Our approach mimics the workflow of retinal experts by using additional RV labels as guidance during training; hence, our model is able to produce RV segmentations simultaneously. We minimize the annotation work with a multi-modal approach that leverages already available public datasets of color fundus pictures (CFPs) and their manual RV labels. Our experimental evaluation on two datasets with FA from 1) clinical routine and 2) large multicenter clinical trials shows that adding weak RV labels as guidance during training significantly improves FAZ segmentation compared with using only manual FAZ annotations.
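Using auxiliary weak RV labels as guidance usually amounts to adding a down-weighted second loss term for the extra segmentation head. A minimal NumPy sketch with binary cross-entropy; the 0.5 weight is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy between a probability map and a mask."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def multitask_loss(faz_pred, faz_gt, rv_pred, rv_gt, rv_weight=0.5):
    """Joint objective: the main FAZ term plus a down-weighted vessel term,
    so the weak RV labels guide training without dominating it."""
    return bce(faz_pred, faz_gt) + rv_weight * bce(rv_pred, rv_gt)
```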
Affiliation(s)
- Dominik Hofer: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Ursula Schmidt-Erfurth: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- José Ignacio Orlando: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria; Yatiris Group, PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Felix Goldbach: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Bianca S. Gerendas: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Philipp Seeböck: Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
|
49
|
State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576 PMCID: PMC9007957 DOI: 10.1038/s41598-022-09675-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Accepted: 03/10/2022] [Indexed: 01/03/2023] Open
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated convolutional neural network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need for such complexity. We first compile and review the performance of 20 different techniques on several popular databases, and we demonstrate that a minimalistic version of a standard U-Net with several orders of magnitude fewer parameters, carefully trained and rigorously evaluated, closely approximates the performance of the current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude fewer learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that retinal vessel segmentation is far from solved when considering test images that differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on artery/vein and vessel segmentation from OCTA imaging, where we again achieve results well aligned with the state of the art, at a fraction of the model complexity of recent literature. Code to reproduce the results in this paper is released.
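The claimed parameter savings are easy to check with back-of-the-envelope arithmetic. The sketch below counts the learnable weights of stacked double-convolution encoder blocks for standard U-Net channel widths versus a slimmed variant; the widths are illustrative, not the paper's exact configuration.

```python
def conv2d_params(in_ch, out_ch, k=3):
    """Learnable weights of a k x k convolution:
    out_ch * in_ch * k * k weights plus out_ch biases."""
    return out_ch * (in_ch * k * k + 1)

def unet_encoder_params(widths, in_ch=3):
    """Rough parameter count of a stack of double-conv encoder blocks
    with the given channel widths (input assumed 3-channel RGB)."""
    total, prev = 0, in_ch
    for w in widths:
        total += conv2d_params(prev, w) + conv2d_params(w, w)
        prev = w
    return total

wide = unet_encoder_params([64, 128, 256, 512])   # standard U-Net widths
slim = unet_encoder_params([8, 16, 32, 64])       # a minimalistic variant
```

Halving every width cuts the count roughly fourfold, so an 8x slimmer network uses on the order of 64x fewer weights, which is the kind of gap the abstract describes.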
|
50
|
Xu J, Shen J, Wan C, Jiang Q, Yan Z, Yang W. A Few-Shot Learning-Based Retinal Vessel Segmentation Method for Assisting in the Central Serous Chorioretinopathy Laser Surgery. Front Med (Lausanne) 2022; 9:821565. [PMID: 35308538 PMCID: PMC8927682 DOI: 10.3389/fmed.2022.821565] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 01/28/2022] [Indexed: 12/05/2022] Open
Abstract
Background The location of retinal vessels is an important prerequisite for Central Serous Chorioretinopathy (CSC) laser surgery: it not only assists the ophthalmologist in marking the location of the leakage point (LP) on the fundus color image, but also avoids damage to vessel tissue from the laser spot and the low surgical efficiency caused by the absorption of laser energy by retinal vessels. To acquire excellent intra- and cross-domain adaptability, existing deep learning (DL)-based vessel segmentation schemes must be driven by big data, which makes dense annotation tedious and costly. Methods This paper explores a new vessel segmentation method that needs only a few samples and annotations to alleviate the above problems. First, a key solution is presented to transform the vessel segmentation scene into a few-shot learning task, which lays the foundation for vessel segmentation with few samples and annotations. Then, we improve an existing few-shot learning framework as our baseline model to adapt it to the vessel segmentation scenario. Next, the baseline model is upgraded in three aspects: (1) a multi-scale class prototype extraction technique is designed to obtain more sufficient vessel features and better utilize information from the support images; (2) the multi-scale vessel features of the query images, inferred from the support-image class prototypes, are gradually fused to provide more effective guidance for vessel extraction; and (3) a multi-scale attention module is proposed to bring global information into the upgraded model and assist vessel localization. Concurrently, an integrated framework is conceived to alleviate the low performance of a single model in the cross-domain vessel segmentation scene, boosting the domain adaptability of both the baseline and the upgraded models.
Results Extensive experiments showed that these upgrades further improved vessel segmentation performance significantly. Both the baseline and the upgraded models achieved results competitive with the compared methods on three public retinal image datasets (CHASE_DB, DRIVE, and STARE). In a practical application to private CSC datasets, the integrated scheme partially enhanced the domain adaptability of the two proposed models.
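Prototype-based few-shot segmentation of the kind described above typically rests on two primitives: masked average pooling of support features into a class prototype, and per-pixel cosine similarity between query features and that prototype. The NumPy sketch below shows a single-scale version of these two steps; the paper's multi-scale, fused design is more elaborate.

```python
import numpy as np

def class_prototype(support_feat, support_mask):
    """Masked average pooling: the mean of the support-image feature
    vectors that fall inside the labeled (e.g. vessel) mask gives a
    class prototype of shape (C,)."""
    c = support_feat.shape[0]
    m = support_mask.astype(float)
    return (support_feat * m).reshape(c, -1).sum(axis=1) / max(m.sum(), 1.0)

def segment_by_prototype(query_feat, proto, thresh=0.5):
    """Label each query pixel by cosine similarity to the class prototype:
    the basic inference step of prototype-based few-shot segmentation."""
    c, h, w = query_feat.shape
    q = query_feat.reshape(c, -1)
    sim = (proto @ q) / (np.linalg.norm(proto) * np.linalg.norm(q, axis=0) + 1e-9)
    return (sim >= thresh).reshape(h, w)
```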
Affiliation(s)
- Jianguo Xu: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianxin Shen: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Cheng Wan: College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qin Jiang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Zhipeng Yan: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
|