51. Pan Y, Liu J, Cai Y, Yang X, Zhang Z, Long H, Zhao K, Yu X, Zeng C, Duan J, Xiao P, Li J, Cai F, Yang X, Tan Z. Fundus image classification using Inception V3 and ResNet-50 for the early diagnostics of fundus diseases. Front Physiol 2023;14:1126780. [PMID: 36875027; PMCID: PMC9975334; DOI: 10.3389/fphys.2023.1126780]
Abstract
Purpose: We aim to present effective computer-aided diagnostics in the field of ophthalmology and to improve eye health. This study creates an automated deep-learning-based system for categorizing fundus images into three classes, normal, macular degeneration, and tessellated fundus, for the timely recognition and treatment of diabetic retinopathy and other fundus diseases. Methods: A total of 1,032 fundus images were collected from 516 patients with a fundus camera at the Health Management Center, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518055, Guangdong, China. Inception V3 and ResNet-50 deep learning models were then used to classify the fundus images into the three classes. Results: The experimental results show that recognition is best when Adam is used as the optimizer, the number of iterations is 150, and the learning rate is 0.00. With this approach, we achieved the highest accuracies of 93.81% and 91.76% using ResNet-50 and Inception V3, respectively, after fine-tuning and adjusting the hyperparameters for our classification problem. Conclusion: Our research provides a reference for the clinical diagnosis and screening of diabetic retinopathy and other eye diseases. The suggested computer-aided diagnostic framework helps prevent incorrect diagnoses caused by low image quality, limited individual experience, and other factors. In future implementations, ophthalmologists can apply more advanced learning algorithms to further improve diagnostic accuracy.
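The fine-tuning recipe this abstract describes (frozen pretrained backbone, new classification head, Adam optimizer, 150 iterations) can be sketched in miniature as below. This is an illustrative toy, not the authors' code: the "backbone" is a fixed random projection standing in for Inception V3 / ResNet-50, and all data, shapes, and names are fabricated for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, W_frozen):
    """Frozen feature extractor standing in for a pretrained CNN."""
    return np.maximum(x @ W_frozen, 0.0)  # ReLU features

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "fundus images" flattened to 64-dim vectors, 3 classes
# (normal / macular degeneration / tessellated fundus).
X = rng.normal(size=(90, 64))
y = np.repeat(np.arange(3), 30)
W_frozen = rng.normal(size=(64, 16))  # pretrained weights, never updated
F = backbone(X, W_frozen)

# Trainable 3-class head, optimized with Adam (the optimizer the abstract reports).
W = np.zeros((16, 3))
m, v = np.zeros_like(W), np.zeros_like(W)
lr, b1, b2, eps = 1e-2, 0.9, 0.999, 1e-8
Y = np.eye(3)[y]
for t in range(1, 151):  # 150 iterations, as in the abstract
    P = softmax(F @ W)
    g = F.T @ (P - Y) / len(X)  # cross-entropy gradient w.r.t. head weights
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    W -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)

train_acc = (softmax(F @ W).argmax(axis=1) == y).mean()
```

Only the head weights `W` receive gradients; the backbone stays frozen, which is the essence of the transfer-learning setup before full fine-tuning.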
Affiliation(s)
- Yuhang Pan
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Junru Liu
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Yuting Cai
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Xuemei Yang
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Zhucheng Zhang
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Hong Long
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Ketong Zhao
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Xia Yu
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Cui Zeng
  - General Practice Alliance, Shenzhen, Guangdong, China; University Town East Community Health Service Center, Shenzhen, Guangdong, China
- Jueni Duan
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Ping Xiao
  - Department of Otorhinolaryngology Head and Neck Surgery, Shenzhen Children's Hospital, Shenzhen, Guangdong, China
- Jingbo Li
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China
- Feiyue Cai
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China; General Practice Alliance, Shenzhen, Guangdong, China
- Xiaoyun Yang
  - Ophthalmology Department, Shenzhen OCT Hospital, Shenzhen, Guangdong, China
- Zhen Tan
  - Health Management Center, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen University, Shenzhen, Guangdong, China; General Practice Alliance, Shenzhen, Guangdong, China
52. Liu Y, Shen J, Yang L, Bian G, Yu H. ResDO-UNet: A deep residual network for accurate retinal vessel segmentation from fundus images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104087]
53. Liu Y, Shen J, Yang L, Yu H, Bian G. Wave-Net: A lightweight deep network for retinal vessel segmentation from fundus images. Comput Biol Med 2023;152:106341. [PMID: 36463794; DOI: 10.1016/j.compbiomed.2022.106341]
Abstract
Accurate segmentation of retinal vessels from fundus images is fundamental for the diagnosis of numerous eye diseases, and an automated vessel segmentation method can effectively help clinicians make accurate diagnoses and provide appropriate treatment schemes. Both thick and thin vessels play a key role in disease assessment. The precise segmentation of thin vessels remains a great challenge because of complicating factors such as the presence of various lesions, image noise, complex backgrounds, and poor contrast in fundus images. Recently, owing to its capacity for contextual feature representation learning, deep learning has shown remarkable segmentation performance on retinal vessels. However, it still falls short of high-precision retinal vessel extraction because of factors such as the semantic information loss caused by pooling operations and limited receptive fields. To address these problems, this paper proposes a new lightweight segmentation network for precise retinal vessel segmentation, called Wave-Net on account of its overall shape. To reduce the semantic information loss that affects thin vessels and to acquire more context about micro structures and details, a detail enhancement and denoising (DED) block is proposed, replacing the simple skip connections of the original U-Net and improving segmentation precision on thin vessels; it also alleviates the semantic gap problem. Further, to counter the limited receptive field and support multi-scale vessel detection, a multi-scale feature fusion (MFF) block is proposed to fuse cross-scale contexts, achieving higher segmentation accuracy and effective processing of local feature maps. Experiments indicate that the proposed Wave-Net achieves excellent performance on retinal vessel segmentation while maintaining a lightweight design compared with other advanced segmentation methods, and it shows better segmentation of thin vessels.
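The multi-scale fusion idea behind blocks like Wave-Net's MFF can be illustrated with dilated (atrous) convolutions: the same small kernel, applied with different dilation rates, sees different receptive fields, and the responses are fused. The 1D numpy sketch below is a generic illustration under that assumption, not the paper's implementation.

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1D convolution with holes between kernel taps."""
    k = len(kernel)
    span = (k - 1) * dilation
    padded = np.pad(signal, (span // 2, span - span // 2))
    out = np.zeros_like(signal, dtype=float)
    for i in range(len(signal)):
        for j in range(k):
            out[i] += kernel[j] * padded[i + j * dilation]
    return out

signal = np.zeros(32)
signal[16] = 1.0  # a thin "vessel" impulse
kernel = np.array([1.0, 1.0, 1.0])

# Three scales: the same 3-tap kernel with receptive fields of 3, 5, and 9 samples.
scales = [dilated_conv1d(signal, kernel, d) for d in (1, 2, 4)]
fused = np.mean(scales, axis=0)  # simplest possible fusion: averaging

# Spread of each response around the impulse = effective receptive field.
receptive = [int(np.flatnonzero(s).ptp()) + 1 for s in scales]  # [3, 5, 9]
```

A real MFF-style block would fuse learned feature maps rather than raw responses, but the receptive-field growth without extra parameters is the same mechanism.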
Affiliation(s)
- Yanhong Liu
  - School of Electrical and Information Engineering, Zhengzhou University, 450001, China; Robot Perception and Control Engineering Laboratory, Henan Province, 450001, China
- Ji Shen
  - School of Electrical and Information Engineering, Zhengzhou University, 450001, China; Robot Perception and Control Engineering Laboratory, Henan Province, 450001, China
- Lei Yang
  - School of Electrical and Information Engineering, Zhengzhou University, 450001, China; Robot Perception and Control Engineering Laboratory, Henan Province, 450001, China
- Hongnian Yu
  - School of Electrical and Information Engineering, Zhengzhou University, 450001, China; The Built Environment, Edinburgh Napier University, Edinburgh EH10 5DT, UK
- Guibin Bian
  - School of Electrical and Information Engineering, Zhengzhou University, 450001, China; The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
54. Multi-layer segmentation of retina OCT images via advanced U-net architecture. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2022.10.001]
55. MF2ResU-Net: a multi-feature fusion deep learning architecture for retinal blood vessel segmentation. Digital Chinese Medicine 2022. [DOI: 10.1016/j.dcmed.2022.12.008]
56. Kumar KS, Singh NP. An efficient registration-based approach for retinal blood vessel segmentation using generalized Pareto and fatigue pdf. Med Eng Phys 2022;110:103936. [PMID: 36529622; DOI: 10.1016/j.medengphy.2022.103936]
Abstract
Segmentation of retinal blood vessels (RBV) in retinal images, together with registration of the segmented vessel structure, is used by ophthalmologists to identify changes in vessel structure when diagnosing illnesses such as glaucoma, diabetes, and hypertension. The retinal blood vessels supply blood to the inner retinal neurons; they lie mainly in the inner retina but may extend partly into the ganglion cell layer, and the network failures that follow such changes have not been identified by past methods. Accurate classification of RBV and registration of segmented blood vessels are challenging tasks against the low-intensity background of retinal images. We therefore propose a novel approach that segments RBV with a matched filter based on the generalized Pareto probability density function (pdf) and registers the segmented vessels with the feature-based Binary Robust Invariant Scalable Keypoints (BRISK) method. BRISK provides a predefined sampling pattern, in contrast to the pdf, and is used for interest-point detection and matching to track changes in vessel structure. The proposed approach comprises three stages: pre-processing; matched filtering with the generalized Pareto pdf for the source image and, as a novel contribution, the fatigue pdf for the target image; and BRISK-based registration of the segmented source and target retinal images. The system's performance is evaluated experimentally via average accuracy, normalized cross-correlation (NCC), and computation time on the segmented source and target images; the NCC is the main element providing statistical information about the retinal image segmentation. The generalized Pareto pdf approach attains an average segmentation accuracy of 95.21%, an NCC of 93% between image pairs, and an average registration accuracy of 98.51% for the segmented source and target images. The average computation time is around 1.4 s, determined by the boundary conditions of the pdf.
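Two of the building blocks named in this entry are easy to sketch: a matched-filter kernel shaped by the generalized Pareto pdf, and the NCC score used to compare source and target images. The parameters and toy "patch" below are made up for illustration; they are not the paper's values.

```python
import numpy as np

def genpareto_pdf(x, shape_xi, scale):
    """Generalized Pareto density for x >= 0 (location 0)."""
    z = x / scale
    if shape_xi == 0.0:
        return np.exp(-z) / scale
    return (1.0 + shape_xi * z) ** (-1.0 / shape_xi - 1.0) / scale

def matched_filter_kernel(length=9, shape_xi=0.2, scale=2.0):
    """Zero-mean 1D kernel sampled from the pdf, as matched filters require."""
    x = np.arange(length, dtype=float)
    k = genpareto_pdf(x, shape_xi, scale)
    return k - k.mean()  # zero mean suppresses uniform background

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

kernel = matched_filter_kernel()
img = np.outer(np.hanning(16), np.hanning(16))   # toy smooth "retinal" patch
score_self = ncc(img, img)                       # identical images -> 1.0
score_shift = ncc(img, np.roll(img, 3, axis=0))  # misaligned copy -> lower score
```

In the paper's pipeline the 2D analogue of `kernel` would be rotated and correlated with the fundus image to enhance vessel cross-sections, and NCC-style scores quantify how well the segmented source and target images agree.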
Affiliation(s)
- K Susheel Kumar
  - GITAM University, Bengaluru, 561203, India; National Institute of Technology Hamirpur, Himachal Pradesh 177005, India
57. Iqbal S, Khan TM, Naveed K, Naqvi SS, Nawaz SJ. Recent trends and advances in fundus image analysis: A review. Comput Biol Med 2022;151:106277. [PMID: 36370579; DOI: 10.1016/j.compbiomed.2022.106277]
Abstract
Automated retinal image analysis holds prime significance in the accurate diagnosis of various critical eye diseases, including diabetic retinopathy (DR), age-related macular degeneration (AMD), atherosclerosis, and glaucoma. Manual diagnosis of retinal diseases by ophthalmologists takes time, effort, and financial resources, and is prone to error, in comparison to computer-aided diagnosis systems. In this context, robust classification and segmentation of retinal images are primary operations that aid clinicians in the early screening of patients to ensure the prevention and/or treatment of these diseases. This paper conducts an extensive review of the state-of-the-art methods for the detection and segmentation of retinal image features. Existing notable techniques for the detection of retinal features are categorized into essential groups and compared in depth. Additionally, a summary of quantifiable performance measures for various important stages of retinal image analysis, such as image acquisition and preprocessing, is provided. Finally, the datasets widely used in the literature for analyzing retinal images are described and their significance is emphasized.
Affiliation(s)
- Shahzaib Iqbal
  - Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Tariq M Khan
  - School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Khuram Naveed
  - Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan; Department of Electrical and Computer Engineering, Aarhus University, Aarhus, Denmark
- Syed S Naqvi
  - Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Syed Junaid Nawaz
  - Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad, Pakistan
58. Tang W, Deng H, Yin S. CPMF-Net: Multi-Feature Network Based on Collaborative Patches for Retinal Vessel Segmentation. Sensors (Basel) 2022;22:9210. [PMID: 36501911; PMCID: PMC9736046; DOI: 10.3390/s22239210]
Abstract
As an important basis of clinical diagnosis, the morphology of retinal vessels is very useful for the early diagnosis of some eye diseases. In recent years, with the rapid development of deep learning technology, automatic segmentation methods based on it have made considerable progress in the field of retinal blood vessel segmentation. However, due to the complexity of vessel structure and the poor quality of some images, retinal vessel segmentation, especially the segmentation of capillaries, is still a challenging task. In this work, we propose a new retinal blood vessel segmentation method, called multi-feature segmentation, based on collaborative patches. First, we design a new collaborative patch training method that effectively compensates for the pixel information lost in patch extraction through information transmission between collaborative patches. The collaborative patch training strategy simultaneously offers low memory occupancy, a simple structure, and high accuracy. Then, we design a multi-feature network to gather a variety of information features. The hierarchical network structure, together with the integration of the adaptive coordinate attention module and the gated self-attention module, enables these rich information features to be used for segmentation. Finally, we evaluate the proposed method on two public datasets, DRIVE and STARE, and compare our results with those of nine other advanced methods. The results show that our method outperforms the existing methods.
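The baseline mechanics that patch-based methods like this build on can be sketched as follows: overlapping patches are cut from an image, processed independently, and stitched back with averaging in the overlaps so patch borders leave no seams. CPMF-Net's collaborative information exchange between patches is not reproduced here; this only shows the underlying extract-and-reassemble loop, with an identity function standing in for the segmentation network.

```python
import numpy as np

def extract_patches(img, size, stride):
    """Cut overlapping square patches and remember their top-left corners."""
    patches, coords = [], []
    for r in range(0, img.shape[0] - size + 1, stride):
        for c in range(0, img.shape[1] - size + 1, stride):
            patches.append(img[r:r + size, c:c + size])
            coords.append((r, c))
    return patches, coords

def reassemble(patches, coords, shape, size):
    """Paste patches back, averaging wherever they overlap."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for p, (r, c) in zip(patches, coords):
        acc[r:r + size, c:c + size] += p
        cnt[r:r + size, c:c + size] += 1
    return acc / np.maximum(cnt, 1)

img = np.arange(64.0).reshape(8, 8)
patches, coords = extract_patches(img, size=4, stride=2)  # stride < size => overlap
# In a real pipeline each patch would pass through the segmentation network here.
restored = reassemble(patches, coords, img.shape, size=4)
```

Because the "network" is the identity, `restored` reproduces the input exactly, which is a convenient sanity check that the stitching does not distort covered pixels.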
59. RADCU-Net: residual attention and dual-supervision cascaded U-Net for retinal blood vessel segmentation. Int J Mach Learn Cybern 2022. [DOI: 10.1007/s13042-022-01715-3]
60. Liu R, Gao S, Zhang H, Wang S, Zhou L, Liu J. MTNet: A combined diagnosis algorithm of vessel segmentation and diabetic retinopathy for retinal images. PLoS One 2022;17:e0278126. [PMID: 36417405; PMCID: PMC9683560; DOI: 10.1371/journal.pone.0278126]
Abstract
Medical studies have shown that the condition of human retinal vessels may reveal physiological links between ophthalmic diseases such as age-related macular degeneration, glaucoma, atherosclerosis, cataracts, and diabetic retinopathy and systemic diseases, and their abnormal changes often serve as a diagnostic basis for the severity of the condition. In this paper, we design and implement a deep-learning-based algorithm for automatic segmentation of retinal vessels (CSP_UNet). It mainly adopts a U-shaped encoder-decoder structure and utilizes a cross-stage local connectivity mechanism, an attention mechanism, and multi-scale fusion, which yield better segmentation results with limited dataset capacity. The experimental results show that, compared with several existing classical algorithms, the proposed algorithm achieves the highest vessel intersection-over-union on the dataset composed of four retinal fundus image datasets, reaching 0.6674. Then, based on CSP_UNet and introducing hard parameter sharing from multi-task learning, we propose a combined vessel segmentation and diabetic retinopathy diagnosis algorithm for retinal images (MTNet). The experiments show that the diagnostic accuracy of the MTNet algorithm is higher than that of the single-task version, with 0.4% higher vessel segmentation IoU and 5.2% higher diagnostic accuracy for diabetic retinopathy classification.
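"Hard parameter sharing" means one shared encoder feeds several task heads, so its weights receive gradients from every task's loss. The toy numpy sketch below shows only that wiring; the shapes, activations, and loss are fabricated illustrations, not the MTNet implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

W_shared = rng.normal(size=(10, 8))  # encoder weights shared by both tasks
W_seg = rng.normal(size=(8, 10))     # head 1: segmentation-like output
W_cls = rng.normal(size=(8, 2))      # head 2: DR classification logits

def forward(x):
    """One shared representation, two task-specific heads."""
    h = np.tanh(x @ W_shared)
    return h, h @ W_seg, h @ W_cls

x = rng.normal(size=(4, 10))  # a batch of 4 toy inputs
h, seg_out, cls_out = forward(x)

# A combined loss couples the tasks through the shared encoder: gradients from
# both terms would flow into W_shared during training.
loss = np.mean(seg_out ** 2) + np.mean(cls_out ** 2)
```

The design trade-off is that the shared encoder must learn features useful for both vessel segmentation and DR grading, which is exactly what the abstract reports as the source of the joint model's accuracy gain.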
Affiliation(s)
- Ruochen Liu
  - Key Laboratory of Earth Exploration and Information Techniques (Chengdu University of Technology), Ministry of Education, Chengdu, China
  - The College of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu, China
- Song Gao
  - Key Laboratory of Earth Exploration and Information Techniques (Chengdu University of Technology), Ministry of Education, Chengdu, China
  - The College of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu, China
- Hengsheng Zhang
  - Key Laboratory of Earth Exploration and Information Techniques (Chengdu University of Technology), Ministry of Education, Chengdu, China
  - The College of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu, China
- Simin Wang
  - Key Laboratory of Earth Exploration and Information Techniques (Chengdu University of Technology), Ministry of Education, Chengdu, China
  - The College of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu, China
- Lun Zhou
  - Key Laboratory of Earth Exploration and Information Techniques (Chengdu University of Technology), Ministry of Education, Chengdu, China
  - The College of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu, China
- Jiaming Liu
  - Key Laboratory of Earth Exploration and Information Techniques (Chengdu University of Technology), Ministry of Education, Chengdu, China
  - The College of Mechanical and Electrical Engineering, Chengdu University of Technology, Chengdu, China
61. Elaouaber Z, Feroui A, Lazouni M, Messadi M. Blood vessel segmentation using deep learning architectures for aid diagnosis of diabetic retinopathy. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2145999]
Affiliation(s)
- Z.A. Elaouaber
  - Biomedical Engineering, Universite Abou Bekr Belkaid Tlemcen, Faculte de Technologie, Tlemcen, Algeria
- A. Feroui
  - Biomedical Engineering, Universite Abou Bekr Belkaid Tlemcen, Faculte de Technologie, Tlemcen, Algeria
- M.E.A. Lazouni
  - Biomedical Engineering, Universite Abou Bekr Belkaid Tlemcen, Faculte de Technologie, Tlemcen, Algeria
- M. Messadi
  - Biomedical Engineering, Universite Abou Bekr Belkaid Tlemcen, Faculte de Technologie, Tlemcen, Algeria
62. Hassan D, Gill HM, Happe M, Bhatwadekar AD, Hajrasouliha AR, Janga SC. Combining transfer learning with retinal lesion features for accurate detection of diabetic retinopathy. Front Med (Lausanne) 2022;9:1050436. [PMID: 36425113; PMCID: PMC9681494; DOI: 10.3389/fmed.2022.1050436]
Abstract
Diabetic retinopathy (DR) is a late microvascular complication of diabetes mellitus (DM) that can lead to permanent blindness without early detection. Although adequate management of DM via regular eye examination can preserve vision in 98% of DR cases, DR screening and diagnosis based on clinical lesion features devised by expert clinicians are costly, time-consuming, and not sufficiently accurate. This raises the need for artificial intelligence (AI) systems that can accurately detect DR automatically and thus prevent DR before it affects vision. Such systems can assist clinical experts in certain cases and aid ophthalmologists in rapid diagnosis. To address this need, several approaches in the literature use machine learning (ML) and deep learning (DL) techniques to develop such systems. However, these approaches ignore the highly valuable clinical lesion features that could contribute significantly to accurate DR detection. Therefore, in this study we introduce a framework called DR-detector that employs an Extreme Gradient Boosting (XGBoost) ML model trained on a combination of features extracted by pretrained convolutional neural networks, commonly known as transfer learning (TL) models, and clinical retinal lesion features for accurate detection of DR. The retinal lesion features are extracted via image segmentation with the UNET DL model and capture exudates (EXs), microaneurysms (MAs), and hemorrhages (HEMs), which are relevant lesions for DR detection. The feature combination approach implemented in DR-detector has been applied to two common TL models, VGG-16 and ResNet-50. We trained the DR-detector model on a training dataset comprising 1,840 color fundus images collected from the e-ophtha, retinal lesions, and APTOS 2019 Kaggle datasets, of which 920 images are healthy. To validate the DR-detector model, we tested it on an external dataset consisting of 81 healthy images collected from the High-Resolution Fundus (HRF) and MESSIDOR-2 datasets and 81 images with DR signs collected from the Indian Diabetic Retinopathy Image Dataset (IDRID), annotated for DR by experts. The experimental results show that the DR-detector model achieves a testing accuracy of 100% in detecting DR after training with the combination of ResNet-50 and lesion features, and 99.38% after training with the combination of VGG-16 and lesion features. More importantly, the results also show a higher contribution of specific lesion features to the performance of the DR-detector model. For instance, using only the hemorrhages feature to train the model, it achieves an accuracy of 99.38% in detecting DR, which is higher than the accuracy when training with the combination of all lesion features (89%) and equal to the accuracy when training with the combination of all lesion and VGG-16 features together. This highlights the possibility of using only clinically interpretable features, such as lesions, to build the next generation of robust AI systems for DR detection with great clinical interpretability. The code of the DR-detector framework is available on GitHub at https://github.com/Janga-Lab/DR-detector and can be readily employed for detecting DR from retinal image datasets.
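The feature-combination step this entry describes amounts to concatenating each image's deep-network features with its lesion measurements into one tabular row before training a classifier such as XGBoost. The sketch below shows only that preprocessing step with fabricated numbers; feature dimensions, the standardization choice, and the data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n_images = 5
# Deep features, e.g. the pooled output of a pretrained CNN (size is illustrative).
deep_features = rng.normal(size=(n_images, 2048))
# Hand-measured lesion features per image: one column each for EXs, MAs, HEMs.
lesion_features = np.abs(rng.normal(size=(n_images, 3)))

def standardize(f):
    """Per-column standardization so neither feature block dominates by scale."""
    return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-8)

# One row per image: [deep features | lesion features], ready for a tabular model.
combined = np.hstack([standardize(deep_features), standardize(lesion_features)])
```

The resulting `combined` matrix is what a gradient-boosting model would consume; the paper's ablation (hemorrhages alone reaching 99.38%) corresponds to training on just one of the lesion columns instead of the full concatenation.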
Affiliation(s)
- Doaa Hassan
  - Department of BioHealth Informatics, School of Informatics and Computing, Indiana University Purdue University, Indianapolis, IN, United States
  - Computers and Systems Department, National Telecommunication Institute, Cairo, Egypt
- Hunter Mathias Gill
  - Department of BioHealth Informatics, School of Informatics and Computing, Indiana University Purdue University, Indianapolis, IN, United States
- Michael Happe
  - Department of Ophthalmology, Glick Eye Institute, Indiana University School of Medicine, Indianapolis, IN, United States
- Ashay D. Bhatwadekar
  - Department of Ophthalmology, Glick Eye Institute, Indiana University School of Medicine, Indianapolis, IN, United States
- Amir R. Hajrasouliha
  - Department of Ophthalmology, Glick Eye Institute, Indiana University School of Medicine, Indianapolis, IN, United States
- Sarath Chandra Janga (corresponding author)
  - Department of BioHealth Informatics, School of Informatics and Computing, Indiana University Purdue University, Indianapolis, IN, United States
  - Department of Medical and Molecular Genetics, Indiana University School of Medicine, Medical Research and Library Building, Indianapolis, IN, United States
  - Centre for Computational Biology and Bioinformatics, Indiana University School of Medicine, 5021 Health Information and Translational Sciences (HITS), Indianapolis, IN, United States
63. Liu L, Liu Y, Zhou J, Guo C, Duan H. A novel MCF-Net: Multi-level context fusion network for 2D medical image segmentation. Comput Methods Programs Biomed 2022;226:107160. [PMID: 36191351; DOI: 10.1016/j.cmpb.2022.107160]
Abstract
Medical image segmentation is a crucial step in clinical applications for the diagnosis and analysis of some diseases. U-Net-based convolutional neural networks have achieved impressive performance in medical image segmentation tasks. However, their multi-level contextual information integration and feature extraction abilities are often insufficient. In this paper, we present a novel multi-level context fusion network (MCF-Net) that improves the performance of U-Net on various segmentation tasks through three modules designed to fuse multi-scale contextual information: a hybrid attention-based residual atrous convolution (HARA) module, a multi-scale feature memory (MSFM) module, and a multi-receptive field fusion (MRFF) module. The HARA module effectively extracts multi-receptive-field features by combining atrous spatial pyramid pooling with an attention mechanism. The MSFM and MRFF modules fuse features of different levels and effectively extract contextual information. The proposed MCF-Net was evaluated on the ISIC 2018, DRIVE, BUSI, and Kvasir-SEG datasets, which contain challenging images of many sizes and widely varying anatomy. The experimental results show that MCF-Net is very competitive with other U-Net models and offers tremendous potential as a general-purpose deep learning model for 2D medical image segmentation.
Affiliation(s)
- Lizhu Liu
  - Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China; National Engineering Laboratory of Robot Visual Perception and Control Technology, School of Robotics, Hunan University, Changsha 410082, China
- Yexin Liu
  - Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
- Jian Zhou
  - Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
- Cheng Guo
  - Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
- Huigao Duan
  - Engineering Research Center of Automotive Electrics and Control Technology, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, China
64. Ni J, Liu J, Li X, Chen Z. SFA-Net: Scale and Feature Aggregate Network for Retinal Vessel Segmentation. J Healthc Eng 2022;2022:4695136. [PMID: 36312595; PMCID: PMC9616669; DOI: 10.1155/2022/4695136]
Abstract
U-Net-based networks have achieved competitive performance in retinal vessel segmentation. Previous work has focused on using multilevel high-level features to improve segmentation accuracy but has ignored the importance of shallow-level features. In addition, multiple upsampling and convolution operations may destroy the semantic feature information contained in the decoder layers. To address these problems, we propose a scale and feature aggregate network (SFA-Net) that makes full use of multiscale high-level feature information and shallow features. A residual atrous spatial feature aggregate (RASF) block is embedded at the end of the encoder to learn multiscale information. Furthermore, an attentional feature fusion (AFF) module is proposed to enhance the effective fusion between shallow and high-level features. In addition, we design a multi-path feature fusion (MPF) block that fuses high-level features of different decoder layers, aiming to learn the relationships between the high-level features of different paths and alleviate information loss. We apply the network to three benchmark datasets (DRIVE, STARE, and CHASE_DB1) and compare it with other current state-of-the-art methods. The experimental results demonstrate that the proposed SFA-Net performs effectively, indicating that the network is suitable for processing complex medical images.
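A common way to fuse shallow and deep features, in the spirit of attention modules like the one this entry proposes, is a gated skip connection: a sigmoid gate computed from the deep (semantic) feature decides, per position, how much of the shallow (fine-detail) feature passes through. The numpy sketch below is a generic illustration of that pattern, not SFA-Net's actual AFF module.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
shallow = rng.normal(size=(6, 6))  # early-layer feature map (fine detail)
deep = rng.normal(size=(6, 6))     # late-layer feature map (semantics)

gate = sigmoid(deep)               # per-position attention weights in (0, 1)
fused = gate * shallow + deep      # gated skip connection
```

Where the deep feature is confident (large gate values), fine shallow detail is let through; elsewhere it is suppressed, which is the usual motivation for attention-weighted fusion over a plain skip-connection sum.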
Affiliation(s)
- Jiajia Ni
- College of Internet of Things Engineering, Hohai University, Changzhou, China
- Jinhui Liu
- College of Internet of Things Engineering, Hohai University, Changzhou, China
- Xuefei Li
- College of Internet of Things Engineering, Hohai University, Changzhou, China
- Zhengming Chen
- College of Internet of Things Engineering, Hohai University, Changzhou, China
65.
Rodrigues EO, Rodrigues LO, Machado JHP, Casanova D, Teixeira M, Oliva JT, Bernardes G, Liatsis P. Local-Sensitive Connectivity Filter (LS-CF): A Post-Processing Unsupervised Improvement of the Frangi, Hessian and Vesselness Filters for Multimodal Vessel Segmentation. J Imaging 2022; 8:jimaging8100291. [PMID: 36286385] [PMCID: PMC9604711] [DOI: 10.3390/jimaging8100291]
Abstract
Retinal vessel analysis is a procedure that can be used to assess risks to the eye. This work proposes an unsupervised multimodal approach that improves the response of the Frangi filter, enabling automatic vessel segmentation. We propose a filter that computes pixel-level vessel continuity while introducing a local tolerance heuristic to fill in vessel discontinuities produced by the Frangi response. This proposal, called the local-sensitive connectivity filter (LS-CF), is compared against a naive connectivity filter applied to the baseline thresholded Frangi filter response, against the naive connectivity filter combined with morphological closing, and against current approaches in the literature. The proposal achieves competitive results on a variety of multimodal datasets. It is robust enough to outperform all state-of-the-art approaches in the literature for the OSIRIX angiographic dataset in terms of accuracy, and 4 out of 5 works in the case of the IOSTAR dataset, while also outperforming several works on the DRIVE and STARE datasets and 6 out of 10 on the CHASE-DB dataset, where it also outperforms all state-of-the-art unsupervised methods.
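The naive connectivity baseline described above — thresholding a vesselness response and discarding weakly connected debris — can be sketched in a few lines. This is an illustrative sketch only, not the authors' LS-CF: the `threshold` and `min_size` values are arbitrary examples, and any Frangi-style vesselness map can serve as input.

```python
import numpy as np
from scipy import ndimage

def naive_connectivity_filter(vesselness, threshold=0.5, min_size=10):
    """Threshold a vesselness response (e.g., a Frangi filter output)
    and keep only connected components of at least `min_size` pixels.
    Components are 4-connected (scipy.ndimage.label default in 2D)."""
    binary = vesselness >= threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    # pixel count of each labeled component (labels start at 1)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    keep = np.flatnonzero(sizes >= min_size) + 1
    return np.isin(labels, keep)
```

In the LS-CF, this size-based pruning is replaced by a local tolerance heuristic that instead fills in discontinuities left by the Frangi response.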
Affiliation(s)
- Erick O. Rodrigues
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Correspondence:
- Lucas O. Rodrigues
- Graduate Program of Sciences Applied to Health Products, Universidade Federal Fluminense (UFF), Niteroi 24241-000, RJ, Brazil
- João H. P. Machado
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Dalcimar Casanova
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Marcelo Teixeira
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Jeferson T. Oliva
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Giovani Bernardes
- Institute of Technological Sciences (ICT), Universidade Federal de Itajuba (UNIFEI), Itabira 35903-087, MG, Brazil
- Panos Liatsis
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
66.
Zhang S, Yang G, Qian J, Zhu X, Li J, Li P, He Y, Xu Y, Shao P, Wang Z. A novel 3D deep learning model to automatically demonstrate renal artery segmentation and its validation in nephron-sparing surgery. Front Oncol 2022; 12:997911. [PMID: 36313655] [PMCID: PMC9614169] [DOI: 10.3389/fonc.2022.997911]
Abstract
Purpose Nephron-sparing surgery (NSS) is a mainstream treatment for localized renal tumors. Segmental renal artery clamping (SRAC) is commonly used in NSS. Automatic and precise segmentation of renal artery trees is required to improve the workflow of SRAC in NSS. In this study, we developed a tridimensional kidney perfusion (TKP) model based on deep learning to automatically perform renal artery segmentation, and verified its precision and feasibility during laparoscopic partial nephrectomy (PN). Methods The TKP model was established based on a convolutional neural network (CNN), and its precision was validated in porcine models. From April 2018 to January 2020, the TKP model was applied in laparoscopic PN in 131 patients with T1a tumors. Demographics, perioperative variables, and data from the TKP models were assessed. Indocyanine green (ICG) with near-infrared fluorescence (NIRF) imaging was applied after clamping, and the Dice coefficient was used to evaluate the precision of the model. Results The precision of the TKP model was validated in porcine models with a mean Dice coefficient of 0.82. Laparoscopic PN was successfully performed in all cases with SRAC under the TKP model's guidance. The mean operation time was 100.8 min; the median estimated blood loss was 110 ml. The ischemic regions recorded in NIRF imaging were highly consistent with the perfusion regions in the TKP models (mean Dice coefficient = 0.81). Multivariate analysis revealed that the number of feeding lobar arteries was strongly correlated with tumor size and contact surface area, and that the number of supplying segmental arteries correlated with tumor size. Conclusions Using the CNN technique, the TKP model is developed to automatically present the renal artery trees and precisely delineate the perfusion regions of different segmental arteries. The guidance of the TKP model is feasible and effective in nephron-sparing surgery.
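The Dice coefficient used here to compare predicted perfusion regions against the NIRF-recorded ischemic regions is a standard overlap score; a minimal version for binary masks (illustrative, not the study's implementation):

```python
import numpy as np

def dice_coefficient(a, b, eps=1e-7):
    """Dice similarity of two binary masks: 2|A∩B| / (|A| + |B|).
    `eps` guards against division by zero when both masks are empty."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```

A value of 1.0 means perfect overlap; the reported 0.81–0.82 indicates strong but imperfect agreement between model and fluorescence ground truth.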
Affiliation(s)
- Shaobo Zhang
- Department of Urology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Guanyu Yang
- Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Jian Qian
- Department of Urology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Xiaomei Zhu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Jie Li
- Department of Urology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Pu Li
- Department of Urology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yuting He
- Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Yi Xu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Pengfei Shao
- Department of Urology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- *Correspondence: Pengfei Shao,
- Zengjun Wang
- Department of Urology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
67.
Zhang C, Wu QQ, Hou Y, Wang Q, Zhang GJ, Zhao WB, Wang X, Wang H, Li WG. Ophthalmologic problems correlates with cognitive impairment in patients with Parkinson's disease. Front Neurosci 2022; 16:928980. [PMID: 36278010] [PMCID: PMC9583907] [DOI: 10.3389/fnins.2022.928980]
Abstract
Objective Visual impairment is a common non-motor symptom (NMS) in patients with Parkinson's disease (PD) and its implications for cognitive impairment remain controversial. We wished to survey the prevalence of visual impairment in Chinese Parkinson's patients based on the Visual Impairment in Parkinson's Disease Questionnaire (VIPD-Q), identify the pathogens that lead to visual impairment, and develop a predictive model for cognitive impairment risk in Parkinson's based on ophthalmic parameters. Methods A total of 205 patients with Parkinson's disease and 200 age-matched controls completed the VIPD-Q and underwent neuro-ophthalmologic examinations, including ocular fundus photography and optical coherence tomography. We conducted nomogram analysis and the predictive model was summarized using the multivariate logistic and LASSO regression and verified via bootstrap validation. Results One or more ophthalmologic symptoms were present in 57% of patients with Parkinson's disease, compared with 14% of the controls (χ2-test; p < 0.001). The visual impairment questionnaire showed good sensitivity and specificity (area under the curve [AUC] = 0.918, p < 0.001) and a strong correlation with MoCA scores (Pearson r = −0.4652, p < 0.001). Comparing visual impairment scores between pre- and post-deep brain stimulation groups showed that DBS improved visual function (U-test, p < 0.001). The thickness of the retinal nerve fiber layer and vessel percentage area predicted cognitive impairment in PD. Interpretation The study findings provide novel mechanistic insights into visual impairment and cognitive decline in Parkinson's disease. The results inform an effective tool for predicting cognitive deterioration in Parkinson's based on ophthalmic parameters.
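The model-building step above (multivariate logistic regression with LASSO regularization) can be sketched with a minimal L1-penalized logistic fit by proximal gradient descent. This is a toy stand-in with assumed hyperparameters (`lam`, `lr`, `iters`), not the study's statistical pipeline, which also involved nomogram construction and bootstrap validation.

```python
import numpy as np

def lasso_logistic(X, y, lam=0.05, lr=0.1, iters=2000):
    """L1-penalized logistic regression via proximal gradient (ISTA).
    X: (n, d) features, y: (n,) binary labels. Returns (weights, bias).
    The L1 penalty drives uninformative coefficients to exactly zero,
    which is how LASSO performs variable selection."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
        grad_w = X.T @ (p - y) / n
        w -= lr * grad_w
        b -= lr * float(np.mean(p - y))
        # proximal (soft-thresholding) step for the L1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w, b
```

The soft-thresholding step zeroes out weak coefficients, which is how a LASSO model would select which ophthalmic parameters enter the final predictor.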
Affiliation(s)
- Chao Zhang
- Department of Neurosurgery, Qilu Hospital of Shandong University, Jinan, China
- Institute of Brain and Brain-Inspired Science, Shandong University, Jinan, China
- Qian-qian Wu
- Department of Neurosurgery, Qilu Hospital of Shandong University, Jinan, China
- Ying Hou
- Department of Neurology, Qilu Hospital of Shandong University, Jinan, China
- Qi Wang
- Department of Gerontology, Shandong Provincial Qianfoshan Hospital, Jinan, China
- Guang-jian Zhang
- Department of Neurology, Weifang People's Hospital, Weifang, China
- Wen-bo Zhao
- Institute of Brain and Brain-Inspired Science, Shandong University, Jinan, China
- Xu Wang
- Institute of Brain and Brain-Inspired Science, Shandong University, Jinan, China
- Hong Wang
- Department of Ophthalmology, Qilu Hospital of Shandong University, Jinan, China
- Wei-guo Li
- Department of Neurosurgery, Qilu Hospital of Shandong University, Jinan, China
- *Correspondence: Wei-guo Li
68.
da Silva MV, Ouellette J, Lacoste B, Comin CH. An analysis of the influence of transfer learning when measuring the tortuosity of blood vessels. Comput Methods Programs Biomed 2022; 225:107021. [PMID: 35914440] [DOI: 10.1016/j.cmpb.2022.107021]
Abstract
BACKGROUND AND OBJECTIVE Convolutional Neural Networks (CNNs) can provide excellent results for the segmentation of blood vessels. One important aspect of CNNs is that they can be trained on large amounts of data and then be made available, for instance, in image processing software. The pre-trained CNNs can then be easily applied in downstream blood vessel characterization tasks, such as the calculation of the length, tortuosity, or caliber of the blood vessels. Yet, it is still unclear whether pre-trained CNNs can provide robust, unbiased results in downstream tasks involving the morphological analysis of blood vessels. Here, we focus on measuring the tortuosity of blood vessels and investigate to what extent CNNs may provide biased tortuosity values even after fine-tuning the network to a new dataset under study. METHODS We develop a procedure for quantifying the influence of CNN pre-training in downstream analyses involving the measurement of morphological properties of blood vessels. Using the methodology, we compare the performance of CNNs that were trained on images containing blood vessels having high tortuosity with CNNs that were trained on blood vessels with low tortuosity and fine-tuned on blood vessels with high tortuosity. The opposite situation is also investigated. RESULTS We show that the tortuosity values obtained by a CNN trained from scratch on a dataset may not agree with those obtained by a fine-tuned network that was pre-trained on a dataset having different tortuosity statistics. In addition, we show that improving the segmentation accuracy does not necessarily lead to better tortuosity estimation. To mitigate the aforementioned issues, we propose the application of data augmentation techniques even in situations where they do not improve segmentation performance. For instance, we found that the application of elastic transformations was enough to prevent an underestimation of 8% in blood vessel tortuosity when applying CNNs to different datasets. CONCLUSIONS The results highlight the importance of developing new methodologies for training CNNs with the specific goal of reducing the error of morphological measurements, as opposed to the traditional approach of using segmentation accuracy as a proxy metric for performance evaluation.
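One standard definition of the tortuosity being measured is the arc-to-chord ratio of a vessel centerline; the study may use a different or more elaborate metric, so treat this as a generic sketch:

```python
import numpy as np

def arc_chord_tortuosity(centerline):
    """Arc length divided by chord length of an ordered centerline
    (N x 2 array of pixel coordinates). Equals 1.0 for a straight
    segment and grows as the vessel becomes more tortuous."""
    p = np.asarray(centerline, dtype=float)
    arc = np.linalg.norm(np.diff(p, axis=0), axis=1).sum()   # path length
    chord = np.linalg.norm(p[-1] - p[0])                     # endpoint distance
    return arc / chord
```

Because the centerline is extracted from the CNN's segmentation, any systematic bias in the predicted vessel boundaries propagates directly into this ratio, which is the effect the paper quantifies.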
Affiliation(s)
- Matheus V da Silva
- Department of Computer Science, Federal University of São Carlos, São Carlos, SP, Brazil
- Julie Ouellette
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada; Neuroscience Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- Baptiste Lacoste
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Cesar H Comin
- Department of Computer Science, Federal University of São Carlos, São Carlos, SP, Brazil.
69.
Panda NR, Sahoo AK. A Detailed Systematic Review on Retinal Image Segmentation Methods. J Digit Imaging 2022; 35:1250-1270. [PMID: 35508746] [PMCID: PMC9582172] [DOI: 10.1007/s10278-022-00640-9]
Abstract
The separation of blood vessels in the retina is a major aspect of disease detection and is carried out by segregating the retinal blood vessels from the fundus images. Moreover, it helps to provide earlier therapy for deadly diseases and prevent further impacts of diabetes and hypertension. Many reviews already exist for this problem, but they have presented the analysis of a single framework. Hence, this review of retinal segmentation covers distinct methodologies with diverse frameworks that are utilized for blood vessel separation. The novelty of this review lies in finding the best neural network model by comparing efficiency: machine learning (ML) and deep learning (DL) approaches were compared, and the best-performing model is reported. Moreover, different datasets were used to segment the retinal blood vessels. The execution of each approach is compared based on performance metrics such as sensitivity, specificity, and accuracy using publicly accessible datasets such as STARE, DRIVE, ROSE, REFUGE, and CHASE. This article discloses the implementation capacity of the distinct techniques implemented for each segmentation method. Finally, the best accuracy of 98% and sensitivity of 96% were achieved by the technique of Convolutional Neural Network with Ranking Support Vector Machine (CNN-rSVM), which utilized public datasets to verify efficiency. Overall, this review highlights methods for earlier diagnosis of diseases to deliver earlier therapy.
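The sensitivity, specificity, and accuracy used to compare methods across DRIVE, STARE, and the other datasets are computed from pixel-wise confusion counts; a minimal reference implementation (illustrative):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise sensitivity (vessel recall), specificity
    (background recall), and accuracy for binary vessel masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)     # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)   # background correctly rejected
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / pred.size
    return sensitivity, specificity, accuracy
```

Because vessel pixels are a small minority of a fundus image, accuracy alone is inflated by the background class; sensitivity is the more telling figure for thin-vessel detection.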
Affiliation(s)
- Nihar Ranjan Panda
- Department of Electronics and Communication Engineering, Silicon Institute of Technology, Bhubaneswar, Orissa, 751024, India.
- Ajit Kumar Sahoo
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India
70.
Panahi A, Askari Moghadam R, Tarvirdizadeh B, Madani K. Simplified U-Net as a deep learning intelligent medical assistive tool in glaucoma detection. Evol Intell 2022. [DOI: 10.1007/s12065-022-00775-2]
71.
Nadeem MW, Goh HG, Hussain M, Liew SY, Andonovic I, Khan MA. Deep Learning for Diabetic Retinopathy Analysis: A Review, Research Challenges, and Future Directions. Sensors (Basel) 2022; 22:6780. [PMID: 36146130] [PMCID: PMC9505428] [DOI: 10.3390/s22186780]
Abstract
Deep learning (DL) enables the creation of computational models comprising multiple processing layers that learn data representations at multiple levels of abstraction. In the recent past, the use of deep learning has been proliferating, yielding promising results in applications across a growing number of fields, most notably in image processing, medical image analysis, data analysis, and bioinformatics. DL algorithms have also had a significant positive impact through yielding improvements in screening, recognition, segmentation, prediction, and classification applications across different domains of healthcare, such as those concerning the abdomen, cardiac, pathology, and retina. Given the extensive body of recent scientific contributions in this discipline, a comprehensive review of deep learning developments in the domain of diabetic retinopathy (DR) analysis, viz., screening, segmentation, prediction, classification, and validation, is presented here. A critical analysis of the relevant reported techniques is carried out, and the associated advantages and limitations highlighted, culminating in the identification of research gaps and future challenges that help to inform the research community to develop more efficient, robust, and accurate DL models for the various challenges in the monitoring and diagnosis of DR.
Affiliation(s)
- Muhammad Waqas Nadeem
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Hock Guan Goh
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Muzammil Hussain
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore 54000, Pakistan
- Soung-Yue Liew
- Faculty of Information and Communication Technology (FICT), Universiti Tunku Abdul Rahman (UTAR), Kampar 31900, Malaysia
- Ivan Andonovic
- Department of Electronic and Electrical Engineering, Royal College Building, University of Strathclyde, 204 George St., Glasgow G1 1XW, UK
- Muhammad Adnan Khan
- Pattern Recognition and Machine Learning Lab, Department of Software, Gachon University, Seongnam 13557, Korea
- Faculty of Computing, Riphah School of Computing and Innovation, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
72.
Guo S. CSGNet: Cascade semantic guided net for retinal vessel segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103930]
73.
Sevgi DD, Srivastava SK, Wykoff C, Scott AW, Hach J, O'Connell M, Whitney J, Vasanji A, Reese JL, Ehlers JP. Deep learning-enabled ultra-widefield retinal vessel segmentation with an automated quality-optimized angiographic phase selection tool. Eye (Lond) 2022; 36:1783-1788. [PMID: 34373610] [PMCID: PMC9391395] [DOI: 10.1038/s41433-021-01661-4]
Abstract
OBJECTIVES To demonstrate the feasibility of a deep learning-based vascular segmentation tool for ultra-widefield fluorescein angiography (UWFA) and evaluate its ability to automatically identify quality-optimized phase-specific images. METHODS Cumulative retinal vessel areas (RVA) were extracted from all available UWFA frames. Cubic splines were fitted for serial vascular assessment throughout the angiographic phases of eyes with diabetic retinopathy (DR), sickle cell retinopathy (SCR), or normal retinal vasculature. The image with maximum RVA was selected as the optimal early phase. A late-phase frame was selected at a minimum of 4 min that most closely mirrored the RVA of the selected early image. Trained image analysts evaluated the selected pairs. RESULTS A total of 13,980 UWFA sequences from 462 sessions were used to evaluate the performance, and 1578 UWFA sequences from 66 sessions were used to create the cubic splines. Maximum RVA was detected at a mean of 41 ± 15, 47 ± 27, and 38 ± 8 s for DR, SCR, and normal eyes, respectively. In 85.2% of the sessions, appropriate images for both phases were successfully identified. The individual success rate was 90.7% for early and 94.6% for late frames. CONCLUSIONS Retinal vascular characteristics are strongly phase- and field-of-view-dependent. Vascular parameters extracted by deep learning algorithms can be used for quality assessment of angiographic images and quality-optimized phase selection. Clinical application of a deep learning-based vascular segmentation and phase selection system might significantly improve the speed, consistency, and objectivity of UWFA evaluation.
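The selection rule described — the frame with maximum RVA as the early phase, then the ≥4-minute frame whose RVA most closely matches it as the late phase — reduces to a small search. The helper below is a hypothetical sketch of that rule alone; the published tool also fits cubic splines to the serial RVA measurements.

```python
import numpy as np

def select_phase_frames(rva, times_s, late_after_s=240):
    """rva: per-frame cumulative retinal vessel area; times_s: frame
    timestamps in seconds. Returns (early_idx, late_idx)."""
    rva, times_s = np.asarray(rva, float), np.asarray(times_s, float)
    early = int(np.argmax(rva))                      # peak-RVA frame
    late_pool = np.flatnonzero(times_s >= late_after_s)
    # late frame: RVA closest to the early-phase maximum
    late = int(late_pool[np.argmin(np.abs(rva[late_pool] - rva[early]))])
    return early, late
```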
Affiliation(s)
- Duriye Damla Sevgi
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Sunil K Srivastava
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Charles Wykoff
- Retina Consultants of America, Houston, Texas; Blanton Eye Institute, Houston Methodist Hospital & Weill Cornell Medical College, Houston, TX, USA
- Adrienne W Scott
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jenna Hach
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Margaret O'Connell
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Jon Whitney
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Jamie L Reese
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- Justis P Ehlers
- The Tony and Leona Campane Center for Excellence in Image-Guided Surgery and Advanced Imaging Research, Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA.
74.
Tan Y, Yang KF, Zhao SX, Li YJ. Retinal Vessel Segmentation With Skeletal Prior and Contrastive Loss. IEEE Trans Med Imaging 2022; 41:2238-2251. [PMID: 35320091] [DOI: 10.1109/tmi.2022.3161681]
Abstract
The morphology of retinal vessels is closely associated with many kinds of ophthalmic diseases. Although huge progress in retinal vessel segmentation has been achieved with the advancement of deep learning, some challenging issues remain. For example, vessels can be disturbed or covered by other components present in the retina (such as the optic disc or lesions). Moreover, some thin vessels are easily missed by current methods. In addition, existing fundus image datasets are generally small, due to the difficulty of vessel labeling. In this work, a new network called SkelCon is proposed to deal with these problems by introducing a skeletal prior and a contrastive loss. A skeleton fitting module is developed to preserve the morphology of the vessels and improve the completeness and continuity of thin vessels. A contrastive loss is employed to enhance the discrimination between vessels and background. In addition, a new data augmentation method is proposed to enrich the training samples and improve the robustness of the proposed model. Extensive validations were performed on several popular datasets (DRIVE, STARE, CHASE, and HRF), recently developed datasets (UoA-DR, IOSTAR, and RC-SLO), and some challenging clinical images (from the RFMiD and JSIEC39 datasets). In addition, some specially designed metrics for vessel segmentation, including connectivity, overlapping area, consistency of vessel length, revised sensitivity, specificity, and accuracy, were used for quantitative evaluation. The experimental results show that the proposed model achieves state-of-the-art performance and significantly outperforms the compared methods when extracting thin vessels in regions of lesions or the optic disc. Source code is available at https://www.github.com/tyb311/SkelCon.
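The role of the contrastive loss — increasing the discrimination between vessel and background embeddings — can be illustrated with a toy prototype-based contrastive objective. This is a didactic stand-in, not SkelCon's actual loss; the temperature `tau` is an assumed value.

```python
import numpy as np

def prototype_contrastive_loss(embed, labels, tau=0.1):
    """embed: (N, D) pixel embeddings; labels: (N,) 0=background,
    1=vessel. Each embedding is pulled toward its own class prototype
    (the class mean) and pushed from the other, via a softmax over
    cosine similarities scaled by temperature `tau`."""
    z = embed / np.linalg.norm(embed, axis=1, keepdims=True)
    protos = np.stack([z[labels == c].mean(axis=0) for c in (0, 1)])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    logits = z @ protos.T / tau                    # (N, 2) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(labels)), labels].mean()
```

Well-separated class embeddings yield a near-zero loss, while mixed embeddings are penalized; this pressure is what sharpens the vessel/background boundary during training.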
75.

76.
Zhang Q, Sampani K, Xu M, Cai S, Deng Y, Li H, Sun JK, Karniadakis GE. AOSLO-net: A Deep Learning-Based Method for Automatic Segmentation of Retinal Microaneurysms From Adaptive Optics Scanning Laser Ophthalmoscopy Images. Transl Vis Sci Technol 2022; 11:7. [PMID: 35938881] [PMCID: PMC9366726] [DOI: 10.1167/tvst.11.8.7]
Abstract
Purpose Accurate segmentation of microaneurysms (MAs) from adaptive optics scanning laser ophthalmoscopy (AOSLO) images is crucial for identifying MA morphologies and assessing the hemodynamics inside the MAs. Herein, we introduce AOSLO-net to perform automatic MA segmentation from AOSLO images of diabetic retinas. Methods AOSLO-net is a deep neural network based on UNet with a pretrained EfficientNet as the encoder. We have designed customized preprocessing and postprocessing policies for AOSLO images, including generation of multichannel images, de-noising, contrast enhancement, and ensemble and union of model predictions, to optimize the MA segmentation. AOSLO-net is trained and tested using 87 MAs imaged from 28 eyes of 20 subjects with varying severity of diabetic retinopathy (DR), which is the largest available AOSLO dataset for MA detection. To avoid overfitting in the model training process, we augment the training data by flipping, rotating, and scaling the original images to increase the diversity of data available for model training. Results The validity of the model is demonstrated by the good agreement between the predictions of AOSLO-net and the MA masks generated by ophthalmologists and skilled trainees on 87 patient-specific MA images. Our results show that AOSLO-net outperforms the state-of-the-art segmentation model (nnUNet) both in accuracy (e.g., intersection over union and Dice scores) and in computational cost. Conclusions We demonstrate that AOSLO-net provides high-quality MA segmentation from AOSLO images that enables correct MA morphological classification. Translational Relevance As the first attempt to automatically segment retinal MAs from AOSLO images, AOSLO-net could facilitate the pathological study of DR and help ophthalmologists make disease prognoses.
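The flip/rotate/scale augmentation used to stretch the 87-MA training set can be sketched with plain array operations; the scale range and probabilities here are illustrative choices, and the resize is a crude nearest-neighbour approximation rather than any particular library's resampler.

```python
import numpy as np

def augment(image, rng):
    """Return a randomly flipped, 90-degree-rotated, and rescaled
    copy of a 2D image (nearest-neighbour resize)."""
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)                 # random horizontal flip
    out = np.rot90(out, k=rng.integers(0, 4))  # random 0/90/180/270 rotation
    scale = rng.uniform(0.8, 1.2)            # random rescale factor
    h, w = out.shape
    rows = (np.arange(int(h * scale)) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(int(w * scale)) / scale).astype(int).clip(0, w - 1)
    return out[np.ix_(rows, cols)]
```

Each call yields a geometrically perturbed copy of the input, so a small dataset can be cycled through many distinct variants per epoch.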
Affiliation(s)
- Qian Zhang
- Division of Applied Mathematics, Brown University, Providence, RI, USA
- Konstantina Sampani
- Beetham Eye Institute, Joslin Diabetes Center, Department of Medicine and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Mengjia Xu
- Division of Applied Mathematics, Brown University, Providence, RI, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Shengze Cai
- Division of Applied Mathematics, Brown University, Providence, RI, USA
- Yixiang Deng
- School of Engineering, Brown University, Providence, RI, USA
- He Li
- School of Engineering, Brown University, Providence, RI, USA
- Jennifer K. Sun
- Beetham Eye Institute, Joslin Diabetes Center, Department of Medicine and Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- George Em Karniadakis
- Division of Applied Mathematics and School of Engineering, Brown University, Providence, RI, USA
77.
Su Y, Cheng J, Cao G, Liu H. How to design a deep neural network for retinal vessel segmentation: an empirical study. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103761]
78.
Liu W, Yang H, Tian T, Cao Z, Pan X, Xu W, Jin Y, Gao F. Full-Resolution Network and Dual-Threshold Iteration for Retinal Vessel and Coronary Angiograph Segmentation. IEEE J Biomed Health Inform 2022; 26:4623-4634. [PMID: 35788455] [DOI: 10.1109/jbhi.2022.3188710]
Abstract
Vessel segmentation is critical for disease diagnosis and surgical planning. Recently, the vessel segmentation method based on deep learning has achieved outstanding performance. However, vessel segmentation remains challenging due to thin vessels with low contrast that easily lose spatial information in the traditional U-shaped segmentation network. To alleviate this problem, we propose a novel and straightforward full-resolution network (FR-UNet) that expands horizontally and vertically through a multiresolution convolution interactive mechanism while retaining full image resolution. In FR-UNet, the feature aggregation module integrates multiscale feature maps from adjacent stages to supplement high-level contextual information. The modified residual blocks continuously learn multiresolution representations to obtain a pixel-level accuracy prediction map. Moreover, we propose the dual-threshold iterative algorithm (DTI) to extract weak vessel pixels for improving vessel connectivity. The proposed method was evaluated on retinal vessel datasets (DRIVE, CHASE_DB1, and STARE) and coronary angiography datasets (DCA1 and CHUAC). The results demonstrate that FR-UNet outperforms state-of-the-art methods by achieving the highest Sen, AUC, F1, and IOU on most of the above-mentioned datasets with fewer parameters, and that DTI enhances vessel connectivity while greatly improving sensitivity. The code is available at: https://github.com/lseventeen/FR-UNet.
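The dual-threshold idea — admitting weak vessel pixels only when they connect to confidently detected ones — is closely related to classical hysteresis thresholding. The sketch below is that simplified single-pass variant with assumed thresholds `low` and `high`, not the paper's iterative DTI algorithm.

```python
import numpy as np
from scipy import ndimage

def dual_threshold(prob, low=0.3, high=0.7):
    """Keep weak pixels (prob >= low) only if their 4-connected
    component also contains a strong pixel (prob >= high)."""
    weak = prob >= low
    strong = prob >= high
    labels, _ = ndimage.label(weak)
    # component ids that contain at least one strong pixel
    keep = np.unique(labels[strong])
    return np.isin(labels, keep[keep > 0])
```

Compared with a single threshold, this recovers faint vessel continuations attached to confident segments while still rejecting isolated low-probability noise, which is how DTI improves connectivity and sensitivity together.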
79
Amran D, Artzi M, Aizenstein O, Ben Bashat D, Bermano AH. BV-GAN: 3D time-of-flight magnetic resonance angiography cerebrovascular vessel segmentation using adversarial CNNs. J Med Imaging (Bellingham) 2022; 9:044503. [PMID: 36061214] [PMCID: PMC9429992] [DOI: 10.1117/1.jmi.9.4.044503]
Abstract
Purpose: Cerebrovascular vessel segmentation is a key step in the detection of vessel pathology. Brain time-of-flight magnetic resonance angiography (TOF-MRA) is a main method used clinically for imaging blood vessels with magnetic resonance imaging, primarily to detect narrowing or blockage of the arteries and aneurysms. Despite its importance, TOF-MRA interpretation relies mostly on visual, subjective assessment performed by a neuroradiologist and is mostly based on maximum intensity projection reconstructions of the three-dimensional (3D) scan, thus reducing the acquired spatial resolution. Works tackling the central problem of automatically segmenting brain blood vessels typically suffer from memory- and imbalance-related issues. To address these issues, the spatial context considered by the neural networks is typically restricted (e.g., by resolution reduction or analysis of lower-dimensional environments). Although efficient, such solutions hinder the ability of the neural networks to understand the complex 3D structures typical of the cerebrovascular system and to leverage this understanding for decision making. Approach: We propose a brain-vessels generative-adversarial-network (BV-GAN) segmentation model that better considers connectivity and structural integrity, using prior-based attention and adversarial learning techniques. Results: For evaluation, fivefold cross-validation experiments were performed on two datasets. BV-GAN demonstrates a consistent improvement of up to 10% in vessel Dice score with each designed component added to the baseline state-of-the-art models. Conclusions: Potentially, this automated 3D approach could shorten analysis time, allow quantitative characterization of vascular structures, and reduce the need to decrease resolution, overall improving the diagnosis of cerebrovascular vessel disorders.
Affiliation(s)
- Dor Amran
- Tel-Aviv University, School of Electrical Engineering, Tel-Aviv, Israel
- Moran Artzi
- Tel Aviv Sourasky Medical Center, Sagol Brain Institute, Tel Aviv, Israel
- Tel-Aviv University, Sackler Faculty of Medicine, Tel-Aviv, Israel
- Tel-Aviv University, Sagol School of Neuroscience, Tel-Aviv, Israel
- Orna Aizenstein
- Tel-Aviv University, Sackler Faculty of Medicine, Tel-Aviv, Israel
- Tel Aviv Sourasky Medical Center, Neuroradiology Unit, Imaging Department, Tel Aviv, Israel
- Dafna Ben Bashat
- Tel Aviv Sourasky Medical Center, Sagol Brain Institute, Tel Aviv, Israel
- Tel-Aviv University, Sackler Faculty of Medicine, Tel-Aviv, Israel
- Tel-Aviv University, Sagol School of Neuroscience, Tel-Aviv, Israel
- Amit H. Bermano
- Tel-Aviv University, Blavatnik School of Computer Science, Tel-Aviv, Israel
80
Multi-class nucleus detection and classification using deep convolutional neural network with enhanced high dimensional dissimilarity translation model on cervical cells. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.003]
81
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about the three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural network-based systems, the green channel is most commonly used. However, from the previous works, no conclusion can be drawn regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
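As a concrete illustration of the channel comparison discussed above, a minimal sketch of splitting an RGB fundus array into channels and comparing a crude contrast proxy (the toy pixel values are invented for illustration; real fundus vessels are typically darkest, hence highest-contrast, in the green channel):

```python
import numpy as np

def split_channels(rgb):
    """Split an H x W x 3 image array into red, green, and blue channels."""
    return rgb[..., 0], rgb[..., 1], rgb[..., 2]

def channel_contrast(channel):
    """Crude contrast proxy: standard deviation of pixel intensities."""
    return float(np.std(channel))

# Toy 2x2 "fundus": vessels show up as intensity variation only in green here.
img = np.array([[[200,  80, 120], [200, 180, 120]],
                [[200,  90, 120], [200, 170, 120]]], dtype=np.uint8)
r, g, b = split_channels(img)
```

A non-neural pipeline would then typically operate on `g` alone, which is the practice the survey reports for non-neural systems.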
Affiliation(s)
- Sangeeta Biswas
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
- Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
- CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
- Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
- Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
82
Zhao J, Lu Y, Zhu S, Li K, Jiang Q, Yang W. Systematic Bibliometric and Visualized Analysis of Research Hotspots and Trends on the Application of Artificial Intelligence in Ophthalmic Disease Diagnosis. Front Pharmacol 2022; 13:930520. [PMID: 35754490] [PMCID: PMC9214201] [DOI: 10.3389/fphar.2022.930520]
Abstract
Background: Artificial intelligence (AI) has been used in research on ophthalmic disease diagnosis, and it may have an impact on medical and ophthalmic practice in the future. This study explores the general application and research frontier of artificial intelligence in ophthalmic disease detection. Methods: Citation data were downloaded from the Web of Science Core Collection database to evaluate the extent of the application of artificial intelligence in ophthalmic disease diagnosis in publications from 1 January 2012 to 31 December 2021. This information was analyzed using CiteSpace 5.8.R3 and VOSviewer. Results: A total of 1,498 publications from 95 areas were examined, of which the United States was determined to be the most influential country in this research field. The largest cluster, labeled "Brownian motion", was used prior to the application of AI for ophthalmic diagnosis from 2007 to 2017 and was an active topic during this period. The burst keywords in the period from 2020 to 2021 were system, disease, and model. Conclusion: The focus of artificial intelligence research in ophthalmic disease diagnosis has transitioned from the development of AI algorithms and the analysis of abnormal eye physiological structure to the investigation of more mature ophthalmic disease diagnosis systems. However, there is a need for further studies in ophthalmology and computer engineering.
Affiliation(s)
- Junqiang Zhao
- Department of Nursing, Xinxiang Medical University, Xinxiang, China
- Yi Lu
- Department of Nursing, Xinxiang Medical University, Xinxiang, China
- Shaojun Zhu
- School of Information Engineering, Huzhou University, Huzhou, China
- Keran Li
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Qin Jiang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
83
Yang D, Zhao H, Han T. Learning feature-rich integrated comprehensive context networks for automated fundus retinal vessel analysis. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.03.061]
84
Sharma P, Ninomiya T, Omodaka K, Takahashi N, Miya T, Himori N, Okatani T, Nakazawa T. A lightweight deep learning model for automatic segmentation and analysis of ophthalmic images. Sci Rep 2022; 12:8508. [PMID: 35595784] [PMCID: PMC9122907] [DOI: 10.1038/s41598-022-12486-w]
Abstract
Detection, diagnosis, and treatment of ophthalmic diseases depend on the extraction of information (features and/or their dimensions) from images, and deep learning (DL) models are crucial for automating this step. Here, we report on the development of a lightweight DL model that can precisely segment and detect the required features automatically. The model utilizes dimensionality reduction of the image to extract important features, and channel contraction to retain only the high-level features necessary for reconstructing the segmented feature image. The performance of the present model in detecting glaucoma from optical coherence tomography angiography (OCTA) images of the retina is high (area under the receiver-operator characteristic curve, AUC ~ 0.81). Bland–Altman analysis gave an exceptionally low bias (~ 0.00185) and a high Pearson's correlation coefficient (p = 0.9969) between the parameters determined from manual and DL-based segmentation. On the same dataset, the bias is an order of magnitude higher (~ 0.0694, p = 0.8534) for commercial software. The present model is 10 times lighter than U-Net (popular for biomedical image segmentation) and has better segmentation accuracy and model-training reproducibility (based on the analysis of 3,670 OCTA images). The high Dice similarity coefficient (D) for a variety of ophthalmic images suggests its wider scope for precise segmentation of images even from other fields. Our concept of channel narrowing is not only important for segmentation problems; it can also significantly reduce the number of parameters in object-classification models. Enhanced disease-diagnostic accuracy can be achieved on resource-limited devices (such as mobile phones, Nvidia's Jetson, and Raspberry Pi) used in self-monitoring and tele-screening (memory size of the trained model ~ 35 MB).
Affiliation(s)
- Parmanand Sharma
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan
- Takahiro Ninomiya
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Kazuko Omodaka
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Naoki Takahashi
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan
- Takehiro Miya
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan
- Noriko Himori
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Aging Vision Healthcare, Tohoku University Graduate School of Biomedical Engineering, Sendai, Japan
- Takayuki Okatani
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Toru Nakazawa
- Department of Ophthalmology, Tohoku University Graduate School of Medicine, Sendai, Japan; Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Retinal Disease Control, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Ophthalmic Imaging and Information Analytics, Tohoku University Graduate School of Medicine, Sendai, Japan; Department of Advanced Ophthalmic Medicine, Tohoku University Graduate School of Medicine, Sendai, Japan
85
Backtracking Reconstruction Network for Three-Dimensional Compressed Hyperspectral Imaging. Remote Sens 2022. [DOI: 10.3390/rs14102406]
Abstract
Compressed sensing (CS) has been widely used in hyperspectral (HS) imaging to obtain hyperspectral data at a sub-Nyquist sampling rate, lifting the efficiency of data acquisition. Yet, reconstructing the acquired HS data via iterative algorithms is time-consuming, which hinders the real-time application of compressed HS imaging. To alleviate this problem, this paper makes the first attempt to adopt convolutional neural networks (CNNs) to reconstruct three-dimensional compressed HS data by backtracking the entire imaging process, leading to a simple yet effective network, dubbed the backtracking reconstruction network (BTR-Net). Concretely, we leverage the divide-and-conquer method to divide the imaging process based on a coded aperture tunable filter (CATF) spectral imager into steps, and build a subnetwork for each step to specialize in its reverse process. Consequently, BTR-Net introduces multiple built-in networks which perform spatial initialization, spatial enhancement, spectral initialization, and spatial–spectral enhancement in an independent and sequential manner. Extensive experiments show that BTR-Net can reconstruct compressed HS data quickly and accurately, outperforming leading iterative algorithms both quantitatively and visually, while having superior resistance to noise.
86
Sadeghibakhi M, Pourreza H, Mahyar H. Multiple Sclerosis Lesions Segmentation Using Attention-Based CNNs in FLAIR Images. IEEE J Transl Eng Health Med 2022; 10:1800411. [PMID: 35711337] [PMCID: PMC9191687] [DOI: 10.1109/jtehm.2022.3172025]
Abstract
Objective: Multiple Sclerosis (MS) is an autoimmune and demyelinating disease that leads to lesions in the central nervous system. This disease can be tracked and diagnosed using Magnetic Resonance Imaging (MRI). A multitude of multimodality automatic biomedical approaches are used to segment lesions, which is not beneficial for patients in terms of cost, time, and usability. The authors of the present paper propose a method employing just one modality (the FLAIR image) to segment MS lesions accurately. Methods: A patch-based Convolutional Neural Network (CNN) is designed, inspired by 3D-ResNet and a spatial-channel attention module, to segment MS lesions. The proposed method consists of three stages: (1) Contrast-Limited Adaptive Histogram Equalization (CLAHE) is applied to the original images and concatenated to the extracted edges to create 4D images; (2) patches of size [Formula: see text] are randomly selected from the 4D images; and (3) the extracted patches are passed into an attention-based CNN which is used to segment the lesions. Finally, the proposed method was compared to previous studies on the same dataset. Results: The current study evaluates the model with a test set of ISBI challenge data. Experimental results illustrate that the proposed approach significantly surpasses existing methods in Dice similarity and Absolute Volume Difference while using just one modality (FLAIR) to segment the lesions. Conclusion: The authors have introduced an automated approach to segment the lesions, which is based on, at most, two modalities as input. The proposed architecture comprises convolution, deconvolution, and an SCA-VoxRes module as an attention module. The results show that the proposed method performs well compared to other methods.
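Stage (2) of the pipeline above, random patch selection from the concatenated 4D images, can be sketched as follows. The channel count, volume size, and patch size here are illustrative assumptions (the abstract elides the exact patch size):

```python
import numpy as np

def sample_patches(volume, patch_size, n_patches, rng):
    """Randomly crop cubic patches from a (C, D, H, W) volume, as in a
    patch-based training stage. All sizes are illustrative."""
    c, d, h, w = volume.shape
    p = patch_size
    patches = []
    for _ in range(n_patches):
        z = rng.integers(0, d - p + 1)
        y = rng.integers(0, h - p + 1)
        x = rng.integers(0, w - p + 1)
        patches.append(volume[:, z:z + p, y:y + p, x:x + p])
    return np.stack(patches)

rng = np.random.default_rng(0)
# Two channels standing in for the CLAHE image and its extracted edges.
vol = rng.normal(size=(2, 16, 16, 16))
batch = sample_patches(vol, patch_size=8, n_patches=4, rng=rng)
```

Training on patches rather than whole volumes is what keeps memory usage manageable for 3D attention-based CNNs of this kind.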
Affiliation(s)
- Mehdi Sadeghibakhi
- MV Laboratory, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
- Hamidreza Pourreza
- MV Laboratory, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
- Hamidreza Mahyar
- Faculty of Engineering, W Booth School of Engineering Practice and Technology, McMaster University, Hamilton, ON L8S 4L8, Canada
87
Hussain S, Guo F, Li W, Shen Z. DilUnet: A U-net based architecture for blood vessels segmentation. Comput Methods Programs Biomed 2022; 218:106732. [PMID: 35279601] [DOI: 10.1016/j.cmpb.2022.106732]
Abstract
BACKGROUND AND OBJECTIVE Retinal image segmentation can help clinicians detect pathological disorders by studying changes in retinal blood vessels. This early detection can help prevent blindness and many other vision impairments. So far, several supervised and unsupervised methods have been proposed for the task of automatic blood vessel segmentation. However, the sensitivity and the robustness of these methods can be improved by correctly classifying more vessel pixels. METHOD We proposed an automatic retinal blood vessel segmentation method based on the U-net architecture. This end-to-end framework utilizes preprocessing and a data augmentation pipeline for training. The architecture utilizes multiscale input and multioutput modules with improved skip connections and the correct use of dilated convolutions for effective feature extraction. In multiscale input, the input image is scaled down and concatenated with the output of convolutional blocks at different points in the encoder path to ensure the feature transfer of the original image. The multioutput module obtains upsampled outputs from each decoder block that are combined to obtain the final output. Skip paths connect each encoder block with the corresponding decoder block, and the whole architecture utilizes different dilation rates to improve the overall feature extraction. RESULTS The proposed method achieved an accuracy of 0.9680, 0.9694, and 0.9701; sensitivity of 0.8837, 0.8263, and 0.8713; and Intersection over Union (IoU) of 0.8698, 0.7951, and 0.8184 on three publicly available datasets: DRIVE, STARE, and CHASE, respectively. An ablation study is performed to show the contribution of each proposed module and technique. CONCLUSION The evaluation metrics revealed that the performance of the proposed method is higher than that of the original U-net and other U-net-based architectures, as well as many other state-of-the-art segmentation techniques, and that the proposed method is robust to noise.
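The dilated convolutions mentioned in this abstract enlarge the receptive field without adding parameters by spacing the kernel taps apart. A minimal numpy sketch of the idea (the kernel and dilation rates are illustrative, not DilUnet's actual layers):

```python
import numpy as np

def dilated_conv2d(x, kernel, rate=1):
    """2D 'valid' convolution with a dilation rate: the kernel taps are
    spaced `rate` pixels apart, so a 3x3 kernel with rate=2 covers a 5x5
    receptive field at the same parameter cost."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * rate + 1   # effective receptive-field height
    eff_w = (kw - 1) * rate + 1
    out_h = x.shape[0] - eff_h + 1
    out_w = x.shape[1] - eff_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i + eff_h:rate, j:j + eff_w:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

x = np.arange(36, dtype=float).reshape(6, 6)
k = np.ones((3, 3)) / 9.0                 # averaging kernel, 9 parameters
dense = dilated_conv2d(x, k, rate=1)      # 4x4 output, 3x3 receptive field
dilated = dilated_conv2d(x, k, rate=2)    # 2x2 output, 5x5 receptive field
```

Mixing several rates, as the architecture does, lets the same cheap kernels see both fine vessel detail and wider context.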
Affiliation(s)
- Snawar Hussain
- School of Automation, Central South University, Changsha, Hunan 410083, China
- Fan Guo
- School of Automation, Central South University, Changsha, Hunan 410083, China
- Weiqing Li
- School of Automation, Central South University, Changsha, Hunan 410083, China
- Ziqi Shen
- School of Automation, Central South University, Changsha, Hunan 410083, China
88
Guo S. LightEyes: A Lightweight Fundus Segmentation Network for Mobile Edge Computing. Sensors (Basel) 2022; 22:3112. [PMID: 35590802] [PMCID: PMC9104959] [DOI: 10.3390/s22093112]
Abstract
The fundus is the only structure that can be observed without trauma to the human body. By analyzing color fundus images, the diagnostic basis for various diseases can be obtained. Recently, fundus image segmentation has witnessed vast progress with the development of deep learning. However, the improvement in segmentation accuracy comes with the complexity of deep models. As a result, these models show low inference speeds and high memory usage when deployed to mobile edge devices. To promote the deployment of deep fundus segmentation models to mobile devices, we aim to design a lightweight fundus segmentation network. Our observation comes from the fact that high-resolution representations can boost the segmentation of tiny fundus structures, and that the classification of small fundus structures depends more on local features. To this end, we propose a lightweight segmentation model called LightEyes. We first design a high-resolution backbone network to learn high-resolution representations, so that the spatial relationship between feature maps is always retained. Meanwhile, since high-resolution features mean high memory usage, we use at most 16 convolutional filters per layer to reduce memory usage and decrease training difficulty. LightEyes has been verified on three kinds of fundus segmentation tasks, namely the hard exudate, the microaneurysm, and the vessel, on five publicly available datasets. Experimental results show that LightEyes achieves highly competitive segmentation accuracy and speed compared with state-of-the-art fundus segmentation models, while running at 1.6 images/s on a Cambricon-1A and 51.3 images/s on a GPU with only 36k parameters.
Affiliation(s)
- Song Guo
- School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, China
89
State-of-the-art retinal vessel segmentation with minimalistic models. Sci Rep 2022; 12:6174. [PMID: 35418576] [PMCID: PMC9007957] [DOI: 10.1038/s41598-022-09675-y]
Abstract
The segmentation of retinal vasculature from eye fundus images is a fundamental task in retinal image analysis. Over recent years, increasingly complex approaches based on sophisticated Convolutional Neural Network architectures have been pushing performance on well-established benchmark datasets. In this paper, we take a step back and analyze the real need for such complexity. We first compile and review the performance of 20 different techniques on some popular databases, and we demonstrate that a minimalistic version of a standard U-Net with several orders of magnitude fewer parameters, carefully trained and rigorously evaluated, closely approximates the performance of current best techniques. We then show that a cascaded extension (W-Net) reaches outstanding performance on several popular datasets, still using orders of magnitude fewer learnable weights than any previously published work. Furthermore, we provide the most comprehensive cross-dataset performance analysis to date, involving up to 10 different databases. Our analysis demonstrates that retinal vessel segmentation is far from solved when considering test images that differ substantially from the training data, and that this task represents an ideal scenario for the exploration of domain adaptation techniques. In this context, we experiment with a simple self-labeling strategy that enables moderate enhancement of cross-dataset performance, indicating that there is still much room for improvement in this area. Finally, we test our approach on Artery/Vein and vessel segmentation from OCTA imaging problems, where we again achieve results well aligned with the state of the art, at a fraction of the model complexity available in recent literature. Code to reproduce the results in this paper is released.
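The "orders of magnitude fewer parameters" claim is easy to make concrete with a back-of-the-envelope count. The sketch below uses a generic two-convs-per-level U-Net encoder with channel doubling, not the authors' exact architecture, and omits the decoder for brevity:

```python
def conv_params(c_in, c_out, k=3):
    """Parameters in one conv layer: k*k*c_in*c_out weights + c_out biases."""
    return k * k * c_in * c_out + c_out

def unet_encoder_params(base_width, depth=4, in_ch=3):
    """Rough parameter count for a U-Net-style encoder with two 3x3 convs
    per level and channel doubling. A generic sketch, not the paper's model."""
    total, c_in, width = 0, in_ch, base_width
    for _ in range(depth):
        total += conv_params(c_in, width) + conv_params(width, width)
        c_in, width = width, width * 2
    return total

standard = unet_encoder_params(base_width=64)  # classic U-Net base width
minimal = unet_encoder_params(base_width=8)    # a "minimalistic" variant
ratio = standard / minimal                     # roughly 64x fewer parameters
```

Shrinking the base width from 64 to 8 cuts the encoder from millions of weights to tens of thousands, which is the scale of reduction the paper argues costs surprisingly little accuracy.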
90
Pancreas segmentation by two-view feature learning and multi-scale supervision. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103519]
91
Shen X, Xu J, Jia H, Fan P, Dong F, Yu B, Ren S. Self-attentional microvessel segmentation via squeeze-excitation transformer Unet. Comput Med Imaging Graph 2022; 97:102055. [DOI: 10.1016/j.compmedimag.2022.102055]
92
Xu X, Wang Y, Liang Y, Luo S, Wang J, Jiang W, Lai X. Retinal Vessel Automatic Segmentation Using SegNet. Comput Math Methods Med 2022; 2022:3117455. [PMID: 35378728] [PMCID: PMC8976667] [DOI: 10.1155/2022/3117455]
Abstract
Extracting retinal vessels accurately is very important for diagnosing diseases such as diabetic retinopathy, hypertension, and cardiovascular disease. Clinically, experienced ophthalmologists diagnose these diseases by segmenting retinal vessels manually and analysing structural features such as tortuosity and diameter. However, manual segmentation of retinal vessels is a time-consuming and laborious task with strong subjectivity. Automatic segmentation of retinal vessels can not only reduce the burden on ophthalmologists but also effectively address the lack of experienced ophthalmologists in remote areas. Therefore, automatic retinal vessel segmentation is of great significance for clinical auxiliary diagnosis and treatment of ophthalmic diseases. A method using SegNet is proposed in this paper to improve the accuracy of retinal vessel segmentation. The performance of the retinal vessel segmentation model with SegNet is evaluated on three public datasets (DRIVE, STARE, and HRF), achieving accuracy of 0.9518, 0.9683, and 0.9653; sensitivity of 0.7580, 0.7747, and 0.7070; specificity of 0.9804, 0.9910, and 0.9885; F1 score of 0.7992, 0.8369, and 0.7918; MCC of 0.7749, 0.8227, and 0.7643; and AUC of 0.9750, 0.9893, and 0.9740, respectively. The experimental results showed that the method proposed in this research presented better results than many classical methods and may be expected to have clinical application prospects.
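The accuracy, sensitivity, specificity, F1, and MCC figures quoted in abstracts like this one all derive from the pixel-level confusion matrix. A minimal sketch with made-up counts (the numbers below are invented for illustration, not from any paper):

```python
import math

def segmentation_metrics(tp, fp, tn, fn):
    """Pixel-level metrics commonly reported for vessel segmentation."""
    acc = (tp + tn) / (tp + fp + tn + fn)
    sen = tp / (tp + fn)                  # sensitivity / recall
    spe = tn / (tn + fp)                  # specificity
    pre = tp / (tp + fp)                  # precision
    f1 = 2 * pre * sen / (pre + sen)      # harmonic mean of precision/recall
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"Acc": acc, "Sen": sen, "Spe": spe, "F1": f1, "MCC": mcc}

# Illustrative counts: vessels are a small minority of pixels, which is why
# accuracy alone looks flattering and Sen/F1/MCC are reported alongside it.
m = segmentation_metrics(tp=800, fp=50, tn=9000, fn=150)
```

Because vessel pixels are rare, a model that misses many thin vessels can still post high accuracy and specificity; F1 and MCC expose that imbalance.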
Affiliation(s)
- Xiaomei Xu
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou 310053, China
- Yixin Wang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou 310053, China
- Yu Liang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou 310053, China
- Siyuan Luo
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou 310053, China
- Jianqing Wang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou 310053, China
- Weiwei Jiang
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou 310053, China
- Xiaobo Lai
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou 310053, China
93
Das D, Biswas SK, Bandyopadhyay S. A critical review on diagnosis of diabetic retinopathy using machine learning and deep learning. Multimed Tools Appl 2022; 81:25613-25655. [PMID: 35342328] [PMCID: PMC8940593] [DOI: 10.1007/s11042-022-12642-4]
Abstract
Diabetic Retinopathy (DR) is a health condition caused by Diabetes Mellitus (DM). It causes vision problems and blindness due to disfigurement of the human retina. According to statistics, 80% of patients who have battled diabetes for a long period of 15 to 20 years suffer from DR. Hence, it has become a dangerous threat to the health and life of people. To overcome DR, manual diagnosis of the disease is feasible but overwhelming and cumbersome, and hence requires a revolutionary method. Thus, such a health condition necessitates early recognition and diagnosis to prevent DR from developing into severe stages and to prevent blindness. Innumerable Machine Learning (ML) models have been proposed by researchers across the globe to achieve this purpose, along with various feature extraction techniques for extracting DR features for early detection. However, traditional ML models have shown either meagre generalization in feature extraction and classification when deployed on smaller datasets, or long training times that cause inefficient prediction on larger datasets. Hence Deep Learning (DL), a new domain of ML, is introduced. DL models can handle a smaller dataset with the help of efficient data-processing techniques, though they generally incorporate larger datasets for their deep architectures to enhance performance in feature extraction and image classification. This paper gives a detailed review of DR, its features, causes, ML models, state-of-the-art DL models, challenges, comparisons, and future directions for the early detection of DR.
Affiliation(s)
- Dolly Das
- National Institute of Technology Silchar, Cachar, Assam, India
94
Jiang Y, Qi S, Meng J, Cui B. SS-net: split and spatial attention network for vessel segmentation of retinal OCT angiography. Appl Opt 2022; 61:2357-2363. [PMID: 35333254] [DOI: 10.1364/ao.451370]
Abstract
Optical coherence tomography angiography (OCTA) has been widely used in clinical fields because of its noninvasive, high-resolution qualities. Accurate vessel segmentation of OCTA images plays an important role in disease diagnosis. Most deep learning methods are based on region segmentation, which may produce inaccurate segmentations of the extremely complex curved structures of retinal vessels. We propose a U-shaped network called SS-Net, based on the attention mechanism, to address the problem of continuously segmenting the discontinuous vessels of retinal OCTA. In SS-Net, an improved SRes Block combines the residual structure with split attention to prevent vanishing gradients and give greater weight to capillary features, forming a backbone with an encoder-decoder architecture. In addition, spatial attention is applied to extract key information along the spatial dimensions. To enhance credibility, we evaluate SS-Net with several indicators. On two datasets, accuracy reaches 0.9258 and 0.9377, respectively, and the Dice coefficient improves by around 3% over state-of-the-art segmentation models.
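The spatial-attention idea in this abstract can be illustrated in outline. The sketch below follows the common CBAM-style formulation (channel-wise average and max pooling combined into a per-pixel weight map); it is an assumption for illustration, not SS-Net's actual layers, which use learned convolutions and split attention:

```python
import numpy as np

def spatial_attention(feat):
    """Weight each spatial location of a (C, H, W) feature map.

    The attention map is derived from channel-wise average and max pooling.
    The sum below stands in for the learned convolution a real module uses.
    """
    avg_pool = feat.mean(axis=0, keepdims=True)   # (1, H, W)
    max_pool = feat.max(axis=0, keepdims=True)    # (1, H, W)
    combined = avg_pool + max_pool                # stand-in for a learned conv
    attn = 1.0 / (1.0 + np.exp(-combined))        # sigmoid -> weights in (0, 1)
    return feat * attn                            # broadcast over channels

feat = np.random.rand(16, 32, 32)                 # toy feature map
out = spatial_attention(feat)
```

Each spatial location is scaled by a single scalar weight shared across channels, which is what lets such a module emphasize thin-vessel regions without altering the feature dimensionality.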
|
95
|
Bhatia S, Alam S, Shuaib M, Hameed Alhameed M, Jeribi F, Alsuwailem RI. Retinal Vessel Extraction via Assisted Multi-Channel Feature Map and U-Net. Front Public Health 2022; 10:858327. [PMID: 35372222 PMCID: PMC8968759 DOI: 10.3389/fpubh.2022.858327] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 02/04/2022] [Indexed: 11/13/2022] Open
Abstract
Early detection of vessels in fundus images can effectively prevent the permanent retinal damage caused by retinopathies such as glaucoma, hypertension, and diabetes. Because both the retinal vessels and the background appear red, and because of the vessels' morphological variations, current vessel detection methodologies fail to segment thin vessels and to discriminate them in the regions where permanent retinopathies mainly occur. This research proposes a novel approach that combines the benefits of traditional template-matching methods with recent deep learning (DL) solutions. The two methods are combined so that the response of a Cauchy matched filter replaces the noisy red channel of the fundus images. A U-shaped fully convolutional neural network (U-Net) is then employed to train end-to-end segmentation of pixels into vessel and background classes. Each preprocessed image is divided into several patches to provide enough training images and to speed up training per instance. The public DRIVE database was used to test the proposed method, and metrics such as Accuracy, Precision, Sensitivity, and Specificity were measured for evaluation. The evaluation indicates that the average extraction accuracy of the proposed model is 0.9640 on the employed dataset.
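The preprocessing pipeline this abstract describes (matched-filter response substituted for the noisy red channel, then patch extraction for U-Net training) can be sketched as follows. The kernel size, the gamma parameter, the use of the green channel as filter input, and the patch size are all illustrative assumptions, not the paper's settings:

```python
import numpy as np

def cauchy_kernel(size=7, gamma=1.5):
    """Zero-mean 1-D Cauchy profile, a stand-in for a matched-filter template."""
    x = np.arange(size) - size // 2
    k = 1.0 / (1.0 + (x / gamma) ** 2)
    return k - k.mean()            # zero mean so a flat background responds with 0

def matched_filter_response(gray, kernel):
    """Correlate each image row with the kernel (reflect-padded)."""
    pad = len(kernel) // 2
    padded = np.pad(gray, ((0, 0), (pad, pad)), mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, len(kernel), axis=1)
    return windows @ kernel        # (H, W) response map

def to_patches(img, patch=16):
    """Tile an image into non-overlapping square patches for patch-wise training."""
    h, w = img.shape
    img = img[: h - h % patch, : w - w % patch]
    return (img.reshape(-1, patch, img.shape[1] // patch, patch)
               .swapaxes(1, 2)
               .reshape(-1, patch, patch))

rgb = np.random.rand(64, 64, 3)                                  # toy fundus image
response = matched_filter_response(rgb[..., 1], cauchy_kernel())  # filter the green channel
rgb_enhanced = rgb.copy()
rgb_enhanced[..., 0] = response    # replace the noisy red channel with the response
patches = to_patches(rgb_enhanced[..., 0], patch=16)
```

The patches would then be fed to a U-Net alongside matching label patches; only the channel-replacement and tiling steps are shown here.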
Affiliation(s)
- Surbhi Bhatia
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Hofuf, Saudi Arabia
- *Correspondence: Surbhi Bhatia
- Shadab Alam
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Mohammed Shuaib
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Fathe Jeribi
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
- Razan Ibrahim Alsuwailem
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Hofuf, Saudi Arabia
|
96
|
Xu J, Shen J, Wan C, Jiang Q, Yan Z, Yang W. A Few-Shot Learning-Based Retinal Vessel Segmentation Method for Assisting in the Central Serous Chorioretinopathy Laser Surgery. Front Med (Lausanne) 2022; 9:821565. [PMID: 35308538 PMCID: PMC8927682 DOI: 10.3389/fmed.2022.821565] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 01/28/2022] [Indexed: 12/05/2022] Open
Abstract
BACKGROUND The location of retinal vessels is an important prerequisite for Central Serous Chorioretinopathy (CSC) laser surgery: it not only assists the ophthalmologist in marking the location of the leakage point (LP) on the fundus color image but also avoids damage to vessel tissue by the laser spot, as well as the loss of surgical efficiency caused by retinal vessels absorbing laser energy. To acquire good intra- and cross-domain adaptability, existing deep learning (DL)-based vessel segmentation schemes must be driven by big data, which makes dense annotation tedious and costly. METHODS This paper explores a new vessel segmentation method that needs only a few samples and annotations to alleviate the above problems. First, a key solution is presented that casts vessel segmentation as a few-shot learning task, laying the foundation for segmentation with few samples and annotations. We then adapt an existing few-shot learning framework as our baseline model for the vessel segmentation scenario. Next, the baseline model is upgraded in three respects: (1) a multi-scale class prototype extraction technique is designed to obtain richer vessel features and better exploit the information in the support images; (2) the multi-scale vessel features of the query images, inferred from the support-image class prototypes, are gradually fused to provide more effective guidance for vessel extraction; and (3) a multi-scale attention module is proposed to bring global information into the upgraded model and assist vessel localization. In parallel, an integrated framework is conceived to mitigate the low performance of any single model in cross-domain vessel segmentation, boosting the domain adaptability of both the baseline and the upgraded models.
RESULTS Extensive experiments showed that the upgrades further improved vessel segmentation performance significantly. Compared with the listed methods, both the baseline and the upgraded models achieved competitive results on three public retinal image datasets (CHASE_DB, DRIVE, and STARE). In a practical application on private CSC datasets, the integrated scheme partially enhanced the domain adaptability of the two proposed models.
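The core few-shot segmentation mechanism referred to above, a class prototype extracted from annotated support images and matched against query features, is commonly implemented as masked average pooling plus cosine similarity. The sketch below shows that single-scale version only; the multi-scale extraction, feature fusion, and attention upgrades from the paper are not reproduced, and all shapes are toy assumptions:

```python
import numpy as np

def class_prototype(feat, mask):
    """Masked average pooling: average (C, H, W) support features over the
    vessel pixels of a binary (H, W) mask, yielding a (C,) class prototype."""
    weights = mask / max(mask.sum(), 1)
    return (feat * weights).sum(axis=(1, 2))

def segment_query(query_feat, proto, threshold=0.0):
    """Label each query pixel by cosine similarity to the class prototype."""
    qn = query_feat / (np.linalg.norm(query_feat, axis=0, keepdims=True) + 1e-8)
    pn = proto / (np.linalg.norm(proto) + 1e-8)
    sim = np.tensordot(pn, qn, axes=([0], [0]))   # (H, W) cosine-similarity map
    return (sim > threshold).astype(np.uint8), sim

support = np.random.randn(8, 16, 16)              # toy support feature map
mask = (np.random.rand(16, 16) > 0.5).astype(float)  # toy vessel annotation
proto = class_prototype(support, mask)
pred, sim = segment_query(np.random.randn(8, 16, 16), proto)
```

Because the prototype is a single vector per class, a handful of annotated support images is enough to define it, which is what makes the few-shot formulation attractive when dense labels are scarce.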
Affiliation(s)
- Jianguo Xu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianxin Shen
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Zhipeng Yan
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
|
97
|
Fundus Retinal Vessels Image Segmentation Method Based on Improved U-Net. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.03.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
98
|
Untangling Computer-Aided Diagnostic System for Screening Diabetic Retinopathy Based on Deep Learning Techniques. SENSORS 2022; 22:s22051803. [PMID: 35270949 PMCID: PMC8914671 DOI: 10.3390/s22051803] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 02/16/2022] [Accepted: 02/17/2022] [Indexed: 01/27/2023]
Abstract
Diabetic Retinopathy (DR) is a predominant cause of visual impairment and loss. Approximately 285 million people worldwide are affected by diabetes, and one-third of these patients show symptoms of DR. It particularly affects patients who have had diabetes for 20 years or more, but its impact can be reduced by early detection and proper treatment. Manual diagnosis of DR is a time-consuming and expensive task that requires trained ophthalmologists to observe and evaluate digital fundus images of the retina. This study systematically finds and analyzes high-quality research on the diagnosis of DR using deep learning approaches. It covers DR grading and staging protocols and presents a DR taxonomy. Furthermore, it identifies, compares, and investigates deep learning-based algorithms, techniques, and methods for classifying DR stages. The publicly available datasets used for deep learning are also analyzed, providing a descriptive and empirical basis for real-time DR applications. Our in-depth study shows an increasing inclination towards deep learning approaches in recent years: Convolutional Neural Networks (CNNs) were used in 35% of the studies, Ensemble CNNs (ECNNs) in 26%, and Deep Neural Networks (DNNs) in 13%, making them the most used algorithms for DR classification. Deep learning algorithms for DR diagnostics thus hold research potential for solutions based on early detection and prevention.
|
99
|
Wei J, Zhu G, Fan Z, Liu J, Rong Y, Mo J, Li W, Chen X. Genetic U-Net: Automatically Designed Deep Networks for Retinal Vessel Segmentation Using a Genetic Algorithm. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:292-307. [PMID: 34506278 DOI: 10.1109/tmi.2021.3111679] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Recently, many methods based on hand-designed convolutional neural networks (CNNs) have achieved promising results in automatic retinal vessel segmentation. However, these CNNs remain limited in capturing retinal vessels in complex fundus images, and to improve segmentation performance they tend to acquire many parameters, which can lead to overfitting and high computational complexity. Moreover, the manual design of competitive CNNs is time-consuming and requires extensive empirical knowledge. Herein, a novel automated design method, called Genetic U-Net, is proposed to generate a U-shaped CNN that achieves better retinal vessel segmentation with fewer architecture-based parameters, thereby addressing these issues. First, we devised a condensed but flexible search space based on a U-shaped encoder-decoder. Then, we used an improved genetic algorithm to identify better-performing architectures in the search space and investigated the possibility of finding a superior network architecture with fewer parameters. The experimental results show that the architecture obtained with the proposed method offered superior performance using less than 1% of the original U-Net's parameters, and significantly fewer parameters than other state-of-the-art models. Furthermore, in-depth investigation of the experimental results identified several effective operations and network patterns for generating superior retinal vessel segmentations. The code for this work is available at https://github.com/96jhwei/Genetic-U-Net.
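The search loop behind such architecture evolution can be sketched minimally. The bit-string genome, truncation selection, one-point crossover, and bit-count fitness below are toy assumptions: in Genetic U-Net the genome decodes to a U-shaped architecture and fitness comes from training and validating that network on fundus data:

```python
import random

def evolve(fitness, genome_len=16, pop_size=20, generations=30, seed=0):
    """Minimal elitist genetic algorithm over bit-string genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]            # truncation selection: keep top half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, genome_len)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)          # single-bit point mutation
            child[i] ^= 1
            children.append(child)
        pop = elite + children                     # elites survive unchanged
    return max(pop, key=fitness)

# Toy fitness: count of set bits. A real run would decode each genome into
# a network, train it, and score validation performance instead.
best = evolve(fitness=sum)
```

Elitism makes the best score monotonically non-decreasing across generations, which matters when each fitness evaluation is as expensive as training a segmentation network.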
|
100
|
Li X, Bala R, Monga V. Robust Deep 3D Blood Vessel Segmentation Using Structural Priors. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:1271-1284. [PMID: 34990361 DOI: 10.1109/tip.2021.3139241] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep learning has enabled significant improvements in the accuracy of 3D blood vessel segmentation. Open challenges remain in scenarios where labeled 3D segmentation maps for training are severely limited, as is often the case in practice, and in ensuring robustness to noise. Inspired by the observation that 3D vessel structures project onto 2D image slices with informative and unique edge profiles, we propose a novel deep 3D vessel segmentation network guided by edge profiles. Our network architecture comprises a shared encoder and two decoders that learn segmentation maps and edge profiles jointly. 3D context is mined in both the segmentation and edge prediction branches by employing bidirectional convolutional long-short term memory (BCLSTM) modules. 3D features from the two branches are concatenated to facilitate learning of the segmentation map. As a key contribution, we introduce new regularization terms that: a) capture the local homogeneity of 3D blood vessel volumes in the presence of biomarkers; and b) ensure performance robustness to domain-specific noise by suppressing false positive responses. Experiments on benchmark datasets with ground truth labels reveal that the proposed approach outperforms state-of-the-art techniques on standard measures such as DICE overlap and mean Intersection-over-Union. The performance gains of our method are even more pronounced when training is limited. Furthermore, the computational cost of our network inference is among the lowest compared with state-of-the-art.
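The local-homogeneity regularization mentioned in this abstract can be illustrated with a simple smoothness penalty on the predicted 3D probability volume. The squared-neighbor-difference form below is an illustrative assumption, not the paper's exact regularizer, which also conditions on biomarkers and suppresses domain-specific false positives:

```python
import numpy as np

def homogeneity_penalty(pred):
    """Penalize squared differences between neighboring voxels along each
    axis of a (D, H, W) probability volume, encouraging locally smooth
    vessel predictions. A perfectly uniform volume scores exactly 0."""
    dz = np.diff(pred, axis=0) ** 2
    dy = np.diff(pred, axis=1) ** 2
    dx = np.diff(pred, axis=2) ** 2
    return dz.mean() + dy.mean() + dx.mean()

smooth = np.full((8, 8, 8), 0.5)       # uniform toy prediction
noisy = np.random.rand(8, 8, 8)        # speckled toy prediction
```

In training, a term like this would be added to the segmentation loss with a small weight so that it discourages isolated noisy responses without erasing genuine thin-vessel structure.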
|