1
Fhima J, Van Eijgen J, Billen Moulin-Romsée MI, Brackenier H, Kulenovic H, Debeuf V, Vangilbergen M, Freiman M, Stalmans I, Behar JA. LUNet: deep learning for the segmentation of arterioles and venules in high resolution fundus images. Physiol Meas 2024; 45:055002. [PMID: 38599224 DOI: 10.1088/1361-6579/ad3d28] [Received: 01/18/2024] [Accepted: 04/10/2024] [Indexed: 04/12/2024]
Abstract
Objective. This study aims to automate the segmentation of retinal arterioles and venules (A/V) from digital fundus images (DFI), as changes in the spatial distribution of retinal microvasculature are indicative of cardiovascular diseases, positioning the eyes as windows to cardiovascular health. Approach. We utilized active learning to create a new DFI dataset with 240 crowd-sourced manual A/V segmentations performed by 15 medical students and reviewed by an ophthalmologist. We then developed LUNet, a novel deep learning architecture optimized for high-resolution A/V segmentation. The LUNet model features a double dilated convolutional block to widen the receptive field and reduce parameter count, alongside a high-resolution tail to refine segmentation details. A custom loss function was designed to prioritize the continuity of blood vessel segmentation. Main Results. LUNet significantly outperformed three benchmark A/V segmentation algorithms both on a local test set and on four external test sets that simulated variations in ethnicity, comorbidities and annotators. Significance. The release of the new datasets and the LUNet model (www.aimlab-technion.com/lirot-ai) provides a valuable resource for the advancement of retinal microvasculature analysis. The improvements in A/V segmentation accuracy highlight LUNet's potential as a robust tool for diagnosing and understanding cardiovascular diseases through retinal imaging.
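The abstract credits a double dilated convolutional block with widening the receptive field while reducing parameter count. LUNet's actual block design is specified in the paper; as a generic sketch of the underlying arithmetic only (the layer tuples below are illustrative, not LUNet's configuration), the standard receptive-field recurrence shows that dilation enlarges the field at no extra weight cost:

```python
def receptive_field(layers):
    """Receptive-field size after a stack of conv layers.

    Each layer is (kernel_size, stride, dilation); the standard
    recurrence is rf += (k - 1) * dilation * jump, then jump *= stride.
    """
    rf, jump = 1, 1
    for k, stride, dilation in layers:
        rf += (k - 1) * dilation * jump
        jump *= stride
    return rf

# Two plain 3x3 convolutions vs. a pair whose second conv uses dilation 3.
# Both stacks hold exactly the same number of weights (two 3x3 kernels).
plain = receptive_field([(3, 1, 1), (3, 1, 1)])    # 5
dilated = receptive_field([(3, 1, 1), (3, 1, 3)])  # 9
print(plain, dilated)
```

The dilated pair sees almost twice the context of the plain pair for the same parameter budget, which is the trade-off the abstract describes.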
Affiliation(s)
- Jonathan Fhima
  - Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
  - Department of Applied Mathematics, Technion-IIT, Haifa, Israel
- Jan Van Eijgen
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
  - Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Marie-Isaline Billen Moulin-Romsée
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
  - Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Heloïse Brackenier
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Hana Kulenovic
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Valérie Debeuf
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Marie Vangilbergen
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Moti Freiman
  - Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Ingeborg Stalmans
  - Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Leuven, Belgium
  - Department of Ophthalmology, University Hospitals UZ Leuven, Leuven, Belgium
- Joachim A Behar
  - Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
2
Yap MH, Cassidy B, Byra M, Liao TY, Yi H, Galdran A, Chen YH, Brüngel R, Koitka S, Friedrich CM, Lo YW, Yang CH, Li K, Lao Q, Ballester MAG, Carneiro G, Ju YJ, Huang JD, Pappachan JM, Reeves ND, Chandrabalan V, Dancey D, Kendrick C. Diabetic foot ulcers segmentation challenge report: Benchmark and analysis. Med Image Anal 2024; 94:103153. [PMID: 38569380 DOI: 10.1016/j.media.2024.103153] [Received: 06/27/2023] [Revised: 01/30/2024] [Accepted: 03/20/2024] [Indexed: 04/05/2024]
Abstract
Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists to quantitatively measure the size of wound regions and thereby assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineations, which are time-consuming and laborious to produce. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge was held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, and sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding ground truth segmentation masks. Of the 72 approved requests from 47 countries, 26 teams used this data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, whose ground truth segmentation masks were kept private. Predictions from participating teams were scored and ranked by the average Dice similarity coefficient between the ground truth and predicted masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. The challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
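Ranking by average Dice similarity coefficient is the standard protocol for segmentation challenges; a minimal sketch of the metric on toy binary masks (illustrative data, not challenge images):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flattened binary masks:
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty: perfect match

pred = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

A challenge score is then simply this value averaged over all test images; the convention for two empty masks varies between evaluations and is an assumption here.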
Affiliation(s)
- Moi Hoon Yap
  - Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
  - Lancashire Teaching Hospitals NHS Trust, Preston, PR2 9HT, United Kingdom
- Bill Cassidy
  - Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
- Michal Byra
  - Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
  - RIKEN Center for Brain Science, Wako, Japan
- Ting-Yu Liao
  - Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Huahui Yi
  - West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Adrian Galdran
  - BCN Medtech, Universitat Pompeu Fabra, Barcelona, Spain
  - AIML, University of Adelaide, Australia
- Yung-Han Chen
  - Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Raphael Brüngel
  - Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany
  - Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Zweigertstr. 37, 45130 Essen, Germany
  - Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstr. 2, 45131 Essen, Germany
- Sven Koitka
  - Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstr. 2, 45131 Essen, Germany
  - Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr. 55, 45147 Essen, Germany
- Christoph M Friedrich
  - Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany
  - Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Zweigertstr. 37, 45130 Essen, Germany
- Yu-Wen Lo
  - Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Ching-Hui Yang
  - Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Kang Li
  - West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
  - Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Qicheng Lao
  - School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
  - Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Yi-Jen Ju
  - Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Juinn-Dar Huang
  - Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Joseph M Pappachan
  - Lancashire Teaching Hospitals NHS Trust, Preston, PR2 9HT, United Kingdom
  - Department of Life Sciences, Manchester Metropolitan University, Manchester, M1 5GD, United Kingdom
- Neil D Reeves
  - Department of Life Sciences, Manchester Metropolitan University, Manchester, M1 5GD, United Kingdom
- Darren Dancey
  - Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
- Connah Kendrick
  - Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
3
Zhou Y, Xu M, Hu Y, Blumberg SB, Zhao A, Wagner SK, Keane PA, Alexander DC. CF-Loss: Clinically-relevant feature optimised loss function for retinal multi-class vessel segmentation and vascular feature measurement. Med Image Anal 2024; 93:103098. [PMID: 38320370 DOI: 10.1016/j.media.2024.103098] [Received: 08/30/2022] [Revised: 05/22/2023] [Accepted: 01/30/2024] [Indexed: 02/08/2024]
Abstract
Characterising clinically-relevant vascular features, such as vessel density and fractal dimension, can benefit biomarker discovery and disease diagnosis for both ophthalmic and systemic diseases. In this work, we explicitly encode vascular features into an end-to-end loss function for multi-class vessel segmentation, categorising pixels into artery, vein, uncertain pixels, and background. This clinically-relevant feature optimised loss function (CF-Loss) regulates networks to segment accurate multi-class vessel maps that produce precise vascular features. Our experiments first verify that CF-Loss significantly improves both multi-class vessel segmentation and vascular feature estimation, with two standard segmentation networks, on three publicly available datasets. We reveal that pixel-based segmentation performance is not always positively correlated with accuracy of vascular features, thus highlighting the importance of optimising vascular features directly via CF-Loss. Finally, we show that improved vascular features from CF-Loss, as biomarkers, can yield quantitative improvements in the prediction of ischaemic stroke, a real-world clinical downstream task. The code is available at https://github.com/rmaphoh/feature-loss.
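CF-Loss differentiates through such features inside the training objective (the code at the linked repository is authoritative); purely as an illustration of the two named features, vessel density and a box-counting fractal-dimension estimate can be computed from a binary vessel mask. This is a simplified sketch: it assumes a square mask whose side is divisible by each box size, and real pipelines use many more box sizes and a proper regression:

```python
import math

def vessel_density(mask):
    """Fraction of vessel pixels in a binary mask (list of rows)."""
    flat = [p for row in mask for p in row]
    return sum(flat) / len(flat)

def box_count_dimension(mask, sizes=(1, 2, 4)):
    """Crude box-counting fractal dimension: least-squares slope of
    log(occupied box count) against log(1 / box size)."""
    n = len(mask)
    xs, ys = [], []
    for s in sizes:
        count = sum(
            any(mask[i + di][j + dj] for di in range(s) for dj in range(s))
            for i in range(0, n, s) for j in range(0, n, s)
        )
        xs.append(math.log(1 / s))
        ys.append(math.log(count))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A completely filled 4x4 square is a 2-D object, so its estimated
# dimension comes out at 2 and its density at 1.
full = [[1] * 4 for _ in range(4)]
print(vessel_density(full), box_count_dimension(full))
```

Optimising such features directly, rather than only pixel labels, is the paper's point: two masks with equal pixel accuracy can yield different density and dimension values.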
Affiliation(s)
- Yukun Zhou
  - Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
  - NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK
  - Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- MouCheng Xu
  - Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
  - Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Yipeng Hu
  - Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
  - Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, UK
- Stefano B Blumberg
  - Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
  - Department of Computer Science, University College London, London WC1E 6BT, UK
- An Zhao
  - Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
  - Department of Computer Science, University College London, London WC1E 6BT, UK
- Siegfried K Wagner
  - NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK
  - Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Pearse A Keane
  - NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London EC1V 9EL, UK
  - Institute of Ophthalmology, University College London, London EC1V 9EL, UK
- Daniel C Alexander
  - Centre for Medical Image Computing, University College London, London WC1V 6LJ, UK
  - Department of Computer Science, University College London, London WC1E 6BT, UK
4
Yap BP, Ng BK. Coarse-to-fine visual representation learning for medical images via class activation maps. Comput Biol Med 2024; 171:108203. [PMID: 38430741 DOI: 10.1016/j.compbiomed.2024.108203] [Received: 07/27/2023] [Revised: 01/29/2024] [Accepted: 02/19/2024] [Indexed: 03/05/2024]
Abstract
The value of coarsely labeled datasets in learning transferable representations for medical images is investigated in this work. Compared to fine labels, which require meticulous effort to annotate, coarse labels can be acquired at a significantly lower cost and can provide useful training signals for data-hungry deep neural networks. We consider coarse labels in the form of binary labels differentiating a normal (healthy) image from an abnormal (diseased) image and propose CAMContrast, a two-stage representation learning framework for medical images. Using class activation maps, CAMContrast makes use of the binary labels to generate heatmaps as positive views for contrastive representation learning. Specifically, the learning objective is optimized to maximize the agreement within fixed crops of image-heatmap pairs to learn fine-grained representations that are generalizable to different downstream tasks. We empirically validate the transfer learning performance of CAMContrast on several public datasets, covering classification and segmentation tasks on fundus photographs and chest X-ray images. The experimental results show that our method outperforms other self-supervised and supervised pretraining methods in terms of data efficiency and downstream performance.
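CAMContrast's heatmap generation builds on class activation maps; a minimal sketch of the underlying CAM computation (a channel-weighted sum of feature maps, following the original CAM formulation) on toy features and hypothetical classifier weights:

```python
def class_activation_map(features, weights):
    """Class activation map as a weighted sum over channel feature maps.

    features: nested lists shaped [channels][h][w];
    weights: one classifier weight per channel for the class of interest.
    """
    h, w = len(features[0]), len(features[0][0])
    return [[sum(wc * fmap[i][j] for wc, fmap in zip(weights, features))
             for j in range(w)] for i in range(h)]

# Two 2x2 feature maps; the second channel responds where the
# hypothetical "abnormal" evidence sits (bottom-right pixel).
feats = [[[1, 0], [0, 0]],
         [[0, 0], [0, 2]]]
cam = class_activation_map(feats, weights=[0.5, 1.0])
print(cam)  # [[0.5, 0.0], [0.0, 2.0]]
```

High-valued CAM regions then serve as the "positive view" crops described in the abstract; how CAMContrast crops and normalises them is detailed in the paper.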
Affiliation(s)
- Boon Peng Yap
  - School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore
  - Centre for OptoElectronics and Biophotonics, Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore
- Beng Koon Ng
  - School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore
  - Centre for OptoElectronics and Biophotonics, Nanyang Technological University, 50 Nanyang Ave, 639798, Singapore
5
Alsayat A, Elmezain M, Alanazi S, Alruily M, Mostafa AM, Said W. Multi-Layer Preprocessing and U-Net with Residual Attention Block for Retinal Blood Vessel Segmentation. Diagnostics (Basel) 2023; 13:3364. [PMID: 37958260 PMCID: PMC10648654 DOI: 10.3390/diagnostics13213364] [Received: 09/15/2023] [Revised: 10/21/2023] [Accepted: 10/30/2023] [Indexed: 11/15/2023]
Abstract
Retinal blood vessel segmentation is a valuable tool for clinicians to diagnose conditions such as atherosclerosis, glaucoma, and age-related macular degeneration. This paper presents a new framework for segmenting blood vessels in retinal images. The framework has two stages: a multi-layer preprocessing stage and a subsequent segmentation stage employing a U-Net with a multi-residual attention block. The multi-layer preprocessing stage has three steps. The first step is noise reduction, employing a U-shaped convolutional neural network with matrix factorization (CNN with MF) and a detailed U-shaped U-Net (D_U-Net) to minimize image noise, culminating in the selection of the most suitable image based on PSNR and SSIM values. The second step is dynamic data imputation, utilizing multiple models to fill in missing data. The third step is data augmentation, using a latent diffusion model (LDM) to expand the training dataset. In the second, segmentation stage, U-Nets with a multi-residual attention block segment the preprocessed, denoised retinal images. The experiments show that the framework is effective at segmenting retinal blood vessels: it achieved a Dice score of 95.32, accuracy of 93.56, precision of 95.68, and recall of 95.45. It also removed noise efficiently with the CNN with MF and D_U-Net, as measured by PSNR and SSIM values at noise levels of 0.1, 0.25, 0.5, and 0.75. The LDM achieved an inception score of 13.6 and an FID of 46.2 in the augmentation step.
Affiliation(s)
- Ahmed Alsayat
  - Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Mahmoud Elmezain
  - Computer Science Division, Faculty of Science, Tanta University, Tanta 31527, Egypt
  - Computer Science Department, College of Computer Science and Engineering, Taibah University, Yanbu 966144, Saudi Arabia
- Saad Alanazi
  - Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Meshrif Alruily
  - Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Ayman Mohamed Mostafa
  - Information Systems Department, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Wael Said
  - Computer Science Department, Faculty of Computers and Informatics, Zagazig University, Zagazig 44511, Egypt
  - Computer Science Department, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
6
Ye Z, Liu Y, Jing T, He Z, Zhou L. A High-Resolution Network with Strip Attention for Retinal Vessel Segmentation. Sensors (Basel) 2023; 23:8899. [PMID: 37960597 PMCID: PMC10650600 DOI: 10.3390/s23218899] [Received: 09/26/2023] [Revised: 10/27/2023] [Accepted: 10/30/2023] [Indexed: 11/15/2023]
Abstract
Accurate segmentation of retinal vessels is an essential prerequisite for the subsequent analysis of fundus images. Recently, a number of methods based on deep learning have been proposed and have demonstrated promising segmentation performance, especially U-Net and its variants. However, tiny vessels and low-contrast vessels are hard to detect due to the loss of spatial details caused by consecutive down-sampling operations and the inadequate fusion of multi-level features caused by vanilla skip connections. To address these issues and enhance the segmentation precision of retinal vessels, we propose a novel high-resolution network with strip attention. Instead of the U-Net-shaped architecture, the proposed network follows an HRNet-shaped architecture as the basic network, learning high-resolution representations throughout the training process. In addition, a strip attention module including a horizontal attention mechanism and a vertical attention mechanism is designed to obtain long-range dependencies in the horizontal and vertical directions by calculating the similarity between each pixel and all pixels in the same row and the same column, respectively. For effective multi-layer feature fusion, we incorporate the strip attention module into the basic network to dynamically guide adjacent hierarchical features. Experimental results on the DRIVE and STARE datasets show that the proposed method can extract more tiny vessels and low-contrast vessels compared with existing mainstream methods, achieving accuracies of 96.16% and 97.08% and sensitivities of 82.68% and 89.36%, respectively. The proposed method has the potential to aid in the analysis of fundus images.
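The strip attention module is only described at a high level in the abstract; as a sketch of the horizontal half of the idea, each position can attend over its own row with softmax-normalised similarities. Scalar features are used here for brevity, whereas the paper's module operates on channel vectors and adds a symmetric column-wise pass:

```python
import math

def row_attention(feature_row):
    """Horizontal strip attention for one row of scalar features: each
    position's output is a softmax-weighted average of the whole row,
    weighted by its similarity (here, a plain product) to every position."""
    out = []
    for q in feature_row:
        scores = [q * k for k in feature_row]
        m = max(scores)                          # subtract max for stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        out.append(sum(e / z * v for e, v in zip(exps, feature_row)))
    return out

att = row_attention([1.0, 0.0, 3.0])
print(att)
```

A zero-valued query scores every key equally, so its output is simply the row mean; strongly responding positions pull context from similar positions anywhere in the row, which is the long-range dependency the module is designed to capture.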
Affiliation(s)
- Zhipin Ye
  - Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Yingqian Liu
  - Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Teng Jing
  - Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
- Zhaoming He
  - Department of Mechanical Engineering, Texas Tech University, Lubbock, TX 79411, USA
- Ling Zhou
  - Research Center of Fluid Machinery Engineering & Technology, Jiangsu University, Zhenjiang 212013, China
7
Khosravi P, Huck NA, Shahraki K, Hunter SC, Danza CN, Kim SY, Forbes BJ, Dai S, Levin AV, Binenbaum G, Chang PD, Suh DW. Deep Learning Approach for Differentiating Etiologies of Pediatric Retinal Hemorrhages: A Multicenter Study. Int J Mol Sci 2023; 24:15105. [PMID: 37894785 PMCID: PMC10606803 DOI: 10.3390/ijms242015105] [Received: 08/25/2023] [Revised: 09/29/2023] [Accepted: 10/10/2023] [Indexed: 10/29/2023]
Abstract
Retinal hemorrhages in pediatric patients can be a diagnostic challenge for ophthalmologists. These hemorrhages can occur due to various underlying etiologies, including abusive head trauma, accidental trauma, and medical conditions. Accurate identification of the etiology is crucial for appropriate management and legal considerations. In recent years, deep learning techniques have shown promise in assisting healthcare professionals in making more accurate and timely diagnoses of a variety of disorders. We explore the potential of deep learning approaches for differentiating etiologies of pediatric retinal hemorrhages. Our study, which spanned multiple centers, analyzed 898 images, resulting in a final dataset of 597 retinal hemorrhage fundus photos categorized into medical (49.9%) and trauma (50.1%) etiologies. Deep learning models, specifically those based on ResNet and transformer architectures, were applied; FastViT-SA12, a hybrid transformer model, achieved the highest accuracy (90.55%) and an area under the receiver operating characteristic curve (AUC) of 90.55%, while ResNet18 secured the highest sensitivity value (96.77%) on an independent test dataset. The study highlighted areas for optimization in artificial intelligence (AI) models specifically for pediatric retinal hemorrhages. While AI proves valuable in diagnosing these hemorrhages, the expertise of medical professionals remains irreplaceable. Collaborative efforts between AI specialists and pediatric ophthalmologists are crucial to fully harness AI's potential in diagnosing etiologies of pediatric retinal hemorrhages.
Affiliation(s)
- Pooya Khosravi
  - Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
  - Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
  - Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA
- Nolan A. Huck
  - Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
  - Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Kourosh Shahraki
  - Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
  - Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- Stephen C. Hunter
  - School of Medicine, University of California, 900 University Ave, Riverside, CA 92521, USA
- Clifford Neil Danza
  - Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
  - Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
- So Young Kim
  - Department of Ophthalmology, College of Medicine, Soonchunhyang University, Cheonan 31151, Chungcheongnam-do, Republic of Korea
- Brian J. Forbes
  - Division of Ophthalmology, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Shuan Dai
  - Department of Ophthalmology, Queensland Children’s Hospital, South Brisbane, QLD 4101, Australia
- Alex V. Levin
  - Department of Ophthalmology, Flaum Eye Institute, Golisano Children’s Hospital, Rochester, NY 14642, USA
- Gil Binenbaum
  - Division of Ophthalmology, Children’s Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Peter D. Chang
  - Donald Bren School of Information and Computer Sciences, University of California, Irvine, CA 92697, USA
  - Department of Radiological Sciences, School of Medicine, University of California, Irvine, CA 92697, USA
- Donny W. Suh
  - Department of Ophthalmology, School of Medicine, University of California, Irvine, CA 92697, USA
  - Gavin Herbert Eye Institute, University of California, Irvine, CA 92697, USA
8
Khan TM, Naqvi SS, Robles-Kelly A, Razzak I. Retinal vessel segmentation via a Multi-resolution Contextual Network and adversarial learning. Neural Netw 2023; 165:310-320. [PMID: 37327578 DOI: 10.1016/j.neunet.2023.05.029] [Received: 02/27/2022] [Revised: 04/24/2023] [Accepted: 05/17/2023] [Indexed: 06/18/2023]
Abstract
Timely and affordable computer-aided diagnosis of retinal diseases is pivotal in precluding blindness. Accurate retinal vessel segmentation plays an important role in assessing disease progression and diagnosing such vision-threatening diseases. To this end, we propose a Multi-resolution Contextual Network (MRC-Net) that addresses these issues by extracting multi-scale features to learn contextual dependencies between semantically different features and using bi-directional recurrent learning to model former-latter and latter-former dependencies. Another key idea is training in adversarial settings to improve foreground segmentation through optimization of region-based scores. This novel strategy boosts the performance of the segmentation network in terms of the Dice score (and correspondingly the Jaccard index) while keeping the number of trainable parameters comparatively low. We have evaluated our method on three benchmark datasets, including DRIVE, STARE, and CHASE, demonstrating its superior performance compared with competitive approaches in the literature.
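The parenthetical link between Dice and Jaccard is exact for any pair of masks: with intersection I and combined mask size S, Dice D = 2I/S and Jaccard J = I/(S - I), which gives J = D/(2 - D). So improving one region-based score necessarily improves the other. A small sketch of the conversion in both directions:

```python
def jaccard_from_dice(d):
    """Jaccard index implied by a Dice score on the same masks:
    J = D / (2 - D)."""
    return d / (2.0 - d)

def dice_from_jaccard(j):
    """Inverse relation: D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)

d = 0.8
j = jaccard_from_dice(d)
print(j, dice_from_jaccard(j))  # round-trips back to 0.8
```

Because the mapping is strictly increasing on [0, 1], ranking models by Dice and ranking them by Jaccard give the same order.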
Affiliation(s)
- Tariq M Khan
  - School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Syed S Naqvi
  - Department of Electrical and Computer Engineering, COMSATS University Islamabad, Pakistan
- Antonio Robles-Kelly
  - School of Information Technology, Faculty of Science Engineering & Built Environment, Deakin University, Locked Bag 20000, Geelong, Australia
  - Defence Science and Technology Group, 5111, Edinburgh, SA, Australia
- Imran Razzak
  - School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
9
Saha D, Hadule S, Giri L. A deep learning approach for automation in neurite tracing and cell size estimation from differential contrast images under healthy and hypoxic condition. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083674 DOI: 10.1109/embc40787.2023.10340948] [Indexed: 12/18/2023]
Abstract
Chronic hypoxia is known to be a major cause of neurite length retraction followed by degeneration. Specifically, laser scanning confocal microscopy (LSCM)-based contrast imaging is used for monitoring neuronal morphology under hypoxic conditions. Although imaging of neurons using LSCM via differential contrast imaging (DIC) is a powerful tool to identify neuronal states under degenerative conditions, fully automated quantification of neurite length and cell shape remains challenging. In this context, we propose an integrated framework that combines panorama imaging of neuronal morphology using LSCM with a deep learning model that allows automated tracing of neurites and cell shape. First, we establish an in vitro hypoxic model using cobalt chloride treatment of N2A cells and perform large-scale imaging using DIC optics. Next, we test the performance of U-Net, U-Net++ and FCN architectures on DIC images, where U-Net and U-Net++ demonstrate robustness and accuracy in tracing neurite length and segmenting cell shape. The results show that U-Net++ is able to depict the difference in cell size and neurite length between control and chronic hypoxic conditions. The proposed method was also validated and compared with other CNN models, including FCN and U-Net. Moreover, the analysis indicates a significant alteration of cell shape and neurite length under hypoxic conditions via deep-learning-based automated cell segmentation. Clinical Relevance: The proposed framework assumes importance where quantification of neurite length and cell shape from a large dataset remains challenging due to time-consuming manual segmentation by experts. In particular, a framework based on labeling a small dataset (15-20 images) can be used to identify the neuronal state under neurodegeneration and for image-based assessment of neuroprotective drugs.
10
An evolutionary U-shaped network for Retinal Vessel Segmentation using Binary Teaching–Learning-Based Optimization. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104669] [Indexed: 02/11/2023]
11
Islam MT, Khan HA, Naveed K, Nauman A, Gulfam SM, Kim SW. LUVS-Net: A Lightweight U-Net Vessel Segmentor for Retinal Vasculature Detection in Fundus Images. Electronics 2023; 12:1786. [DOI: 10.3390/electronics12081786] [Indexed: 09/01/2023]
Abstract
This paper presents LUVS-Net, a lightweight convolutional network for retinal vessel segmentation in fundus images that is designed for resource-constrained devices, which are typically unable to meet the computational requirements of large neural networks. The computational challenges arise from low-quality retinal images, wide variance in image acquisition conditions and disparities in intensity. Consequently, existing segmentation methods require a multitude of trainable parameters, resulting in computational complexity. The proposed Lightweight U-Net for Vessel Segmentation Network (LUVS-Net) achieves high segmentation performance with only a few trainable parameters. This network uses an encoder–decoder framework in which edge data are transposed from the first layers of the encoder to the last layer of the decoder, massively improving the convergence latency. Additionally, LUVS-Net's design allows for a dual-stream information flow both inside and outside of the encoder–decoder pair. The network width is enhanced using group convolutions, which allow the network to learn a larger number of low- and intermediate-level features. Spatial information loss is minimized using skip connections, and class imbalances are mitigated using dice loss for pixel-wise classification. The performance of the proposed network is evaluated on the publicly available retinal blood vessel datasets DRIVE, CHASE_DB1 and STARE. LUVS-Net proves to be quite competitive, outperforming alternative segmentation methods and achieving accuracy comparable to state-of-the-art methods with two to three orders of magnitude fewer trainable parameters.
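The parameter savings from group convolutions follow directly from the weight-count formula: splitting both channel dimensions across g groups divides the weight count by roughly g. A sketch of the arithmetic (the channel counts below are illustrative, not LUVS-Net's actual configuration):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution (bias ignored). A grouped
    convolution runs `groups` independent convolutions on channel slices,
    so each group connects c_in/groups inputs to c_out/groups outputs."""
    assert c_in % groups == 0 and c_out % groups == 0
    return groups * (c_in // groups) * (c_out // groups) * k * k

dense = conv_params(64, 64, 3)              # 36864
grouped = conv_params(64, 64, 3, groups=8)  # 4608
print(dense, grouped, dense // grouped)     # 36864 4608 8
```

This is how a network can grow wider (more channels, hence more low- and intermediate-level features) without a proportional growth in trainable parameters.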
Affiliation(s)
- Muhammad Talha Islam
- Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Haroon Ahmed Khan
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Khuram Naveed
- Department of Electrical and Computer Engineering, COMSATS University Islamabad (CUI), Islamabad 45550, Pakistan
- Department of Electrical and Computer Engineering, Aarhus University, 8000 Aarhus, Denmark
- Ali Nauman
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
- Sardar Muhammad Gulfam
- Department of Electrical and Computer Engineering, Abbottabad Campus, COMSATS University Islamabad (CUI), Abbottabad 22060, Pakistan
- Sung Won Kim
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
12
Morano J, Hervella ÁS, Rouco J, Novo J, Fernández-Vigo JI, Ortega M. Weakly-supervised detection of AMD-related lesions in color fundus images using explainable deep learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107296. [PMID: 36481530 DOI: 10.1016/j.cmpb.2022.107296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 11/16/2022] [Accepted: 11/29/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVES Age-related macular degeneration (AMD) is a degenerative disorder affecting the macula, a key area of the retina for visual acuity. AMD is now the most frequent cause of blindness in developed countries. Although some promising treatments have been proposed that effectively slow its progression, their effectiveness diminishes significantly in the advanced stages. This emphasizes the importance of large-scale screening programs for early detection. Nevertheless, implementing such programs for a disease like AMD is usually unfeasible, since the population at risk is large and the diagnosis is challenging. To characterize the disease, clinicians have to identify and localize certain retinal lesions. All this motivates the development of automatic diagnostic methods. Several works have achieved highly positive results for AMD detection using convolutional neural networks (CNNs). However, none of them incorporates explainability mechanisms linking the diagnosis to its related lesions to help clinicians better understand the decisions of the models. This is especially relevant, since the absence of such mechanisms limits the application of automatic methods in clinical practice. In that regard, we propose an explainable deep learning approach for the diagnosis of AMD via the joint identification of its associated retinal lesions. METHODS In our proposal, a CNN with a custom architectural setting is trained end-to-end for the joint identification of AMD and its associated retinal lesions. With the proposed setting, lesion identification is derived directly from independent lesion activation maps; the diagnosis is then obtained from the identified lesions. Training is performed end-to-end using image-level labels, so lesion-specific activation maps are learned in a weakly-supervised manner. The provided lesion information is of high clinical interest, as it allows clinicians to assess the developmental stage of the disease. Additionally, the proposed approach makes it possible to explain the diagnosis obtained by the models directly from the identified lesions and their corresponding activation maps. The training data required by the approach can be obtained without much extra work on the part of clinicians, since the lesion information is habitually present in medical records. This is an important advantage over other methods, including fully-supervised lesion segmentation methods, which require pixel-level labels whose acquisition is arduous. RESULTS Experiments conducted on 4 different datasets demonstrate that the proposed approach identifies AMD and its associated lesions with satisfactory performance. Moreover, the evaluation of the lesion activation maps shows that models trained with the proposed approach identify the pathological areas within the image and, in most cases, correctly determine which lesion they correspond to. CONCLUSIONS The proposed approach provides meaningful information (lesion identification and lesion activation maps) that conveniently explains and complements the diagnosis, and is of particular interest to clinicians in the diagnostic process. Moreover, the data needed to train the networks with the proposed approach is commonly easy to obtain, which represents an important advantage in fields with particularly scarce data, such as medical imaging.
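The weakly-supervised mechanism the abstract describes, where image-level labels supervise per-lesion activation maps, is commonly realized by collapsing each spatial map to a single lesion score via global pooling. The sketch below is a hypothetical NumPy illustration under assumed names, shapes, and a max-pooling choice; it is not the authors' architecture:

```python
import numpy as np

def lesion_scores_from_maps(activation_maps):
    """Collapse per-lesion activation maps of shape (L, H, W) to L
    image-level lesion scores via global max pooling, so the network
    can be trained from image-level labels alone (weak supervision)."""
    return activation_maps.reshape(activation_maps.shape[0], -1).max(axis=1)

def diagnosis_from_lesions(lesion_scores, threshold=0.5):
    """Derive the image-level diagnosis from the identified lesions:
    the image is flagged if any lesion score exceeds the threshold."""
    return bool((lesion_scores > threshold).any())

maps = np.zeros((3, 8, 8))   # 3 hypothetical lesion types on an 8x8 grid
maps[1, 4, 4] = 0.9          # one strong localized activation
scores = lesion_scores_from_maps(maps)
amd = diagnosis_from_lesions(scores)
```

With max pooling, a single strongly activated location suffices to produce a positive lesion score, which is what lets a purely image-level training signal still shape spatially resolved activation maps.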
Affiliation(s)
- José Morano
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
- Álvaro S Hervella
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
- José Rouco
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
- Jorge Novo
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
- José I Fernández-Vigo
- Department of Ophthalmology, Hospital Clínico San Carlos, Instituto de Investigación Sanitaria (IdISSC), Madrid, Spain; Department of Ophthalmology, Centro Internacional de Oftalmología Avanzada, Madrid, Spain.
- Marcos Ortega
- Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain.
13
Abstract
Topological and geometrical analysis of retinal blood vessels could be a cost-effective way to detect various common diseases. Automated vessel segmentation and vascular tree analysis models require strong generalization capability in clinical applications. In this work, we constructed a novel benchmark, RETA, with 81 labelled vessel masks, aiming to facilitate retinal vessel analysis. A semi-automated coarse-to-fine workflow was proposed for the vessel annotation task. During database construction, we strove to control inter-annotator and intra-annotator variability by means of multi-stage annotation and label disambiguation on self-developed dedicated software. In addition to binary vessel masks, we obtained other types of annotations, including artery/vein masks, vascular skeletons, bifurcations, trees and abnormalities. Subjective and objective quality validations of the annotated vessel masks demonstrated significantly improved quality over the existing open datasets. Our annotation software is also made publicly available for pixel-level vessel visualization. Researchers can develop vessel segmentation algorithms and evaluate segmentation performance using RETA. Moreover, it may promote the study of cross-modality tubular structure segmentation and analysis.
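Inter-annotator variability of the kind this benchmark tries to control is often quantified with chance-corrected agreement between two annotators' pixel labels. Below is a minimal NumPy sketch using Cohen's kappa; the metric choice and function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cohens_kappa(mask_a, mask_b):
    """Cohen's kappa between two binary vessel masks from different
    annotators: observed pixel agreement corrected for the agreement
    expected by chance given each annotator's vessel-pixel rate."""
    a = mask_a.ravel().astype(int)
    b = mask_b.ravel().astype(int)
    po = (a == b).mean()                        # observed agreement
    p_vessel = a.mean() * b.mean()              # chance: both say vessel
    p_bg = (1 - a.mean()) * (1 - b.mean())      # chance: both say background
    pe = p_vessel + p_bg                        # total chance agreement
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(0)
m1 = rng.random((64, 64)) < 0.1   # sparse synthetic "vessel" mask
m2 = m1.copy()
kappa_same = cohens_kappa(m1, m2)  # identical annotations give kappa = 1
```

Unlike raw pixel accuracy, kappa stays low when two annotators merely agree on the overwhelming background class, which is why chance-corrected measures are preferred for sparse structures like vessels.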