1. Liu S, Liu F, Lin Z, Yin W, Fang S, Piao Y, Liu L, Shen Y. Arteries and veins in awake mice using two-photon microscopy. J Anat 2025; 246:798-811. PMID: 39034848; PMCID: PMC11996710; DOI: 10.1111/joa.14110.
Abstract
Distinguishing arteries from veins in the cerebral cortex is critical for studying hemodynamics under pathophysiological conditions, which plays an important role in the diagnosis and treatment of various vessel-related diseases. However, the complexity of the cerebral vascular network makes it challenging to identify arteries and veins in vivo. Here, we demonstrate an artery-vein separation method that combines multiple scanning modes of two-photon microscopy with a custom-designed stereoscopic fixation device for mice. We propose a novel way of determining the line-scanning direction, which allows us to infer the direction of blood flow. Vascular branches were identified using an optimized z-stack scanning mode, and vessel types were then separated according to blood flow directions and branching patterns. Using this strategy, penetrating arterioles and penetrating venules in awake mice could be accurately identified, and the vessel type affected by a cerebral thrombus was also successfully determined without any prior empirical knowledge or algorithms. Our work presents a more accurate and efficient method for cortical artery-vein separation in awake mice, providing a useful strategy for applying two-photon microscopy to the study of cerebrovascular pathophysiology.
Affiliation(s)
- Shuangshuang Liu: Core Facilities, Zhejiang University School of Medicine, Hangzhou, China
- FangYue Liu: School of Brain Science and Brain Medicine, Zhejiang University School of Medicine, Hangzhou, China
- Zhaoxiaonan Lin: Core Facilities, Zhejiang University School of Medicine, Hangzhou, China
- Wei Yin: Core Facilities, Zhejiang University School of Medicine, Hangzhou, China
- Sanhua Fang: Core Facilities, Zhejiang University School of Medicine, Hangzhou, China
- Ying Piao: College of Chemical and Biological Engineering, Zhejiang University, Hangzhou, China
- Li Liu: Core Facilities, Zhejiang University School of Medicine, Hangzhou, China
- Yi Shen: School of Brain Science and Brain Medicine, Zhejiang University School of Medicine, Hangzhou, China; NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University, Hangzhou, China; National Health and Disease Human Brain Tissue Resource Center, Hangzhou, China
2. Chen Q, Peng J, Zhao S, Liu W. Automatic artery/vein classification methods for retinal blood vessel: A review. Comput Med Imaging Graph 2024; 113:102355. PMID: 38377630; DOI: 10.1016/j.compmedimag.2024.102355.
Abstract
Automatic retinal arteriovenous classification can assist ophthalmologists in early disease diagnosis. Deep learning-based methods and topological graph-based methods have become the main solutions for retinal arteriovenous classification in recent years. This paper reviews automatic retinal arteriovenous classification methods from 2003 to 2022. First, we compare different methods and provide summary comparison tables. Second, we categorize the public arteriovenous classification datasets and tabulate the development of their annotations. Finally, we sort out the challenges of evaluation methods and provide a comprehensive evaluation system. Quantitative and qualitative analyses reveal the evolution of research hotspots over time, highlighting the significance of exploring the integration of deep learning with topological information in future research.
Affiliation(s)
- Qihan Chen: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Jianqing Peng: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China; Guangdong Provincial Key Laboratory of Fire Science and Technology, Guangzhou 510006, China
- Shen Zhao: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Wanquan Liu: School of Intelligent Systems Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
3. Ma JP, Robbins CB, Pead E, McGrory S, Hamid C, Grewal DS, Scott BL, Trucco E, MacGillivray TJ, Fekrat S. Ultra-Widefield Imaging of the Retinal Macrovasculature in Parkinson Disease Versus Controls With Normal Cognition Using Alpha-Shapes Analysis. Transl Vis Sci Technol 2024; 13:15. PMID: 38231496; PMCID: PMC10795547; DOI: 10.1167/tvst.13.1.15.
Abstract
Purpose: To investigate retinal vascular characteristics using ultra-widefield (UWF) scanning laser ophthalmoscopy in Parkinson disease (PD).
Methods: Individuals with an expert-confirmed clinical diagnosis of PD and controls with normal cognition without PD underwent Optos California UWF imaging. Patients with diabetes, uncontrolled hypertension, glaucoma, dementia, other movement disorders, or known retinal or optic nerve pathology were excluded. Images were analyzed using Vasculature Assessment and Measurement Platform for Images of the Retina (VAMPIRE-UWF) software, which describes retinal vessel width gradient and tortuosity, provides vascular network fractal dimensions, and conducts alpha-shape analysis to further characterize vascular morphology (complexity, Opαmin; spread, OpA).
Results: In the PD cohort, 53 eyes of 38 subjects were assessed; in the control cohort, 51 eyes of 33 subjects were assessed. Eyes with PD had more tortuous retinal arteries in the superotemporal quadrant (P = 0.043). In eyes with PD, alpha-shape analysis revealed decreased OpA, indicating less retinal vasculature spread compared to controls (P = 0.032). Opαmin was decreased in PD (P = 0.044), suggesting increased vascular network complexity.  No differences were observed in fractal dimension in any region of interest.
Conclusions: This pilot study suggests that retinal vasculature assessment on UWF images using alpha-shape analysis reveals differences in retinal vascular network spread and complexity in PD and may be a more sensitive metric than fractal dimension.
Translational Relevance: Retinal vasculature assessment using these novel methods may be useful in understanding ocular manifestations of PD and in the development of retinal biomarkers.
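The alpha-shape analysis used in this study generalizes the convex hull of a vascular point set: only Delaunay triangles whose circumradius is below 1/alpha are kept, so small alpha recovers the hull while large alpha carves out gaps. VAMPIRE-UWF itself is not open source, so the following is only an illustrative sketch of the underlying alpha-complex area computation; the function name and the area-based summary are assumptions, not the paper's actual metrics.

```python
import numpy as np
from scipy.spatial import Delaunay


def alpha_shape_area(points, alpha):
    """Area of the 2-D alpha-complex: the union of Delaunay triangles
    whose circumradius is smaller than 1/alpha."""
    tri = Delaunay(points)
    area = 0.0
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        s = (la + lb + lc) / 2.0
        # Heron's formula; clamp tiny negatives from rounding
        tri_area = max(s * (s - la) * (s - lb) * (s - lc), 0.0) ** 0.5
        if tri_area == 0.0:
            continue  # degenerate (collinear) triangle
        circumradius = la * lb * lc / (4.0 * tri_area)
        if circumradius < 1.0 / alpha:
            area += tri_area
    return area
```

For a dense point cloud sampled from the vasculature, sweeping alpha and tracking how the retained area shrinks gives exactly the kind of spread/complexity summary the abstract describes.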
Affiliation(s)
- Justin P. Ma: iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Cason B. Robbins: iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Emma Pead: VAMPIRE Project, Centre for Clinical Brain Science, University of Edinburgh, Edinburgh, UK
- Sarah McGrory: VAMPIRE Project, Centre for Clinical Brain Science, University of Edinburgh, Edinburgh, UK
- Charlene Hamid: VAMPIRE Project, Centre for Clinical Brain Science, University of Edinburgh, Edinburgh, UK
- Dilraj S. Grewal: iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Burton L. Scott: iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Neurology, Duke University School of Medicine, Durham, NC, USA
- Tom J. MacGillivray: VAMPIRE Project, Centre for Clinical Brain Science, University of Edinburgh, Edinburgh, UK
- Sharon Fekrat: iMIND Study Group, Duke University School of Medicine, Durham, NC, USA; Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA; Department of Neurology, Duke University School of Medicine, Durham, NC, USA
4. Suman S, Tiwari AK, Singh K. Computer-aided diagnostic system for hypertensive retinopathy: A review. Comput Methods Programs Biomed 2023; 240:107627. PMID: 37320942; DOI: 10.1016/j.cmpb.2023.107627.
Abstract
Hypertensive retinopathy (HR) is a retinal disease caused by elevated blood pressure sustained over a prolonged period. There are no obvious signs in the early stages of high blood pressure, but it affects various body parts over time, including the eyes. HR is a biomarker for several illnesses, including retinal diseases, atherosclerosis, stroke, kidney disease, and cardiovascular risk. Early microcirculation abnormalities in chronic diseases can be diagnosed through retinal examination before the onset of major clinical consequences. Computer-aided diagnosis (CAD) plays a vital role in the early identification of HR with improved diagnostic accuracy, while being time-efficient and demanding fewer resources. Recently, numerous studies have been reported on the automatic identification of HR. This paper provides a comprehensive review of the automated tasks of artery-vein (A/V) classification, arteriovenous ratio (AVR) computation, HR detection (binary classification), and HR severity grading. The review is conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The paper discusses the clinical features of HR, the availability of datasets, existing methods for A/V classification, AVR computation, HR detection, and severity grading, and performance evaluation metrics. The reviewed articles are summarized in terms of classifier details, methodologies adopted, performance comparisons, dataset details with their pros and cons, and computational platforms. For each task, a summary and a critical in-depth analysis are provided, along with common research issues and challenges in the existing studies. Finally, the paper proposes future research directions to overcome challenges associated with dataset availability, HR detection, and severity grading.
Affiliation(s)
- Supriya Suman: Interdisciplinary Research Platform (IDRP): Smart Healthcare, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Anil Kumar Tiwari: Department of Electrical Engineering, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
- Kuldeep Singh: Department of Pediatrics, All India Institute of Medical Sciences, Basni Industrial Area Phase-2, Jodhpur, Rajasthan 342005, India
5. Kv R, Prasad K, Peralam Yegneswaran P. Segmentation and Classification Approaches of Clinically Relevant Curvilinear Structures: A Review. J Med Syst 2023; 47:40. PMID: 36971852; PMCID: PMC10042761; DOI: 10.1007/s10916-023-01927-2.
Abstract
Detection of curvilinear structures in microscopic images, which helps clinicians make an unambiguous diagnosis, has assumed paramount importance in recent clinical practice. The appearance and size of dermatophytic hyphae, keratitic fungi, and corneal and retinal vessels vary widely, making their automated detection cumbersome. Deep learning methods, endowed with superior self-learning capacity, have superseded traditional machine learning methods, especially for complex images with challenging backgrounds. Their ability to learn features automatically from large input data, with better generalization and recognition capability and without human interference or excessive pre-processing, is highly beneficial in this context. Researchers have made varied attempts to overcome challenges such as thin vessels, bifurcations, and obstructive lesions in retinal vessel detection, as the publications reviewed here reveal. Diabetic neuropathic complications such as tortuosity and changes in the density and angles of corneal fibers have likewise been successfully addressed in many of the reviewed publications. Since artifacts complicate images and affect the quality of analysis, methods addressing these challenges are also described. This review summarizes traditional and deep learning methods published between 2015 and 2021 covering retinal vessels, corneal nerves, and filamentous fungi. We find several novel and meritorious ideas and techniques in retinal vessel segmentation and classification which, via cross-domain adaptation, could also be applied to corneal nerves and filamentous fungi, with suitable adaptation to the challenges involved.
Affiliation(s)
- Rajitha Kv: Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Keerthana Prasad: Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
- Prakash Peralam Yegneswaran: Department of Microbiology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104, Karnataka, India
6. Pead E, Thompson AC, Grewal DS, McGrory S, Robbins CB, Ma JP, Johnson KG, Liu AJ, Hamid C, Trucco E, Ritchie CW, Muniz G, Lengyel I, Dhillon B, Fekrat S, MacGillivray T. Retinal Vascular Changes in Alzheimer's Dementia and Mild Cognitive Impairment: A Pilot Study Using Ultra-Widefield Imaging. Transl Vis Sci Technol 2023; 12:13. PMID: 36622689; PMCID: PMC9838583; DOI: 10.1167/tvst.12.1.13.
Abstract
Purpose: Retinal microvascular abnormalities measured on retinal images are a potential source of prognostic biomarkers of vascular changes in the neurodegenerating brain. We assessed the presence of these abnormalities in Alzheimer's dementia and mild cognitive impairment (MCI) using ultra-widefield (UWF) retinal imaging.
Methods: UWF images from 103 participants (28 with Alzheimer's dementia, 30 with MCI, and 45 with normal cognition) underwent analysis to quantify measures of retinal vascular branching complexity, width, and tortuosity.
Results: Participants with Alzheimer's dementia displayed increased vessel branching in the midperipheral retina and increased arteriolar thinning. Participants with MCI displayed increased rates of arteriolar and venular thinning and a trend for decreased vessel branching.
Conclusions: Statistically significant differences in the retinal vasculature in peripheral regions of the retina were observed among the distinct cognitive stages. However, larger studies are required to establish the clinical importance of our findings. UWF imaging may be a promising modality to assess a larger view of the retinal vasculature to uncover retinal changes in Alzheimer's disease.
Translational Relevance: This pilot work reports an investigation into which retinal vasculature measurements may be useful surrogate measures of cognitive decline, as well as the technical developments (e.g., measurement standardization) that are first required to establish their recommended use and translational potential.
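Tortuosity, one of the vessel measures quantified above, is most simply defined as the ratio of a centerline's arc length to the straight-line distance between its endpoints. Published pipelines often use more elaborate curvature-based variants, so the following is only a minimal illustrative sketch of the arc-chord definition, with an assumed function name:

```python
import numpy as np


def arc_chord_tortuosity(centerline):
    """Arc length over chord length of a sampled 2-D vessel centerline.
    Equals 1.0 for a perfectly straight segment and grows as the vessel winds."""
    pts = np.asarray(centerline, dtype=float)
    # sum of distances between consecutive centerline samples
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    # straight-line distance between the two endpoints
    chord = np.linalg.norm(pts[-1] - pts[0])
    return float(arc / chord)
```

In practice the centerline would come from skeletonizing a segmented vessel; comparing this ratio across quadrants mirrors the per-region comparisons reported in the abstract.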
Affiliation(s)
- Emma Pead: VAMPIRE Project, Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK
- Atalie C. Thompson: Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Dilraj S. Grewal: Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Sarah McGrory: VAMPIRE Project, Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK
- Cason B. Robbins: Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Justin P. Ma: Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Kim G. Johnson: Department of Neurology, Duke University School of Medicine, Durham, NC, USA
- Andy J. Liu: Department of Neurology, Duke University School of Medicine, Durham, NC, USA
- Charlene Hamid: Edinburgh Clinical Research Facility, The University of Edinburgh, Edinburgh, UK
- Emanuele Trucco: VAMPIRE Project, Computer Vision and Image Processing, Computing (SSE), The University of Dundee, Dundee, UK
- Craig W. Ritchie: Edinburgh Dementia Prevention, The University of Edinburgh, Edinburgh, UK
- Graciela Muniz: Department of Social Medicine, Ohio University, Athens, OH, USA
- Imre Lengyel: The Wellcome-Wolfson Institute for Experimental Medicine, School of Medicine Dentistry and Biomedical Science, Queen's University Belfast, Belfast, UK
- Baljean Dhillon: VAMPIRE Project, Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK; Princess Alexandra Eye Pavilion, NHS Lothian, Edinburgh, UK
- Sharon Fekrat: Department of Ophthalmology, Duke University School of Medicine, Durham, NC, USA
- Tom MacGillivray: VAMPIRE Project, Centre for Clinical Brain Sciences, The University of Edinburgh, Edinburgh, UK
7. Wu YS, Hung SK. Origami Inspired Laser Scanner. Micromachines (Basel) 2022; 13:1796. PMID: 36296149; PMCID: PMC9611993; DOI: 10.3390/mi13101796.
Abstract
Diverse origami techniques and various selections of paper open new possibilities for creating micromachines. By folding paper, this article proposes an original approach to building laser scanners, which steer optical beams precisely and enable valuable applications including laser marking, cutting, engraving, and displaying. A prototype has been designed, implemented, actuated, and controlled. The experimental results demonstrate an angular stroke of 20°, repeatability of 0.849 m°, full-scale settling time of 330 ms, and resonant frequency of 68 Hz. Its durability of more than 35 million cycles shows its potential to carry out demanding tasks.
8. Doney ASF, Nar A, Huang Y, Trucco E, MacGillivray T, Connelly P, Leese GP, McKay GJ. Retinal vascular measures from diabetes retinal screening photographs and risk of incident dementia in type 2 diabetes: A GoDARTS study. Front Digit Health 2022; 4:945276. PMID: 36120710; PMCID: PMC9470757; DOI: 10.3389/fdgth.2022.945276.
Abstract
Objective: Patients with diabetes have an increased risk of dementia. Improved prediction of dementia is an important goal in developing future prevention strategies. Diabetic retinopathy screening (DRS) photographs may be a convenient source of imaging biomarkers of brain health. We therefore investigated the association of retinal vascular measures (RVMs) from DRS photographs with dementia risk in patients with type 2 diabetes.
Research Design and Methods: RVMs were obtained from 6,111 patients in the GoDARTS bioresource (635 incident cases) using VAMPIRE software. Their association, independent of ApoE4 genotype and clinical parameters, was determined for incident all-cause dementia (ACD) and separately for Alzheimer's disease (AD) and vascular dementia (VD). We used Cox proportional hazards with the competing risk of death without dementia. The potential of RVMs to increase the accuracy of risk prediction was evaluated.
Results: Increased retinal arteriolar fractal dimension was associated with increased risk of ACD (cause-specific hazard ratio, csHR 1.17; 1.08-1.26) and AD (csHR 1.33; 1.16-1.52), whereas increased venular fractal dimension (FDV) was associated with reduced risk of AD (csHR 0.85; 0.74-0.96). Conversely, FDV was associated with increased risk of VD (csHR 1.22; 1.07-1.40). Wider arteriolar caliber was associated with a reduced risk of ACD (csHR 0.90; 0.83-0.98), and wider venular caliber was associated with a reduced risk of AD (csHR 0.87; 0.78-0.97). Accounting for competing risk did not substantially alter these findings. RVMs significantly increased the accuracy of prediction.
Conclusions: Conventional DRS photographs could enhance stratification of patients with diabetes at increased risk of dementia, facilitating the development of future prevention strategies.
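The fractal dimension measures above are typically estimated by box counting on a binarized vessel map: the number of occupied boxes N(s) scales as s^(-D), so D is the slope of log N(s) against log(1/s). VAMPIRE's exact implementation is not reproduced here; this is a minimal sketch of the box-counting idea with an assumed function name and box sizes:

```python
import numpy as np


def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary vessel mask by box counting:
    slope of log(box count) versus log(1/box size)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        # crop to a multiple of s, then tile into s x s boxes
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        # count boxes containing at least one foreground (vessel) pixel
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```

A space-filling region approaches dimension 2, a thin curve approaches 1, and real vascular trees land in between, which is what makes the measure a compact summary of branching density.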
Affiliation(s)
- Aditya Nar: Population Health and Genomics, University of Dundee, Dundee, United Kingdom
- Yu Huang: Population Health and Genomics, University of Dundee, Dundee, United Kingdom
- Emanuele Trucco: VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, United Kingdom
- Tom MacGillivray: VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Peter Connelly: NHS Tayside; NHS Research Scotland Neuroprogressive Disorders and Dementia Research Network, Ninewells Hospital Dundee; University of Dundee, Dundee, Scotland
- Graham P. Leese: Population Health and Genomics, University of Dundee, Dundee, United Kingdom
- Gareth J. McKay: Centre for Public Health, Queen's University Belfast, Belfast, NIR, United Kingdom
9. Binh NT, Hien NM, Tin DT. Improving U-Net architecture and graph cuts optimization to classify arterioles and venules in retina fundus images. J Intell Fuzzy Syst 2022. DOI: 10.3233/jifs-212259.
Abstract
The central retinal artery and its branches supply blood to the inner retina. Vascular manifestations in the retina indirectly reflect vascular changes and damage in organs such as the heart, kidneys, and brain, because these organs have a similar vascular structure. Increased venular caliber is associated with diabetic retinopathy and an elevated risk of stroke, and the severity of these diseases depends on changes in the arterioles and venules. The ratio between the calibers of arterioles and venules (AVR) varies accordingly and is considered a useful diagnostic indicator of different associated health problems. However, the task is not easy because of the lack of information about the features used to classify retinal vessels as arterioles or venules. This paper proposes a method to classify retinal vessels into arterioles and venules based on an improved U-Net architecture and graph cuts. The accuracy of the proposed method is about 97.6%, and its results surpass those of other methods on the RITE and AVRDB datasets.
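Once vessels are labeled as arterioles or venules, the AVR discussed above is conventionally computed from summary calibers, e.g. via the Knudtson revision of the Parr-Hubbard formulas: the six largest arterioles (venules) in the measurement zone are iteratively paired widest-with-narrowest, each pair combined as 0.88·sqrt(w1² + w2²) for arterioles (0.95 for venules), yielding the central retinal artery and vein equivalents (CRAE, CRVE) and AVR = CRAE/CRVE. This paper does not publish its AVR code, so the sketch below is an assumption-laden illustration of that convention; carrying a leftover middle vessel forward unchanged is a simplification.

```python
import math


def knudtson_combine(widths, k):
    """Iteratively pair the widest with the narrowest caliber, combining each
    pair as k * sqrt(w1^2 + w2^2), until a single summary value remains."""
    ws = sorted(widths, reverse=True)[:6]  # six largest vessels, per convention
    while len(ws) > 1:
        mid = ws.pop(len(ws) // 2) if len(ws) % 2 == 1 else None
        combined = [k * math.hypot(ws[i], ws[-(i + 1)]) for i in range(len(ws) // 2)]
        if mid is not None:
            combined.append(mid)  # odd count: carry the middle vessel forward
        ws = sorted(combined, reverse=True)
    return ws[0]


def avr(arteriole_widths, venule_widths):
    """Arteriole-to-venule ratio from measured calibers (pixels or microns)."""
    crae = knudtson_combine(arteriole_widths, 0.88)  # central retinal artery equivalent
    crve = knudtson_combine(venule_widths, 0.95)     # central retinal vein equivalent
    return crae / crve
```

Because the units cancel in the ratio, calibers can be measured in pixels as long as arterioles and venules come from the same image.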
Affiliation(s)
- Nguyen Thanh Binh: Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam; Vietnam National University Ho Chi Minh City, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam
- Nguyen Mong Hien: Department of Information Systems, Faculty of Computer Science and Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam; Tra Vinh University, Vietnam
- Dang Thanh Tin: Vietnam National University Ho Chi Minh City, Linh Trung Ward, Thu Duc District, Ho Chi Minh City, Vietnam; Information Systems Engineering Laboratory, Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology (HCMUT), Ho Chi Minh City, Vietnam
10. Hu J, Wang H, Cao Z, Wu G, Jonas JB, Wang YX, Zhang J. Automatic Artery/Vein Classification Using a Vessel-Constraint Network for Multicenter Fundus Images. Front Cell Dev Biol 2021; 9:659941. PMID: 34178986; PMCID: PMC8226261; DOI: 10.3389/fcell.2021.659941.
Abstract
Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases, so automatic artery/vein (A/V) classification is particularly important for medical image analysis and clinical decision making. However, current methods still have limitations in A/V classification, especially errors at vessel edges and ends caused by single-scale processing and the blurred boundary between arteries and veins. To alleviate these problems, we propose a vessel-constraint network (VC-Net), a high-precision A/V classification model based on data fusion that uses vessel distribution and edge information to enhance A/V classification. In particular, VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map constraining the A/V features, which suppresses background-prone features and enhances the edge and end features of blood vessels. In addition, VC-Net employs a multiscale feature (MSF) module that extracts blood vessel information at different scales to improve the feature extraction capability and robustness of the model, and it produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets of different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets, Tongren and Kailuan. On the DRIVE dataset we achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for arteries and veins, respectively. The experimental results show that the proposed model achieves competitive performance in A/V classification and vessel segmentation compared with state-of-the-art methods. Finally, testing the Kailuan dataset with models trained on the other fused datasets also shows good robustness. To promote research in this area, the Tongren dataset and source code will be made available at https://github.com/huawang123/VC-Net.
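The balanced accuracy and per-class F1 reported above are standard metrics for two-class A/V labeling over vessel pixels; balanced accuracy averages per-class recall, which matters because artery and vein pixel counts are rarely equal. The following is an illustrative sketch of these metrics, not the authors' evaluation code:

```python
import numpy as np


def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, robust to artery/vein class imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))


def f1(y_true, y_pred, positive):
    """Harmonic mean of precision and recall for one class (artery or vein)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    return float(2 * tp / (2 * tp + fp + fn))
```

Reporting F1 separately for arteries and veins, as the abstract does, exposes asymmetric failure modes that a single overall accuracy would hide.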
Affiliation(s)
- Jingfei Hu: School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Hua Wang: School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Zhaohui Cao: Hefei Innovation Research Institute, Beihang University, Hefei, China
- Guang Wu: Hefei Innovation Research Institute, Beihang University, Hefei, China
- Jost B. Jonas: Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China; Department of Ophthalmology, Medical Faculty Mannheim of the Ruprecht-Karls-University Heidelberg, Mannheim, Germany
- Ya Xing Wang: Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Jicong Zhang: School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China
11. Xie H, Zeng X, Lei H, Du J, Wang J, Zhang G, Cao J, Wang T, Lei B. Cross-attention multi-branch network for fundus diseases classification using SLO images. Med Image Anal 2021; 71:102031. PMID: 33798993; DOI: 10.1016/j.media.2021.102031.
Abstract
Fundus disease classification is vital for human health. However, most existing methods detect diseases from single-angle fundus images, which lack pathological information. To address this limitation, this paper proposes a novel deep learning method for different fundus disease classification tasks using ultra-wide-field scanning laser ophthalmoscopy (SLO) images, which have an ultra-wide field of view of 180-200°. The proposed deep model consists of a multi-branch network, an atrous spatial pyramid pooling (ASPP) module, a cross-attention module, and a depth-wise attention module. Specifically, the multi-branch network employs the ResNet-34 model as the backbone to extract feature information, where the two-branch ResNet-34 model is followed by the ASPP module, which extracts multi-scale spatial contextual features by setting different dilation rates. The depth-wise attention module provides a global attention map from the multi-branch network, enabling the network to focus on salient targets of interest. The cross-attention module adopts a cross-fusion mode to fuse the channel and spatial attention maps from the two-branch ResNet-34 model, which enhances the representation of disease-specific features. Extensive experiments on our collected SLO images and two publicly available datasets demonstrate that the proposed method outperforms state-of-the-art methods and achieves promising classification performance on fundus diseases.
Affiliation(s)
- Hai Xie: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Xianlu Zeng: Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Haijun Lei: Guangdong Province Key Laboratory of Popular High-performance Computers, School of Computer and Software Engineering, Shenzhen University, Shenzhen, China
- Jie Du: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Jiantao Wang: Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Guoming Zhang: Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Jiuwen Cao: Key Lab for IOT and Information Fusion Technology of Zhejiang, Artificial Intelligence Institute, Hangzhou Dianzi University, Hangzhou, China
- Tianfu Wang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Baiying Lei: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
|
12
|
Mookiah MRK, Hogg S, MacGillivray TJ, Prathiba V, Pradeepa R, Mohan V, Anjana RM, Doney AS, Palmer CNA, Trucco E. A review of machine learning methods for retinal blood vessel segmentation and artery/vein classification. Med Image Anal 2020; 68:101905. [PMID: 33385700 DOI: 10.1016/j.media.2020.101905] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 11/10/2020] [Accepted: 11/11/2020] [Indexed: 12/20/2022]
Abstract
The eye affords a unique opportunity to inspect a rich part of the human microvasculature non-invasively via retinal imaging. Retinal blood vessel segmentation and classification are prime steps for the diagnosis and risk assessment of microvascular and systemic diseases. A high volume of techniques based on deep learning have been published in recent years. In this context, we review 158 papers published between 2012 and 2020, focussing on methods based on machine and deep learning (DL) for automatic vessel segmentation and classification for fundus camera images. We divide the methods into various classes by task (segmentation or artery-vein classification), technique (supervised or unsupervised, deep and non-deep learning, hand-crafted methods) and more specific algorithms (e.g. multiscale, morphology). We discuss advantages and limitations, and include tables summarising results at-a-glance. Finally, we attempt to assess the quantitative merit of DL methods in terms of accuracy improvement compared to other methods. The results allow us to offer our views on the outlook for vessel segmentation and classification for fundus camera images.
Affiliation(s)
- Stephen Hogg
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
- Tom J MacGillivray
- VAMPIRE project, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh EH16 4SB, UK
- Vijayaraghavan Prathiba
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Rajendra Pradeepa
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Viswanathan Mohan
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Ranjit Mohan Anjana
- Madras Diabetes Research Foundation and Dr. Mohan's Diabetes Specialities Centre, Gopalapuram, Chennai 600086, India
- Alexander S Doney
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee DD1 9SY, UK
- Colin N A Palmer
- Division of Population Health and Genomics, Ninewells Hospital and Medical School, University of Dundee, Dundee DD1 9SY, UK
- Emanuele Trucco
- VAMPIRE project, Computing (SSEN), University of Dundee, Dundee DD1 4HN, UK
13
Kang H, Gao Y, Guo S, Xu X, Li T, Wang K. AVNet: A retinal artery/vein classification network with category-attention weighted fusion. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 195:105629. [PMID: 32634648 DOI: 10.1016/j.cmpb.2020.105629] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Accepted: 06/21/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic artery/vein (A/V) classification in retinal images is of great importance in detecting vascular abnormalities, which may provide biomarkers for the early diagnosis of many systemic diseases. It is intuitive to apply a popular deep semantic segmentation network to A/V classification. However, the model must provide powerful representation ability, since vessels are much more complex than general objects. Moreover, a deep network may produce inconsistent classification results for the same vessel due to the lack of a structured optimization objective. METHODS In this paper, we propose a novel segmentation network named AVNet, which effectively enhances the classification ability of the model by integrating a category-attention weighted fusion (CWF) module, significantly improving pixel-level A/V classification results. Then, a graph-based vascular structure reconstruction (VSR) algorithm is employed to reduce segment-wise inconsistency, verifying the effect of the graph model on noisy vessel segmentation results. RESULTS The proposed method has been verified on three datasets, i.e. DRIVE, LES-AV and WIDE. AVNet achieves pixel-level accuracies of 90.62%, 90.34%, and 93.16%, respectively, and VSR further improves performance by 0.19%, 1.85% and 0.64%, achieving state-of-the-art results on these three datasets. CONCLUSION The proposed method achieves competitive performance in the A/V classification task.
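The category-attention idea — weighting per-category feature maps by category-level attention scores before fusing them — can be sketched roughly as follows (a heavily simplified stand-in for the paper's CWF module; the function names and the plain softmax weighting are my assumptions, not the published design):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def category_weighted_fusion(feats, cat_logits):
    """Fuse per-category feature maps with attention weights derived from
    category logits (softmax over the category axis).

    feats      : (C, H, W) one feature map per category (e.g. artery/vein/bg)
    cat_logits : (C,)      per-category attention scores
    returns    : (H, W)    attention-weighted sum of the maps
    """
    w = softmax(cat_logits)                 # (C,) weights summing to 1
    return np.tensordot(w, feats, axes=1)   # weighted sum over categories
```

In AVNet the attention scores are themselves learned from the features; here they are passed in directly to keep the sketch self-contained.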
Affiliation(s)
- Hong Kang
- College of Computer Science, Nankai University, Tianjin, China; Beijing Shanggong Medical Technology Co. Ltd., China
- Yingqi Gao
- College of Computer Science, Nankai University, Tianjin, China
- Song Guo
- College of Computer Science, Nankai University, Tianjin, China
- Xia Xu
- College of Computer Science, Nankai University, Tianjin, China
- Tao Li
- College of Computer Science, Nankai University, Tianjin, China; State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Kai Wang
- College of Computer Science, Nankai University, Tianjin, China; Key Laboratory for Medical Data Analysis and Statistical Research of Tianjin, China
14
Xie H, Lei H, Zeng X, He Y, Chen G, Elazab A, Yue G, Wang J, Zhang G, Lei B. AMD-GAN: Attention encoder and multi-branch structure based generative adversarial networks for fundus disease detection from scanning laser ophthalmoscopy images. Neural Netw 2020; 132:477-490. [PMID: 33039786 DOI: 10.1016/j.neunet.2020.09.005] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Revised: 08/03/2020] [Accepted: 09/06/2020] [Indexed: 12/23/2022]
Abstract
In recent years, scanning laser ophthalmoscopy (SLO) has become an important tool for the determination of peripheral retinal pathology. However, the collected SLO images are easily interfered with by eyelashes and the frame of the device, which heavily affects key feature extraction from the images. To address this, we propose a generative adversarial network called AMD-GAN, based on an attention encoder (AE) and a multi-branch (MB) structure, for fundus disease detection from SLO images. Specifically, the designed generator consists of two parts: the AE and the generation flow network. Real SLO images are encoded by the AE module to extract features, while the generation flow network transforms random Gaussian noise through a series of residual blocks with up-sampling (RU) operations to generate fake images of the same size as the real ones; the AE is also used to mine features for the generator. For the discriminator, a ResNet network using the MB structure is devised by copying the stage-3 and stage-4 structures of the ResNet-34 model to extract deep features. Furthermore, depth-wise asymmetric dilated convolution is leveraged to extract local high-level contextual features and accelerate the training process. Besides, the last layer of the discriminator is modified to build the classifier that separates diseased from normal SLO images, and the prior knowledge of experts is utilized to improve the detection results. Experimental results on two local SLO datasets demonstrate that our proposed method is promising in detecting diseased and normal SLO images against expert labeling.
Affiliation(s)
- Hai Xie
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Haijun Lei
- School of Computer and Software Engineering, Shenzhen University, Guangdong Province Key Laboratory of Popular High-performance Computers, Shenzhen, China
- Xianlu Zeng
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Yejun He
- College of Electronics and Information Engineering, Shenzhen University, China; Guangdong Engineering Research Center of Base Station Antennas and Propagation, Shenzhen Key Lab of Antennas and Propagation, Shenzhen, China
- Guozhen Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Ahmed Elazab
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Computer Science Department, Misr Higher Institute for Commerce and Computers, Mansoura, Egypt
- Guanghui Yue
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Jiantao Wang
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Guoming Zhang
- Shenzhen Eye Hospital, Shenzhen Key Ophthalmic Laboratory, Health Science Center, Shenzhen University, The Second Affiliated Hospital of Jinan University, Shenzhen, China
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
15
Wang Z, Jiang X, Liu J, Cheng KT, Yang X. Multi-Task Siamese Network for Retinal Artery/Vein Separation via Deep Convolution Along Vessel. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2904-2919. [PMID: 32167888 DOI: 10.1109/tmi.2020.2980117] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Vascular tree disentanglement and vessel type classification are two crucial steps of graph-based methods for retinal artery-vein (A/V) separation. Existing approaches treat them as two independent tasks and mostly rely on ad hoc rules (e.g. changes of vessel direction) and hand-crafted features (e.g. color, thickness) to handle them respectively. However, we argue that the two tasks are highly correlated and should be handled jointly, since knowing the A/V type can unravel highly entangled vascular trees, which in turn helps to infer the types of connected vessels that are hard to classify from appearance alone. Therefore, designing features and models for the two tasks in isolation often leads to a suboptimal A/V separation. In view of this, this paper proposes a multi-task siamese network which learns the two tasks jointly and thus yields more robust deep features for accurate A/V separation. Specifically, we first introduce Convolution Along Vessel (CAV) to extract visual features by convolving a fundus image along vessel segments, and geometric features by tracking the directions of blood flow in vessels. The siamese network is then trained on multiple tasks: i) classifying the A/V type of vessel segments using visual features only, and ii) estimating the similarity of every two connected segments by comparing their visual and geometric features, in order to disentangle the vasculature into individual vessel trees. Finally, the results of the two tasks mutually correct each other to accomplish the final A/V separation. Experimental results demonstrate that our method achieves accuracy values of 94.7%, 96.9%, and 94.5% on three major databases (DRIVE, INSPIRE, WIDE) respectively, outperforming recent state-of-the-art methods.
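The "Convolution Along Vessel" ingredient pairs a visual feature (image values sampled along the vessel) with a geometric one (local vessel direction). A small NumPy sketch of that sampling step (an illustrative simplification, not the paper's CAV operator; the function name is my own):

```python
import numpy as np

def along_vessel_profile(img, centerline):
    """Sample an image along a vessel centerline (nearest-neighbour),
    giving a 1-D 'unrolled' intensity profile, plus unit tangent vectors
    approximating the local vessel direction.

    img        : (H, W) grayscale image
    centerline : (N, 2) sequence of (row, col) points along the vessel
    """
    pts = np.asarray(centerline, dtype=float)
    rows = np.clip(np.round(pts[:, 0]).astype(int), 0, img.shape[0] - 1)
    cols = np.clip(np.round(pts[:, 1]).astype(int), 0, img.shape[1] - 1)
    profile = img[rows, cols]                     # visual feature (intensity)
    d = np.gradient(pts, axis=0)                  # finite-difference tangents
    tangents = d / np.linalg.norm(d, axis=1, keepdims=True)  # geometric feature
    return profile, tangents
```

In the full method the profile would be fed to learned 1-D convolutions and the tangents would encode blood-flow direction; this sketch only shows the along-vessel sampling itself.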
16
Yang J, Dong X, Hu Y, Peng Q, Tao G, Ou Y, Cai H, Yang X. Fully Automatic Arteriovenous Segmentation in Retinal Images via Topology-Aware Generative Adversarial Networks. Interdiscip Sci 2020; 12:323-334. [DOI: 10.1007/s12539-020-00385-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2020] [Revised: 06/16/2020] [Accepted: 07/08/2020] [Indexed: 10/23/2022]
17
Rajan SP. Recognition of Cardiovascular Diseases through Retinal Images Using Optic Cup to Optic Disc Ratio. PATTERN RECOGNITION AND IMAGE ANALYSIS 2020. [DOI: 10.1134/s105466182002011x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
18
Zhao Y, Xie J, Zhang H, Zheng Y, Zhao Y, Qi H, Zhao Y, Su P, Liu J, Liu Y. Retinal Vascular Network Topology Reconstruction and Artery/Vein Classification via Dominant Set Clustering. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:341-356. [PMID: 31283498 DOI: 10.1109/tmi.2019.2926492] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The estimation of vascular network topology in complex networks is important in understanding the relationship between vascular changes and a wide spectrum of diseases. Automatic classification of the retinal vascular trees into arteries and veins is of direct assistance to the ophthalmologist in terms of diagnosis and treatment of eye disease. However, it is challenging due to their projective ambiguity and subtle changes in appearance, contrast, and geometry in the imaging process. In this paper, we propose a novel method that is capable of making the artery/vein (A/V) distinction in retinal color fundus images based on vascular network topological properties. To this end, we adapt the concept of dominant set clustering and formalize the retinal blood vessel topology estimation and the A/V classification as a pairwise clustering problem. The graph is constructed through image segmentation, skeletonization, and identification of significant nodes. The edge weight is defined as the inverse Euclidean distance between its two end points in the feature space of intensity, orientation, curvature, diameter, and entropy. The reconstructed vascular network is classified into arteries and veins based on their intensity and morphology. The proposed approach has been applied to five public databases, namely INSPIRE, IOSTAR, VICAVR, DRIVE, and WIDE, and achieved high accuracies of 95.1%, 94.2%, 93.8%, 91.1%, and 91.0%, respectively. Furthermore, we have made manual annotations of the blood vessel topologies for INSPIRE, IOSTAR, VICAVR, and DRIVE datasets, and these annotations are released for public access so as to facilitate researchers in the community.
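The edge-weight definition in this abstract — the inverse Euclidean distance between an edge's two endpoints in the feature space of intensity, orientation, curvature, diameter, and entropy — is straightforward to sketch (a minimal illustration; the `eps` guard and the function name are my additions):

```python
import numpy as np

def edge_weights(features, edges, eps=1e-8):
    """Edge weight = inverse Euclidean distance between the two endpoint
    nodes in feature space: similar nodes get heavy edges.

    features : (N, D) per-node feature vectors (intensity, orientation, ...)
    edges    : iterable of (i, j) node-index pairs
    returns  : dict mapping (i, j) -> weight
    """
    w = {}
    for i, j in edges:
        d = np.linalg.norm(features[i] - features[j])
        w[(i, j)] = 1.0 / (d + eps)   # eps guards against identical features
    return w
```

Dominant set clustering then operates on the weighted graph these values define, grouping strongly inter-connected nodes into the same vessel tree.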
19
Srinidhi CL, P A, Rajan J. Automated Method for Retinal Artery/Vein Separation via Graph Search Metaheuristic Approach. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:2705-2718. [PMID: 30605099 DOI: 10.1109/tip.2018.2889534] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Separation of the vascular tree into arteries and veins is a fundamental prerequisite for the automatic diagnosis of retinal biomarkers associated with systemic and neurodegenerative diseases. In this paper, we present a novel graph search metaheuristic approach for the automatic separation of arteries and veins (A/V) in color fundus images. Our method exploits local information to disentangle the complex vascular tree into multiple subtrees, and global information to label these vessel subtrees as arteries or veins. Given a binary vessel map, a graph representation of the vascular network is constructed, representing the topological and spatial connectivity of the vascular structures. Based on the anatomical uniqueness at vessel crossing and branching points, the vascular tree is split into multiple subtrees containing arteries and veins. Finally, the identified vessel subtrees are labeled A/V based on a set of handcrafted features trained with a random forest classifier. The proposed method has been tested on four different publicly available retinal datasets with average accuracies of 94.7%, 93.2%, 96.8% and 90.2% on the AV-DRIVE, CT-DRIVE, INSPIRE-AVR and WIDE datasets, respectively. These results demonstrate that our proposed approach outperforms state-of-the-art methods for A/V separation.
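The "split the tree at crossing and branching points" step common to this and the other graph-based approaches above can be illustrated with a toy routine that cuts the vessel graph at nodes of degree >= 3 and collects the remaining connected components (a simplified stand-in for the subtree disentanglement, not the authors' metaheuristic):

```python
from collections import defaultdict

def split_at_junctions(edges):
    """Cut a vessel graph at branching/crossing points (degree >= 3) and
    return the resulting connected components (candidate vessel segments).

    edges : iterable of (a, b) undirected node pairs
    """
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    junctions = {n for n, nb in adj.items() if len(nb) >= 3}

    seen, parts = set(), []
    for start in adj:
        if start in junctions or start in seen:
            continue
        comp, stack = set(), [start]
        while stack:                      # depth-first flood fill
            n = stack.pop()
            if n in seen or n in junctions:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(adj[n] - junctions - seen)
        parts.append(comp)
    return parts
```

In the published pipelines each resulting segment would then be assigned an A/V label (here, for example, by handcrafted features and a random forest) and segments regrouped into whole arterial and venous trees.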