1. Li Z, Huang G, Zou B, Chen W, Zhang T, Xu Z, Cai K, Wang T, Sun Y, Wang Y, Jin K, Huang X. Segmentation of Low-Light Optical Coherence Tomography Angiography Images under the Constraints of Vascular Network Topology. Sensors (Basel). 2024;24:774. [PMID: 38339491; PMCID: PMC10856982; DOI: 10.3390/s24030774]
Abstract
Optical coherence tomography angiography (OCTA) offers critical insights into the retinal vascular system, yet its full potential is hindered by challenges in precise image segmentation. Current methodologies struggle with imaging artifacts and clarity issues, particularly under low-light conditions and when using various high-speed CMOS sensors. These challenges are particularly pronounced when diagnosing and classifying diseases such as branch vein occlusion (BVO). To address these issues, we have developed a novel network based on topological structure generation, which transitions from superficial to deep retinal layers to enhance OCTA segmentation accuracy. Our approach not only demonstrates improved performance through qualitative visual comparisons and quantitative metric analyses but also effectively mitigates artifacts caused by low-light OCTA, resulting in reduced noise and enhanced clarity of the images. Furthermore, our system introduces a structured methodology for classifying BVO diseases, bridging a critical gap in this field. The primary aim of these advancements is to elevate the quality of OCTA images and bolster the reliability of their segmentation. Initial evaluations suggest that our method holds promise for establishing robust, fine-grained standards in OCTA vascular segmentation and analysis.
Affiliation(s)
- Zhi Li
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Gaopeng Huang
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Binfeng Zou
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Wenhao Chen
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Tianyun Zhang
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Zhaoyang Xu
- Department of Paediatrics, University of Cambridge, Cambridge CB2 1TN, UK
- Kunyan Cai
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
- Tingyu Wang
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Yaoqi Sun
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Lishui Institute, Hangzhou Dianzi University, Lishui 323000, China
- Yaqi Wang
- College of Media Engineering, Communication University of Zhejiang, Hangzhou 310018, China
- Kai Jin
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310027, China
- Xingru Huang
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London E3 4BL, UK
2. Dhirachaikulpanich D, Madhusudhan S, Parry D, Babiker S, Zheng Y, Beare NA. Retinal Vasculitis Severity Assessment: Intraobserver and Interobserver Reliability of a New Scheme for Grading Wide-Field Fluorescein Angiograms in Retinal Vasculitis. Retina. 2023;43:1534-1543. [PMID: 37229721; PMCID: PMC10442128; DOI: 10.1097/iae.0000000000003838]
Abstract
PURPOSE Wide-field fluorescein angiography is commonly used to assess retinal vasculitis (RV), which manifests as vascular leakage and occlusion. Currently, there is no standard grading scheme for RV severity. The authors propose a novel RV grading scheme and assess its reliability and reproducibility. METHODS A grading scheme was developed to assess both leakage and occlusion in RV. Wide-field fluorescein angiography images from 50 patients with RV were graded by four graders, and one grader graded them twice. The intraclass correlation coefficient (ICC) was used to determine intraobserver and interobserver reliability. Generalized linear models were fitted to associate the scores with visual acuity. RESULTS Repeated grading by the same grader showed good intraobserver reliability for both leakage (ICC = 0.85, 95% CI 0.78-0.89) and occlusion (ICC = 0.82, 95% CI 0.75-0.88) scores. Interobserver reliability among four independent graders showed good agreement for both leakage (ICC = 0.66, 95% CI 0.49-0.77) and occlusion (ICC = 0.75, 95% CI 0.68-0.81) scores. An increasing leakage score was significantly associated with worse concurrent visual acuity (generalized linear model, β = 0.090, P < 0.01) and with visual acuity at 1-year follow-up (generalized linear model, β = 0.063, P < 0.01). CONCLUSION The proposed grading scheme for RV has good to excellent intraobserver and interobserver reliability across a range of graders. The leakage score was related to present and future visual acuity.
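For readers implementing a similar reliability analysis, the two-way random-effects, single-rater ICC(2,1) commonly reported in such grading studies can be computed from the rating matrix as sketched below. This is a generic illustration, not the authors' code, and the toy ratings are hypothetical:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects x n_raters) array of scores.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_subjects = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_raters = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_error = ((x - grand) ** 2).sum() - ss_subjects - ss_raters
    ms_subjects = ss_subjects / (n - 1)          # between-subject mean square
    ms_raters = ss_raters / (k - 1)              # between-rater mean square
    ms_error = ss_error / ((n - 1) * (k - 1))    # residual mean square
    return (ms_subjects - ms_error) / (
        ms_subjects + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

# Two raters in perfect agreement on three eyes -> ICC = 1.0
perfect = icc_2_1([[1, 1], [2, 2], [3, 3]])
```

An ICC near 1 indicates near-perfect agreement; values in the 0.66-0.85 range, as reported above, are conventionally read as good reliability.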
Affiliation(s)
- Dhanach Dhirachaikulpanich
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, United Kingdom
- Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok, Thailand
- St Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
- Savita Madhusudhan
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, United Kingdom
- St Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
- David Parry
- St Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
- Salma Babiker
- St Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
- Yalin Zheng
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, United Kingdom
- St Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
- Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom
- Nicholas A.V. Beare
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, United Kingdom
- St Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, United Kingdom
3. Soni M, Singh NK, Das P, Shabaz M, Shukla PK, Sarkar P, Singh S, Keshta I, Rizwan A. IoT-Based Federated Learning Model for Hypertensive Retinopathy Lesions Classification. IEEE Transactions on Computational Social Systems. 2023;10:1722-1731. [DOI: 10.1109/tcss.2022.3213507]
Affiliation(s)
- Mukesh Soni
- Department of CSE, University Centre for Research and Development, Chandigarh University, Mohali, Punjab, India
- Nikhil Kumar Singh
- Department of Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, Madhya Pradesh, India
- Pranjit Das
- Department of CSE, Koneru Lakshmaiah Education Foundation (K L University), Vaddeswaram, India
- Mohammad Shabaz
- Model Institute of Engineering and Technology, Jammu, Jammu and Kashmir, India
- Piyush Kumar Shukla
- Department of Computer Science and Engineering, University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya (Technological University of Madhya Pradesh), Bhopal, Madhya Pradesh, India
- Partha Sarkar
- Department of Electronics and Communication Engineering, National Institute of Technology Durgapur, Durgapur, West Bengal, India
- Shweta Singh
- Department of Electronics and Communication, IES College of Technology, Bhopal, India
- Ismail Keshta
- Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, Riyadh, Saudi Arabia
- Ali Rizwan
- Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
4. Wilson KJ, Dhalla A, Meng Y, Tu Z, Zheng Y, Mhango P, Seydel KB, Beare NAV. Retinal imaging technologies in cerebral malaria: a systematic review. Malar J. 2023;22:139. [PMID: 37101295; PMCID: PMC10131356; DOI: 10.1186/s12936-023-04566-7]
Abstract
BACKGROUND Cerebral malaria (CM) continues to present a major health challenge, particularly in sub-Saharan Africa. CM is associated with a characteristic malarial retinopathy (MR) with diagnostic and prognostic significance. Advances in retinal imaging have allowed researchers to better characterize the changes seen in MR and to make inferences about the pathophysiology of the disease. This study aimed to explore the role of retinal imaging in diagnosis and prognostication in CM, to establish insights into the pathophysiology of CM from retinal imaging, and to identify future research directions. METHODS The literature was systematically reviewed using the African Index Medicus, MEDLINE, Scopus and Web of Science databases. A total of 35 full texts were included in the final analysis. The descriptive nature of the included studies and their heterogeneity precluded meta-analysis. RESULTS Available research clearly shows retinal imaging is useful both as a clinical tool for the assessment of CM and as a scientific instrument to aid understanding of the condition. Modalities which can be performed at the bedside, such as fundus photography and optical coherence tomography, are best positioned to take advantage of artificial intelligence-assisted image analysis, unlocking the clinical potential of retinal imaging for real-time diagnosis in low-resource environments where extensively trained clinicians may be few in number, and for guiding adjunctive therapies as they develop. CONCLUSIONS Further research into retinal imaging technologies in CM is justified. In particular, coordinated interdisciplinary work shows promise in unpicking the pathophysiology of a complex disease.
Affiliation(s)
- Kyle J Wilson
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
- Malawi-Liverpool-Wellcome Trust, Blantyre, Malawi
- Amit Dhalla
- Department of Ophthalmology, Sheffield Teaching Hospitals, Sheffield, UK
- Yanda Meng
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
- Zhanhan Tu
- School of Psychology and Vision Sciences, College of Life Science, The University of Leicester Ulverscroft Eye Unit, Robert Kilpatrick Clinical Sciences Building, Leicester Royal Infirmary, Leicester, UK
- Yalin Zheng
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
- St. Paul's Eye Unit, Royal Liverpool University Hospitals, Liverpool, UK
- Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart and Chest Hospital, Liverpool, UK
- Priscilla Mhango
- Department of Ophthalmology, Kamuzu University of Health Sciences, Blantyre, Malawi
- Karl B Seydel
- College of Osteopathic Medicine, Michigan State University, East Lansing, MI, USA
- Blantyre Malaria Project, Kamuzu University of Health Sciences, Blantyre, Malawi
- Nicholas A V Beare
- Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
- St. Paul's Eye Unit, Royal Liverpool University Hospitals, Liverpool, UK
5. Zhou X, Tong T, Zhong Z, Fan H, Li Z. Saliency-CCE: Exploiting colour contextual extractor and saliency-based biomedical image segmentation. Comput Biol Med. 2023;154:106551. [PMID: 36716685; DOI: 10.1016/j.compbiomed.2023.106551]
Abstract
Biomedical image segmentation is a critical component of computer-aided diagnosis. However, many non-automatic segmentation methods are designed to segment target objects in a single-task-driven fashion, ignoring the potential contribution of related tasks such as salient object detection (SOD). In this paper, we propose Saliency-CCE, a novel dual-task framework for white blood cell (WBC) and skin lesion (SL) saliency detection and segmentation in biomedical images. Saliency-CCE consists of a hair-removal preprocessing step for skin lesion images, a novel colour contextual extractor (CCE) module for the SOD task, and an improved adaptive threshold (AT) paradigm for the image segmentation task. In the SOD task, the CCE module extracts hand-crafted features through a novel colour channel volume (CCV) block and a novel colour activation mapping (CAM) block. We first exploit the CCV block to generate a region of interest (ROI) for the target object, and then employ the CAM block to refine the extracted ROI into the final salient map. In the segmentation task, a novel adaptive threshold strategy automatically segments the WBC and SL from the final salient map. We evaluate Saliency-CCE on the ISIC-2016, ISIC-2017, and SCISC datasets, where it outperforms representative state-of-the-art SOD and biomedical image segmentation approaches. Our code is available at https://github.com/zxg3017/Saliency-CCE.
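Reduced to its simplest form, the adaptive-threshold idea can be illustrated as below. This is a minimal sketch, assuming a global mean-plus-scaled-standard-deviation rule applied to a salient map; the paper's AT paradigm is more elaborate, and `alpha` is a hypothetical tuning parameter:

```python
import numpy as np

def adaptive_threshold(saliency, alpha=0.5):
    """Binarize a salient map: keep pixels above mean + alpha * std."""
    thr = saliency.mean() + alpha * saliency.std()
    return saliency > thr

# Synthetic salient map with a 3x3 bright object on a dark background.
sal = np.zeros((10, 10))
sal[3:6, 3:6] = 1.0
mask = adaptive_threshold(sal)   # True exactly on the bright object
```

Because the threshold adapts to the statistics of each map, the same rule works across images with different overall brightness.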
Affiliation(s)
- Xiaogen Zhou
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou, P.R. China
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, P.R. China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, P.R. China
- Zhixiong Zhong
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou, P.R. China
- Haoyi Fan
- School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, P.R. China
- Zuoyong Li
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou, P.R. China
6. Kurup AR, Wigdahl J, Benson J, Martínez-Ramón M, Solíz P, Joshi V. Automated malarial retinopathy detection using transfer learning and multi-camera retinal images. Biocybern Biomed Eng. 2023;43:109-123. [PMID: 36685736; PMCID: PMC9851283; DOI: 10.1016/j.bbe.2022.12.003]
Abstract
Cerebral malaria (CM) is a fatal syndrome found commonly in children less than 5 years old in sub-Saharan Africa and Asia. The retinal signs associated with CM are known as malarial retinopathy (MR), and they include highly specific retinal lesions such as whitening and hemorrhages. Detecting these lesions allows the detection of CM with high specificity. Up to 23% of CM patients are over-diagnosed due to the presence of clinical symptoms also found in pneumonia, meningitis, and other conditions; such patients then go untreated for these pathologies, resulting in death or neurological disability. It is therefore essential to have a low-cost, high-specificity diagnostic technique for CM detection, for which we developed a method based on transfer learning (TL). Models pre-trained with TL select the good-quality retinal images, which are fed into another TL model to detect CM. This approach achieves 96% specificity with low-cost retinal cameras.
Affiliation(s)
- Jeff Wigdahl
- VisionQuest Biomedical Inc., Albuquerque, NM, USA
- Peter Solíz
- VisionQuest Biomedical Inc., Albuquerque, NM, USA
7. Ma Z, Feng D, Wang J, Ma H. Retinal OCTA Image Segmentation Based on Global Contrastive Learning. Sensors (Basel). 2022;22:9847. [PMID: 36560216; PMCID: PMC9781437; DOI: 10.3390/s22249847]
Abstract
The automatic segmentation of retinal vessels is of great significance for the analysis and diagnosis of retina-related diseases. However, the imbalanced data in retinal vascular images remain a great challenge. Current image segmentation methods based on deep learning almost always focus on local information in a single image while ignoring the global information of the entire dataset. To solve the problem of data imbalance in optical coherence tomography angiography (OCTA) datasets, this paper proposes a medical image segmentation method (contrastive OCTA segmentation net, COSNet) based on global contrastive learning. First, the feature extraction module extracts the features of the OCTA image input and maps them to the segmentation head and the multilayer perceptron (MLP) head, respectively. Second, a contrastive learning module saves the pixel queue and pixel embedding of each category in the feature map into a memory bank, generates sample pairs through a mixed sampling strategy to construct a new contrastive loss function, and forces the network to learn local and global information simultaneously. Finally, the segmented image is fine-tuned to restore the positional information of deep vessels. The experimental results show that the proposed method improves the accuracy (ACC), the area under the curve (AUC), and other evaluation indexes of image segmentation compared with existing methods. The method can accomplish segmentation on imbalanced data and can be extended to other segmentation tasks.
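As a rough illustration of the contrastive-learning ingredient, an InfoNCE-style loss over one anchor, one positive, and a set of negatives can be written as follows. This is a generic numpy sketch under the usual cosine-similarity formulation, not COSNet's actual loss function; `tau` is the temperature hyperparameter:

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss: pull the positive towards the anchor, push negatives away."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = sims / tau
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive sits at index 0

a = np.array([1.0, 0.0])
loss_good = info_nce(a, np.array([1.0, 0.0]), [np.array([0.0, 1.0])])
loss_bad = info_nce(a, np.array([0.0, 1.0]), [np.array([1.0, 0.0])])
# A well-matched positive yields a much smaller loss than a mismatched one.
```

In a pixel-level scheme like the one described above, the anchor and positive would be embeddings of pixels from the same class (drawn from the memory bank), and the negatives embeddings of pixels from other classes.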
Affiliation(s)
- Ziping Ma
- College of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
- Dongxiu Feng
- College of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
- Jingyu Wang
- College of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
- Hu Ma
- College of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
8. Ma F, Dai C, Meng J, Li Y, Zhao J, Zhang Y, Wang S, Zhang X, Cheng R. Classification-based framework for binarization on mice eye image in vivo with optical coherence tomography. J Biophotonics. 2022;15:e202100336. [PMID: 35305080; DOI: 10.1002/jbio.202100336]
Abstract
Optical coherence tomography (OCT) angiography has drawn much attention in the medical imaging field. Binarization plays an important role in the quantitative analysis of the eye with OCT. To address the problems of few training samples and contrast-limited scenes, we propose a new, open, classification-based binarization framework with a specific-patch SVM (SPSVM) for low-intensity OCT images. The framework has two phases: training and thresholding. In the training phase, patches of target and background are first extracted from the few training samples as the ROI and the background, respectively. PCA is then conducted on all patches to reduce the dimensionality and learn an eigenvector subspace. Finally, a classification model is trained on the patch features to obtain the target value of different patches. In the testing phase, the pixels of each patch are projected onto the learned eigenvector subspace, and the binarization threshold of each patch is obtained with the learned SVM model. We acquire a new OCT mice eye (OCT-ME) database, which is publicly available at https://mip2019.github.io/spsvm. Extensive experiments demonstrate the effectiveness of the proposed SPSVM framework.
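The train-then-classify pipeline can be sketched in broad strokes as follows. This is a self-contained toy, not the authors' SPSVM: the SVM is replaced by a simpler nearest-centroid rule in the PCA subspace, and the bright/dark patch data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_pca(X, n_components=2):
    """Learn a PCA eigenvector subspace from flattened patches (one per row)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(X, mean, comps):
    return (X - mean) @ comps.T

# Toy 4x4 patches: "target" (bright, vessel-like) vs "background" (dark).
target = 0.9 + 0.05 * rng.standard_normal((20, 16))
background = 0.1 + 0.05 * rng.standard_normal((20, 16))
X = np.vstack([target, background])
y = np.array([1] * 20 + [0] * 20)

mean, comps = fit_pca(X)
Z = project(X, mean, comps)
centroids = {c: Z[y == c].mean(axis=0) for c in (0, 1)}  # stand-in for the SVM

def classify(patch):
    """Label a new patch by its nearest class centroid in the PCA subspace."""
    z = project(patch.reshape(1, -1), mean, comps)[0]
    return min(centroids, key=lambda c: np.linalg.norm(z - centroids[c]))

label = classify(0.9 + 0.05 * rng.standard_normal(16))  # a vessel-like patch
```

The point of the projection step is that patch-level decisions become cheap once the eigenvector subspace is learned, which matters when only a handful of annotated training images are available.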
Affiliation(s)
- Fei Ma
- School of Computer Science, Qufu Normal University, Shandong, China
- Cuixia Dai
- College of Science, Shanghai Institute of Technology, Shanghai, China
- Jing Meng
- School of Computer Science, Qufu Normal University, Shandong, China
- Ying Li
- School of Computer Science, Qufu Normal University, Shandong, China
- Jingxiu Zhao
- School of Computer Science, Qufu Normal University, Shandong, China
- Yuanke Zhang
- School of Computer Science, Qufu Normal University, Shandong, China
- Shengbo Wang
- School of Computer Science, Qufu Normal University, Shandong, China
- Xueting Zhang
- School of Computer Science, Qufu Normal University, Shandong, China
- Ronghua Cheng
- School of Computer Science, Qufu Normal University, Shandong, China
9. Atehortúa A, Romero E, Garreau M. Characterization of motion patterns by a spatio-temporal saliency descriptor in cardiac cine MRI. Comput Methods Programs Biomed. 2022;218:106714. [PMID: 35263659; DOI: 10.1016/j.cmpb.2022.106714]
Abstract
BACKGROUND AND OBJECTIVE Abnormalities of heart motion reveal the presence of disease. However, quantitative interpretation of the motion is still a challenge due to the complex dynamics of the heart. This work proposes a quantitative characterization of regional cardiac motion patterns in cine magnetic resonance imaging (MRI) by a novel spatio-temporal saliency descriptor. METHOD The strategy starts by dividing the cardiac sequence into a progression of scales, which are in turn mapped to a feature space of regional orientation changes, mimicking the multi-resolution decomposition of oriented primitive changes in visual systems. These changes are estimated as the difference between a particular time and the rest of the sequence. This decomposition is then integrated temporally and regionally for a particular orientation, and then over the set of orientations. A final spatio-temporal 4D saliency map is obtained as the sum of the integrated information over the available scales. The saliency dispersion of this map was computed at standard cardiac locations as a measure of the regional motion pattern and was applied to discriminate control and hypertrophic cardiomyopathy (HCM) subjects during the diastolic phase. RESULTS Salient motion patterns were estimated from an experimental set of 3D sequences acquired by MRI from 108 subjects (33 control, 35 HCM, 20 dilated cardiomyopathy (DCM), and 20 myocardial infarction (MINF), from heterogeneous datasets). HCM and control subjects were classified by an SVM that learned the salient motion patterns estimated with the presented strategy, achieving a 94% AUC. In addition, statistical differences (Student's t-test, p<0.05) were found among disease groups in the septal and anterior ventricular segments at both end-diastole (ED) and end-systole (ES), with salient motion characteristics aligned with existing knowledge of the diseases. CONCLUSIONS Regional wall motion abnormality in the apical, anterior, basal, and inferior segments was associated with saliency dispersion in HCM, DCM, and MINF compared with healthy controls during the systolic and diastolic phases. This saliency analysis may be used to detect subtle changes in heart function.
Affiliation(s)
- Angélica Atehortúa
- Universidad Nacional de Colombia, Bogotá, Colombia
- Univ Rennes, Inserm, LTSI UMR 1099, Rennes F-35000, France
10. Li W, Fang W, Wang J, He Y, Deng G, Ye H, Hou Z, Chen Y, Jiang C, Shi G. A Weakly Supervised Deep Learning Approach for Leakage Detection in Fluorescein Angiography Images. Transl Vis Sci Technol. 2022;11:9. [PMID: 35262648; PMCID: PMC8934548; DOI: 10.1167/tvst.11.3.9]
Abstract
Purpose The purpose of this study was to design an automated algorithm that can detect fluorescence leakage accurately and quickly without the use of a large amount of labeled data. Methods A weakly supervised learning-based method was proposed to detect fluorescein leakage without the need for manual annotation of leakage areas. To enhance the representation of the network, a residual attention module (RAM) was designed as the core component of the proposed generator. Moreover, class activation maps (CAMs) were used to define a novel anomaly mask loss to facilitate more accurate learning of leakage areas. In addition, sensitivity, specificity, accuracy, area under the curve (AUC), and dice coefficient (DC) were used to evaluate the performance of the methods. Results The proposed method reached a sensitivity of 0.73 ± 0.04, a specificity of 0.97 ± 0.03, an accuracy of 0.95 ± 0.05, an AUC of 0.86 ± 0.04, and a DC of 0.87 ± 0.01 on the HRA data set; a sensitivity of 0.91 ± 0.02, a specificity of 0.97 ± 0.02, an accuracy of 0.96 ± 0.03, an AUC of 0.94 ± 0.02, and a DC of 0.85 ± 0.03 on Zhao's publicly available data set; and a sensitivity of 0.71 ± 0.04, a specificity of 0.99 ± 0.06, an accuracy of 0.87 ± 0.06, an AUC of 0.85 ± 0.02, and a DC of 0.78 ± 0.04 on Rabbani's publicly available data set. Conclusions The experimental results showed that the proposed method achieves better performance on fluorescence leakage detection and can process one image within 1 second, and thus has great potential value for the clinical diagnosis and treatment of retina-related diseases, such as diabetic retinopathy and malarial retinopathy. Translational Relevance The proposed weakly supervised learning-based method that automates the detection of fluorescence leakage can facilitate the assessment of retina-related diseases.
Affiliation(s)
- Wanyue Li
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, People's Republic of China
- Wangyi Fang
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, Shanghai, People's Republic of China
- Key Laboratory of Myopia of State Health Ministry, and Key Laboratory of Visual Impairment and Restoration of Shanghai, Shanghai, People's Republic of China
- Jing Wang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, People's Republic of China
- Yi He
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, People's Republic of China
- Guohua Deng
- Department of Ophthalmology, the Third People's Hospital of Changzhou, Changzhou, People's Republic of China
- Hong Ye
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, People's Republic of China
- Zujun Hou
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, People's Republic of China
- Yiwei Chen
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, People's Republic of China
- Chunhui Jiang
- Department of Ophthalmology and Vision Science, Eye and ENT Hospital, Fudan University, Shanghai, People's Republic of China
- Key Laboratory of Myopia of State Health Ministry, and Key Laboratory of Visual Impairment and Restoration of Shanghai, Shanghai, People's Republic of China
- Guohua Shi
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, People's Republic of China
- Jiangsu Key Laboratory of Medical Optics, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, People's Republic of China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, People's Republic of China
11. Uncertainty-guided graph attention network for parapneumonic effusion diagnosis. Med Image Anal. 2021;75:102217. [PMID: 34775280; DOI: 10.1016/j.media.2021.102217]
Abstract
Parapneumonic effusion (PPE) is a common condition that causes death in patients hospitalized with pneumonia. Rapid distinction of complicated PPE (CPPE) from uncomplicated PPE (UPPE) in computed tomography (CT) scans is of great importance for the management and medical treatment of PPE. However, UPPE and CPPE display similar appearances in CT scans, and it is challenging to distinguish CPPE from UPPE from a single 2D CT image, whether by a human expert or by any of the existing disease classification approaches. 3D convolutional neural networks (CNNs) can utilize the entire 3D volume for classification; however, they typically suffer from the intrinsic defect of over-fitting. It is therefore important to develop a method that not only avoids the heavy memory and computational requirements of 3D CNNs but also leverages the 3D information. In this paper, we propose an uncertainty-guided graph attention network (UG-GAT) that can automatically extract and integrate information from all CT slices in a 3D volume for classification into UPPE, CPPE, and normal control cases. Specifically, we frame the distinction of different cases as a graph classification problem. Each individual is represented as a directed graph with a topological structure, where vertices represent the image features of slices and edges encode the spatial relationships between them. To estimate the contribution of each slice, we first extract the slice representations with uncertainty using a Bayesian CNN; we then use the uncertainty information to weight each slice during the graph prediction phase, enabling more reliable decision-making. We construct a dataset consisting of 302 chest CT volumes from different subjects (99 UPPE, 99 CPPE, and 104 normal control cases); to the best of our knowledge, this is the first attempt to classify UPPE, CPPE, and normal cases using a deep learning method. Extensive experiments show that our approach is lightweight in its demands and outperforms accepted state-of-the-art methods by a large margin. Code is available at https://github.com/iMED-Lab/UG-GAT.
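The uncertainty-guided slice weighting described in the abstract can be sketched in a few lines: each slice embedding is down-weighted by its predictive uncertainty before the volume-level aggregation. This is a minimal NumPy illustration only; the function name and the exponential weighting are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def uncertainty_weighted_pool(slice_feats, uncertainties):
    """Aggregate per-slice features into one volume-level representation,
    down-weighting slices with high predictive uncertainty."""
    w = np.exp(-np.asarray(uncertainties, dtype=float))
    w = w / w.sum()  # normalized attention weights over slices
    pooled = (w[:, None] * np.asarray(slice_feats, dtype=float)).sum(axis=0)
    return w, pooled
```

A confident slice (low uncertainty) then dominates the pooled representation used for the final graph-level prediction.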
|
12
|
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2021; 26:2204-2215. [PMID: 34727041 DOI: 10.1109/jbhi.2021.3124514] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Precise quantification of tree-like structures in biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important for understanding normal function and pathologic processes in biology. Several handcrafted methods have been proposed for this purpose in recent years, but each is designed for only a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can be applied to many different applications, based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. The MFRS is then constructed by extending each ray of the RS into multiple parallel rays, from which a set of feature sequences is extracted. A Gaussian kernel fuses these feature sequences into a single feature sequence. Furthermore, we design a DC-TCN that makes the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications for tree-like structures. Experiments on three different applications, namely soma shape reconstruction, neuronal shape reconstruction, and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
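The core idea of rayburst sampling — casting multidirectional rays from a seed point and recording where each ray leaves the structure — can be sketched as follows. This is a simplified nearest-pixel 2D version for illustration only (the actual method uses sub-voxel interpolation and the learned DC-TCN termination criterion, neither of which is reproduced here):

```python
import numpy as np

def rayburst_radii(img, center, n_rays=36, max_len=50, thresh=0.5):
    """Cast n_rays from `center` and return, per ray, the last step that
    was still inside the structure (intensity >= thresh)."""
    cy, cx = center
    radii = []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        r = 0
        for step in range(1, max_len):
            y = int(round(cy + step * np.sin(theta)))
            x = int(round(cx + step * np.cos(theta)))
            inside = 0 <= y < img.shape[0] and 0 <= x < img.shape[1]
            if not inside or img[y, x] < thresh:
                break
            r = step
        radii.append(r)
    return np.array(radii)
```

The distribution of the terminated radii then describes the local shape, e.g. the caliber of a vessel cross-section or the extent of a soma.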
|
13
|
Niu Y, Gu L, Zhao Y, Lu F. Explainable Diabetic Retinopathy Detection and Retinal Image Generation. IEEE J Biomed Health Inform 2021; 26:44-55. [PMID: 34495852 DOI: 10.1109/jbhi.2021.3110593] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Though deep learning has shown successful performance in classifying the label and severity stage of certain diseases, most such models offer little explanation of how they make their predictions. Inspired by Koch's Postulates, the foundation in evidence-based medicine (EBM) for identifying a pathogen, we propose to exploit the interpretability of deep learning applications in medical diagnosis. By isolating neuron activation patterns from a diabetic retinopathy (DR) detector and visualizing them, we can determine the symptoms that the DR detector identifies as evidence for its predictions. Specifically, we first define novel pathological descriptors using activated neurons of the DR detector to encode both the spatial and appearance information of lesions. Then, to visualize the symptoms encoded in a descriptor, we propose Patho-GAN, a new network that synthesizes medically plausible retinal images. By manipulating these descriptors, we can even arbitrarily control the position, quantity, and categories of generated lesions. We also show that our synthesized images carry symptoms directly related to diabetic retinopathy diagnosis. Our generated images are both qualitatively and quantitatively superior to those produced by previous methods. Moreover, whereas existing methods take hours to generate an image, our method needs only seconds, giving it the potential to be an effective solution for data augmentation.
|
14
|
Xu J, Shen J, Jiang Q, Wan C, Yan Z, Yang W. Research on the Segmentation of Biomarker for Chronic Central Serous Chorioretinopathy Based on Multimodal Fundus Image. DISEASE MARKERS 2021; 2021:1040675. [PMID: 34527086 PMCID: PMC8437641 DOI: 10.1155/2021/1040675] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Accepted: 07/20/2021] [Indexed: 11/18/2022]
Abstract
At present, laser surgery is one of the effective ways to treat chronic central serous chorioretinopathy (CSCR), in which locating the leakage area is of great importance. To relieve ophthalmologists of manually labeling the biomarkers and to improve biomarker segmentation quality, a semiautomatic biomarker segmentation method is proposed in this paper, aiming to facilitate the accurate and rapid acquisition of biomarker location information. Firstly, multimodal fundus images are introduced into the biomarker segmentation task, which effectively weakens the interference of highlighted vessels in the angiography images with the localization of biomarkers. Secondly, a semiautomatic localization technique is adopted to reduce the search range of biomarkers, thereby improving segmentation efficiency. On this basis, low-rank and sparse decomposition (LRSD) theory is introduced to construct a baseline scheme for segmenting the CSCR biomarkers. Moreover, a joint segmentation framework consisting of the above method and a region growing (RG) method is further designed to improve the performance of the baseline scheme. On the one hand, the LRSD offers initial biomarker location information to the RG method, ensuring that RG can capture effective biomarkers. On the other hand, the biomarkers obtained by RG are fused with those obtained by LRSD to compensate for the undersegmentation of the baseline scheme. Finally, quantitative and qualitative ablation experiments demonstrate that the joint segmentation framework performs better than the baseline scheme in most cases, especially on the sensitivity and F1-score indicators, which not only confirms the effectiveness of the framework for CSCR biomarker segmentation but also implies its potential application value in CSCR laser surgery.
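The low-rank and sparse decomposition idea — treating the regular background as a low-rank component and localized biomarkers as a sparse residual — can be illustrated with a crude one-pass approximation (truncated SVD plus soft-thresholding). This is not the iterative RPCA-style optimization an LRSD method would actually solve; it only shows how a sparse anomaly separates from a low-rank background:

```python
import numpy as np

def lrsd_one_pass(M, rank=2, lam=0.5):
    """Crude low-rank + sparse split: truncated SVD for the background,
    soft-thresholded residual as the sparse (biomarker-like) part."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # low-rank background
    R = M - L
    S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # sparse residual
    return L, S
```

On a near-rank-one image with one injected bright spot, the largest entry of `S` lands on the spot, which is the behavior the joint LRSD + region-growing pipeline exploits for initialization.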
Affiliation(s)
- Jianguo Xu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Jianxin Shen
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing 210029, China
- Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Zhipeng Yan
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing 210029, China
- Weihua Yang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing 210029, China
|
15
|
Dong J, Ai D, Fan J, Deng Q, Song H, Cheng Z, Liang P, Wang Y, Yang J. Local-global active contour model based on tensor-based representation for 3D ultrasound vessel segmentation. Phys Med Biol 2021; 66. [PMID: 33910173 DOI: 10.1088/1361-6560/abfc92] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 04/28/2021] [Indexed: 11/11/2022]
Abstract
Three-dimensional (3D) vessel segmentation can provide full spatial information about an anatomic structure, helping physicians gain a better understanding of vascular structures, and it plays a crucial role in many medical image-processing and analysis applications. This paper aims to develop a 3D vessel-segmentation method that improves segmentation accuracy in 3D ultrasound (US) images. We propose a 3D tensor-based active contour model for accurate 3D vessel segmentation. Our method captures a contrast-independent multiscale bottom-hat tensor representation together with local-global information. This strategy ensures the effective extraction of vessel boundaries from both inhomogeneous and homogeneous regions without being affected by the noise and low contrast of 3D US images. Experimental results on clinical 3D US and public 3D multiphoton microscopy datasets are used for quantitative and qualitative comparison with several state-of-the-art vessel segmentation methods. Clinical experiments demonstrate that our method achieves a smoother and more accurate vessel boundary than competing methods. The mean SE, SP, and ACC of the proposed method are 0.7768 ± 0.0597, 0.9978 ± 0.0013, and 0.9971 ± 0.0015, respectively. Experiments on the public dataset show that our method can segment complex vessels in medical images with noise and low contrast.
Affiliation(s)
- Jiahui Dong
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Qiaoling Deng
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Zhigang Cheng
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Ping Liang
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Yongtian Wang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
|
16
|
Ma Y, Hao H, Xie J, Fu H, Zhang J, Yang J, Wang Z, Liu J, Zheng Y, Zhao Y. ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:928-939. [PMID: 33284751 DOI: 10.1109/tmi.2020.3042802] [Citation(s) in RCA: 95] [Impact Index Per Article: 23.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Optical Coherence Tomography Angiography (OCTA) is a non-invasive imaging technique that has been increasingly used to image the retinal vasculature at capillary-level resolution. However, automated segmentation of retinal vessels in OCTA has been under-studied due to various challenges such as low capillary visibility and high vessel complexity, despite its significance in understanding many vision-related diseases. In addition, there is no publicly available OCTA dataset with manually graded vessels for training and validating segmentation algorithms. To address these issues, for the first time in the field of retinal image analysis we construct a dedicated Retinal OCTA SEgmentation dataset (ROSE), which consists of 229 OCTA images with vessel annotations at either centerline level or pixel level. This dataset, along with the source code, has been released for public access to assist researchers in the community in undertaking research on related topics. Secondly, we introduce a novel split-based coarse-to-fine vessel segmentation network for OCTA images (OCTA-Net), with the ability to detect thick and thin vessels separately. In OCTA-Net, a split-based coarse segmentation module first produces a preliminary confidence map of vessels, and a split-based refined segmentation module then optimizes the shape/contour of the retinal microvasculature. We perform a thorough evaluation of state-of-the-art vessel segmentation models and our OCTA-Net on the constructed ROSE dataset. The experimental results demonstrate that OCTA-Net yields better vessel segmentation performance in OCTA than both traditional and other deep learning methods. In addition, we provide a fractal dimension analysis of the segmented microvasculature, and the statistical analysis demonstrates significant differences between the healthy control and Alzheimer's disease groups. This supports the view that analysis of the retinal microvasculature may offer a new means of studying various neurodegenerative diseases.
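The coarse-to-fine pattern behind this kind of two-stage segmenter can be reduced to a few lines: keep confident coarse predictions and re-examine only the ambiguous confidence band with a second-stage refiner. This sketch is our own simplification of the general pattern, not OCTA-Net's split-based modules; the band thresholds and `refine_fn` are placeholders:

```python
import numpy as np

def coarse_to_fine(coarse_prob, refine_fn, low=0.3, high=0.7):
    """Two-stage segmentation: accept confident coarse predictions and
    re-decide only the ambiguous band using a refinement function."""
    seg = (coarse_prob >= 0.5).astype(float)
    band = (coarse_prob > low) & (coarse_prob < high)  # uncertain pixels
    if band.any():
        seg[band] = (refine_fn(coarse_prob)[band] >= 0.5).astype(float)
    return seg
```

In a real network the refiner is a second module conditioned on image features; here it is any callable returning a refined probability map.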
|
17
|
Unsupervised microstructure segmentation by mimicking metallurgists' approach to pattern recognition. Sci Rep 2020; 10:17835. [PMID: 33082434 PMCID: PMC7575545 DOI: 10.1038/s41598-020-74935-8] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Accepted: 10/08/2020] [Indexed: 01/10/2023] Open
Abstract
An efficient deep learning method is presented for distinguishing microstructures of a low-carbon steel. There have been numerous endeavors to reproduce the human capability of perceptually classifying different textures using machine learning methods, but this remains very challenging owing to the need for a vast labeled image dataset. In this study, we introduce an unsupervised machine learning technique based on convolutional neural networks and a superpixel algorithm for segmenting a low-carbon steel microstructure without the need for labeled images. The effectiveness of the method is demonstrated with optical microscopy images of steel microstructures with different patterns, taken at different resolutions. In addition, several evaluation criteria for unsupervised segmentation results are investigated, along with hyperparameter optimization.
|
18
|
Zhao Y, Zhang J, Pereira E, Zheng Y, Su P, Xie J, Zhao Y, Shi Y, Qi H, Liu J, Liu Y. Automated Tortuosity Analysis of Nerve Fibers in Corneal Confocal Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2725-2737. [PMID: 32078542 DOI: 10.1109/tmi.2020.2974499] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Precise characterization and analysis of corneal nerve fiber tortuosity are of great importance in facilitating the examination and diagnosis of many eye-related diseases. In this paper, we propose a fully automated method for image-level tortuosity estimation, comprising image enhancement, exponential curvature estimation, and tortuosity level classification. The image enhancement component is based on an extended Retinex model, which not only corrects imbalanced illumination and improves image contrast, but also models noise explicitly to aid the removal of imaging noise. We then take advantage of exponential curvature estimation in the 3D space of positions and orientations to measure curvature directly from the enhanced images, rather than relying on the explicit segmentation and skeletonization steps of a conventional pipeline, which usually accumulate pre-processing errors. The proposed method has been applied to two corneal nerve microscopy datasets to estimate a tortuosity level for each image. The experimental results show that it performs better than several selected state-of-the-art methods. Furthermore, we have manually graded the tortuosity level of 403 corneal nerve microscopy images, and this dataset has been released for public access to facilitate further research by the community on this and related topics.
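For readers unfamiliar with tortuosity measures, the simplest classical definition is the arc-length-to-chord-length ratio of a fiber centerline. Note this is a much simpler measure than the exponential curvature estimation the paper proposes; it is shown only to make the quantity being classified concrete:

```python
import numpy as np

def tortuosity(points):
    """Arc-length over chord-length tortuosity of a sampled 2D curve:
    exactly 1.0 for a straight segment, larger for more tortuous fibers."""
    pts = np.asarray(points, dtype=float)
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()   # path length
    chord = np.linalg.norm(pts[-1] - pts[0])                   # endpoint distance
    return arc / chord
```

A straight nerve segment scores 1.0, while a sinusoidal one scores noticeably higher, which is the kind of signal a tortuosity-level classifier discretizes into grades.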
|
19
|
Mao X, Zhao Y, Chen B, Ma Y, Gu Z, Gu S, Yang J, Cheng J, Liu J. Deep Learning with Skip Connection Attention for Choroid Layer Segmentation in OCT Images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:1641-1645. [PMID: 33018310 DOI: 10.1109/embc44109.2020.9175631] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Since the thickness and shape of the choroid layer are indicators for the diagnosis of several ophthalmic diseases, choroid layer segmentation is an important task, and one that presents many challenges. In this paper, to address the lack of context information caused by ambiguous boundaries, and the resulting inconsistent predictions for targets of the same category, a novel Skip Connection Attention (SCA) module integrated into a U-shaped architecture is proposed to improve the precision of choroid layer segmentation in Optical Coherence Tomography (OCT) images. The main function of the SCA module is to capture the global context at the highest level so as to provide the decoder with stage-by-stage guidance, extracting more context information and generating more consistent predictions for same-class targets. By integrating the SCA module into U-Net and CE-Net, we show that it improves the accuracy of choroid layer segmentation.
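The abstract does not spell out the SCA module's internals, but the generic idea of gating encoder skip features with a global-context attention signal can be sketched as follows. All names, the channel-wise sigmoid gate, and the single weight matrix `W` are our assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def skip_connection_attention(skip_feat, context_vec, W):
    """Gate encoder skip features (C, H, W) with a channel attention
    vector derived from the deepest (global-context) features."""
    gate = sigmoid(W @ context_vec)       # one gate value per channel
    return skip_feat * gate[:, None, None]
```

Channels the global context deems relevant pass through nearly unchanged, while irrelevant channels are suppressed before the decoder fuses the skip connection.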
|
20
|
Fu H, Xu Y, Lin S, Wong DWK, Baskaran M, Mahesh M, Aung T, Liu J. Angle-Closure Detection in Anterior Segment OCT Based on Multilevel Deep Network. IEEE TRANSACTIONS ON CYBERNETICS 2020; 50:3358-3366. [PMID: 30794201 DOI: 10.1109/tcyb.2019.2897162] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Irreversible visual impairment is often caused by primary angle-closure glaucoma, which could be detected via anterior segment optical coherence tomography (AS-OCT). In this paper, an automated system based on deep learning is presented for angle-closure detection in AS-OCT images. Our system learns a discriminative representation from training data that captures subtle visual cues not modeled by handcrafted features. A multilevel deep network is proposed to formulate this learning, which utilizes three particular AS-OCT regions based on clinical priors: 1) the global anterior segment structure; 2) local iris region; and 3) anterior chamber angle (ACA) patch. In our method, a sliding window-based detector is designed to localize the ACA region, which addresses ACA detection as a regression task. Then, three parallel subnetworks are applied to extract AS-OCT representations for the global image and at clinically relevant local regions. Finally, the extracted deep features of these subnetworks are concatenated into one fully connected layer to predict the angle-closure detection result. In the experiments, our system is shown to surpass previous detection methods and other deep learning systems on two clinical AS-OCT datasets.
|
21
|
Yan Q, Chen B, Hu Y, Cheng J, Gong Y, Yang J, Liu J, Zhao Y. Speckle reduction of OCT via super resolution reconstruction and its application on retinal layer segmentation. Artif Intell Med 2020; 106:101871. [DOI: 10.1016/j.artmed.2020.101871] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Revised: 02/17/2020] [Accepted: 05/02/2020] [Indexed: 10/24/2022]
|
22
|
Zhao Y, Xie J, Zhang H, Zheng Y, Zhao Y, Qi H, Zhao Y, Su P, Liu J, Liu Y. Retinal Vascular Network Topology Reconstruction and Artery/Vein Classification via Dominant Set Clustering. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:341-356. [PMID: 31283498 DOI: 10.1109/tmi.2019.2926492] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
The estimation of vascular network topology in complex networks is important in understanding the relationship between vascular changes and a wide spectrum of diseases. Automatic classification of the retinal vascular trees into arteries and veins is of direct assistance to the ophthalmologist in terms of diagnosis and treatment of eye disease. However, it is challenging due to their projective ambiguity and subtle changes in appearance, contrast, and geometry in the imaging process. In this paper, we propose a novel method that is capable of making the artery/vein (A/V) distinction in retinal color fundus images based on vascular network topological properties. To this end, we adapt the concept of dominant set clustering and formalize the retinal blood vessel topology estimation and the A/V classification as a pairwise clustering problem. The graph is constructed through image segmentation, skeletonization, and identification of significant nodes. The edge weight is defined as the inverse Euclidean distance between its two end points in the feature space of intensity, orientation, curvature, diameter, and entropy. The reconstructed vascular network is classified into arteries and veins based on their intensity and morphology. The proposed approach has been applied to five public databases, namely INSPIRE, IOSTAR, VICAVR, DRIVE, and WIDE, and achieved high accuracies of 95.1%, 94.2%, 93.8%, 91.1%, and 91.0%, respectively. Furthermore, we have made manual annotations of the blood vessel topologies for INSPIRE, IOSTAR, VICAVR, and DRIVE datasets, and these annotations are released for public access so as to facilitate researchers in the community.
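The pairwise-clustering step above rests on the classic dominant-set machinery: replicator dynamics run on a node affinity matrix concentrate the cluster-membership vector on a mutually similar group. Below is a minimal NumPy sketch with a toy affinity matrix of our own construction (the paper's actual edge weights come from intensity, orientation, curvature, diameter, and entropy features):

```python
import numpy as np

def dominant_set(A, iters=200):
    """Extract a dominant set from affinity matrix A (zero diagonal)
    via discrete replicator dynamics: x <- x * (A x) / (x' A x)."""
    x = np.full(A.shape[0], 1.0 / A.shape[0])  # start uniform on the simplex
    for _ in range(iters):
        Ax = A @ x
        x = x * Ax / (x @ Ax)
    return x
```

The returned vector stays on the probability simplex, and its support identifies the tightest cluster; peeling off that cluster and re-running yields the next one, which is how a vessel graph can be partitioned into coherent sub-trees.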
|
23
|
Shang Q, Zhao Y, Chen Z, Hao H, Li F, Zhang X, Liu J. Automated Iris Segmentation from Anterior Segment OCT Images with Occludable Angles via Local Phase Tensor. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:4745-4749. [PMID: 31946922 DOI: 10.1109/embc.2019.8857336] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Morphological changes in the iris are one of the major causes of angle-closure glaucoma, and an anteriorly-bowed iris may be further associated with a greater risk of disease progression from primary angle-closure suspect (PACS) to chronic primary angle-closure glaucoma (CPACG). Consequently, the automated detection of abnormalities in the iris region is of great importance in the management of glaucoma. In this paper, we present a new method for extracting the iris region using a local phase tensor-based curvilinear structure enhancement method, and apply it to anterior segment optical coherence tomography (AS-OCT) imagery in the presence of an occludable iridocorneal angle. The proposed method is evaluated on a dataset of 200 anterior chamber angle (ACA) images, and the experimental results show that it outperforms existing state-of-the-art methods in applicability, effectiveness, and accuracy.
|
24
|
Zhao R, Zhao Y, Chen Z, Zhao Y, Yang J, Hu Y, Cheng J, Liu J. Speckle Reduction in Optical Coherence Tomography via Super-Resolution Reconstruction. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:5589-5592. [PMID: 31947122 DOI: 10.1109/embc.2019.8856445] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Reducing speckle noise in optical coherence tomography (OCT) images of the human retina is a fundamental step toward better visualization and analysis in retinal imaging, and thus supports the examination, diagnosis, and treatment of many eye diseases. In this study, we propose a new method for speckle reduction in OCT images using super-resolution technology. It merges multiple images of the same scene acquired with sub-pixel movements and restores the missing signal within each pixel, which significantly improves image quality. The proposed method is evaluated on a dataset of 20 OCT volumes (5120 images) using the mean squared error, peak signal-to-noise ratio, and mean structural similarity index, with high-quality line-scan images as reference. The experimental results show that the proposed method outperforms existing state-of-the-art approaches in applicability, effectiveness, and accuracy.
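The noise-reduction principle behind merging multiple shifted acquisitions is simple shift-and-add: once frames are registered, averaging N of them suppresses uncorrelated speckle by roughly 1/sqrt(N). The sketch below uses integer shifts only; a true super-resolution reconstruction would additionally place the sub-pixel-shifted samples onto a finer grid, which is not reproduced here:

```python
import numpy as np

def shift_and_add(frames, shifts):
    """Align frames by their known (integer) shifts and average them;
    averaging N registered frames reduces uncorrelated noise ~1/sqrt(N)."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)
```

With eight registered frames, the residual noise standard deviation drops to roughly a third of a single frame's, which is the effect the paper quantifies with PSNR and SSIM against line-scan references.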
|
25
|
Yan Q, Zhao Y, Zheng Y, Liu Y, Zhou K, Frangi A, Liu J. Automated retinal lesion detection via image saliency analysis. Med Phys 2019; 46:4531-4544. [PMID: 31381173 DOI: 10.1002/mp.13746] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 07/11/2019] [Accepted: 07/22/2019] [Indexed: 01/02/2023] Open
Abstract
BACKGROUND AND OBJECTIVE The detection of abnormalities such as lesions or leakage in retinal images is an important health informatics task for the automated early diagnosis of diabetic and malarial retinopathy and other eye diseases, in order to prevent blindness and common systemic conditions. In this work, we propose a novel retinal lesion detection method by adapting concepts of saliency. METHODS Retinal images are first segmented into superpixels, and two new saliency feature representations, uniqueness and compactness, are then derived to represent the superpixels. The pixel-level saliency is then estimated from these superpixel saliency values via a bilateral filter. The extracted saliency features form a matrix for low-rank analysis to achieve saliency detection. The precise contour of a lesion is finally extracted from the generated saliency map after removing confounding structures such as blood vessels, the optic disk, and the fovea. The main novelty of this method is that it is an effective tool for detecting different abnormalities at the pixel level in different modalities of retinal images, without the need to tune parameters. RESULTS To evaluate its effectiveness, we have applied our method to seven public datasets of diabetic and malarial retinopathy with four different types of lesions: exudate, hemorrhage, microaneurysms, and leakage. The evaluation was undertaken at the pixel, lesion, or image level according to ground truth availability in these datasets. CONCLUSIONS The experimental results show that the proposed method outperforms existing state-of-the-art ones in applicability, effectiveness, and accuracy.
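The uniqueness feature in the METHODS paragraph can be made concrete: a superpixel is "unique" when its appearance differs from everyone else's, so summed feature-space distance is a natural score. This is a minimal sketch of that idea in NumPy; the exact uniqueness formula (e.g. any spatial weighting) in the paper may differ:

```python
import numpy as np

def uniqueness(features):
    """Uniqueness of each superpixel: summed feature-space distance to
    all other superpixels, so rare (lesion-like) regions score highest."""
    F = np.asarray(features, dtype=float)
    diff = F[:, None, :] - F[None, :, :]      # pairwise feature differences
    return np.linalg.norm(diff, axis=-1).sum(axis=1)
```

A lesion superpixel whose color/texture vector sits far from the retinal background cluster receives the highest uniqueness score, seeding the subsequent low-rank saliency analysis.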
Affiliation(s)
- Qifeng Yan
- University of Chinese Academy of Sciences, Beijing, 100049, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China
- Yalin Zheng
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; Department of Eye and Vision Science, University of Liverpool, Liverpool, L7 8TX, UK
- Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, L39 4QP, UK
- Kang Zhou
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Alejandro Frangi
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; School of Computing, University of Leeds, Leeds, S2 9JT, UK
- Jiang Liu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
|
26
|
Shamsudeen FM, Raju G. An objective function based technique for devignetting fundus imagery using MST. INFORMATICS IN MEDICINE UNLOCKED 2019. [DOI: 10.1016/j.imu.2018.10.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022] Open
|
27
|
Na T, Xie J, Zhao Y, Zhao Y, Liu Y, Wang Y, Liu J. Retinal vascular segmentation using superpixel-based line operator and its application to vascular topology estimation. Med Phys 2018; 45:3132-3146. [PMID: 29744887 DOI: 10.1002/mp.12953] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2018] [Revised: 03/28/2018] [Accepted: 04/22/2018] [Indexed: 02/03/2023] Open
Abstract
PURPOSE Automatic methods for analyzing retinal vascular networks, such as retinal blood vessel detection, vascular network topology estimation, and artery/vein classification, are of great assistance to the ophthalmologist in the diagnosis and treatment of a wide spectrum of diseases. METHODS We propose a new framework for precisely segmenting retinal vasculature, constructing the retinal vascular network topology, and separating arteries from veins. A nonlocal total variation-inspired Retinex model is employed to correct image intensity inhomogeneities and relatively poor contrast. For better generalizability and segmentation performance, a superpixel-based line operator is proposed to distinguish lines from edges, thus allowing more tolerance in the position of the respective contours. The concept of dominant sets clustering is adopted to estimate retinal vessel topology and classify the vessel network into arteries and veins. RESULTS The proposed segmentation method yields competitive results on three public datasets (STARE, DRIVE, and IOSTAR), and it has superior performance when compared with unsupervised segmentation methods, with accuracies of 0.954, 0.957, and 0.964, respectively. The topology estimation approach has been applied to five public databases (DRIVE, STARE, INSPIRE, IOSTAR, and VICAVR) and achieved high accuracies of 0.830, 0.910, 0.915, 0.928, and 0.889, respectively. The accuracies of artery/vein classification based on the estimated vascular topology on three public databases (INSPIRE, DRIVE, and VICAVR) are 0.90.9, 0.910, and 0.907, respectively. CONCLUSIONS The experimental results show that the proposed framework effectively addresses the crossover problem, a bottleneck issue in segmentation and vascular topology reconstruction. The vascular topology information significantly improves the accuracy of artery/vein classification.
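A basic (non-superpixel) line operator, the building block this paper extends, scores a pixel by the strongest oriented line average through it minus the local window average: bright elongated structures respond strongly, isolated edges weakly. The sketch below is the classical pixel-wise version, not the paper's superpixel-based variant:

```python
import numpy as np

def line_operator_response(img, y, x, length=7, n_orient=8):
    """Basic line-operator response at (y, x): the strongest oriented
    line average through the pixel minus the local window average."""
    half = length // 2
    win = img[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
    best = -np.inf
    for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
        vals = []
        for t in range(-half, half + 1):
            yy = int(round(y + t * np.sin(theta)))
            xx = int(round(x + t * np.cos(theta)))
            if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1]:
                vals.append(img[yy, xx])
        best = max(best, np.mean(vals))
    return best - win.mean()
```

On a pixel lying on a thin bright vessel, one orientation aligns with the vessel and its line average far exceeds the window mean, giving a large positive response; on flat background the response is near zero.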
Collapse
Affiliation(s)
- Tong Na
- Georgetown Preparatory School, North Bethesda, 20852, USA; Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Jianyang Xie
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Yifan Zhao
- School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, MK43 0AL, UK
| | - Yue Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
| | - Jiang Liu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, 315201, China
| |
Collapse
|
28
|
Yang G, Zhuang X, Khan H, Haldar S, Nyktari E, Li L, Wage R, Ye X, Slabaugh G, Mohiaddin R, Wong T, Keegan J, Firmin D. Fully automatic segmentation and objective assessment of atrial scars for long-standing persistent atrial fibrillation patients using late gadolinium-enhanced MRI. Med Phys 2018; 45:1562-1576. [PMID: 29480931 PMCID: PMC5969251 DOI: 10.1002/mp.12832] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Revised: 02/01/2018] [Accepted: 02/17/2018] [Indexed: 01/18/2023] Open
Abstract
PURPOSE Atrial fibrillation (AF) is the most common heart rhythm disorder and causes considerable morbidity and mortality, resulting in a large public health burden that is increasing as the population ages. It is associated with atrial fibrosis, the amount and distribution of which can be used to stratify patients and to guide subsequent electrophysiology ablation treatment. Atrial fibrosis may be assessed noninvasively using late gadolinium-enhanced (LGE) magnetic resonance imaging (MRI), where scar tissue is visualized as a region of signal enhancement. However, manual segmentation of the heart chambers and of the atrial scar tissue is time consuming and subject to interoperator variability, particularly as image quality in AF is often poor. In this study, we propose a novel fully automatic pipeline to achieve accurate and objective segmentation of the heart (from MRI Roadmap data) and of scar tissue within the heart (from LGE MRI data) acquired in patients with AF. METHODS Our fully automatic pipeline uniquely combines: (a) a multiatlas-based whole heart segmentation (MA-WHS) to determine the cardiac anatomy from an MRI Roadmap acquisition, which is then mapped to LGE MRI, and (b) a superpixel and supervised learning based approach to delineate the distribution and extent of atrial scarring in LGE MRI. We compared the accuracy of the automatic analysis to manual ground truth segmentations in 37 patients with long-standing persistent AF. RESULTS Our MA-WHS and atrial scarring segmentations showed accurate delineation of cardiac anatomy (mean Dice = 89%) and atrial scarring (mean Dice = 79%), respectively, compared to the established ground truth from manual segmentation. In addition, compared to the ground truth, we obtained 88% segmentation accuracy, with 90% sensitivity and 79% specificity. Receiver operating characteristic analysis achieved an average area under the curve of 0.91.
CONCLUSION Compared with previously studied methods that require manual intervention, our innovative pipeline demonstrated comparable results but was computed fully automatically. The proposed segmentation methods allow LGE MRI to be used as an objective assessment tool for localization, visualization, and quantitation of atrial scarring and to guide ablation treatment.
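The Dice, sensitivity, and specificity figures quoted above are standard overlap metrics for binary segmentation masks. A minimal generic sketch of how such scores are computed (not the authors' evaluation code):

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice coefficient, sensitivity, and specificity for two binary
    masks of equal shape, computed from the confusion-matrix counts."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```

Dice rewards overlap relative to the combined foreground size, which is why it is reported separately for the large heart chambers (89%) and the much smaller scar regions (79%).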
Collapse
Affiliation(s)
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
| | - Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai 201203, China
| | - Habib Khan
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
| | - Shouvik Haldar
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
| | - Eva Nyktari
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
| | - Lei Li
- Department of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Ricardo Wage
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
| | - Xujiong Ye
- School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
| | - Greg Slabaugh
- Department of Computer Science, City University London, London EC1V 0HB, UK
| | - Raad Mohiaddin
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
| | - Tom Wong
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
| | - Jennifer Keegan
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
| | - David Firmin
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
| |
Collapse
|
29
|
Haleem MS, Han L, Hemert JV, Li B, Fleming A, Pasquale LR, Song BJ. A Novel Adaptive Deformable Model for Automated Optic Disc and Cup Segmentation to Aid Glaucoma Diagnosis. J Med Syst 2017; 42:20. [PMID: 29218460 PMCID: PMC5719827 DOI: 10.1007/s10916-017-0859-4] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2017] [Accepted: 11/07/2017] [Indexed: 11/26/2022]
Abstract
This paper proposes a novel Adaptive Region-based Edge Smoothing Model (ARESM) for automatic boundary detection of the optic disc and cup to aid automatic glaucoma diagnosis. The novelty of our approach consists of two aspects: 1) automatic detection of the initial optimum object boundary based on a Region Classification Model (RCM) in a pixel-level multidimensional feature space; 2) an Adaptive Edge Smoothing Update model (AESU) of contour points (e.g. misclassified or irregular points) based on iterative force field calculations with contours obtained from the RCM by minimising an energy function (an approach that does not require predefined geometric templates to guide auto-segmentation). Such an approach provides robustness in capturing a range of variations and shapes. We have conducted a comprehensive comparison between our approach and state-of-the-art deformable models and validated it with publicly available datasets. The experimental evaluation shows that the proposed approach significantly outperforms existing methods. The generality of the proposed approach will enable segmentation and detection of other object boundaries and provide added value in the field of medical image processing and analysis.
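Optic disc and cup boundaries of the kind ARESM produces feed the standard vertical cup-to-disc ratio (CDR) used in glaucoma assessment. A minimal sketch of that downstream measurement from binary masks (illustrative only; it is not part of the ARESM model itself):

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: the vertical extent of the cup mask
    divided by the vertical extent of the disc mask. Larger CDR values
    are associated with glaucomatous damage."""
    def vertical_extent(mask):
        rows = np.where(np.asarray(mask, dtype=bool).any(axis=1))[0]
        return int(rows[-1] - rows[0] + 1) if rows.size else 0
    disc_h = vertical_extent(disc_mask)
    if disc_h == 0:
        raise ValueError("disc mask is empty")
    return vertical_extent(cup_mask) / disc_h
```

Accurate boundary detection matters here because a few misplaced contour rows directly shift the ratio on which the clinical reading depends.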
Collapse
Affiliation(s)
- Muhammad Salman Haleem
- School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, M1 5GD, UK
| | - Liangxiu Han
- School of Computing, Mathematics and Digital Technology, Manchester Metropolitan University, Manchester, M1 5GD, UK
| | - Jano van Hemert
- Optos Plc, Queensferry House, Carnegie Business Campus, Enterprise Way, Dunfermline, Scotland, KY11 8GR, UK
| | - Baihua Li
- Department of Computer Science, Loughborough University, Loughborough, LE11 3TU, UK
| | - Alan Fleming
- Optos Plc, Queensferry House, Carnegie Business Campus, Enterprise Way, Dunfermline, Scotland, KY11 8GR, UK
| | - Louis R. Pasquale
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
| | - Brian J. Song
- Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
30
|
Fang Y, Zhang C, Li J, Lei J, Perreira Da Silva M, Le Callet P. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2017; 26:4684-4696. [PMID: 28678707 DOI: 10.1109/tip.2017.2721112] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
In this paper, we investigate visual attention modeling for stereoscopic video from two aspects. First, we build a large-scale eye tracking database as a benchmark for visual attention modeling of stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on our newly built large-scale eye tracking database and one other database (DML-ITRACK-3D).
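The uncertainty-weighted fusion of spatial and temporal saliency can be sketched as inverse-uncertainty weighting (a simplified illustration: the scalar uncertainties here are assumed inputs, whereas the paper derives its weights from the Gestalt grouping cues of proximity, continuity, and common fate):

```python
import numpy as np

def fuse_saliency(spatial, temporal, u_spatial, u_temporal, eps=1e-8):
    """Fuse two saliency maps: each map's weight is the inverse of its
    (assumed scalar) uncertainty, normalized so the weights sum to 1.
    A less uncertain cue dominates the fused map."""
    w_s = 1.0 / (u_spatial + eps)
    w_t = 1.0 / (u_temporal + eps)
    return (w_s * np.asarray(spatial) + w_t * np.asarray(temporal)) / (w_s + w_t)
```

With equal uncertainties this reduces to a plain average; as one cue's uncertainty grows, its contribution to the final saliency shrinks toward zero.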
Collapse
|