1. Ma L, Li G, Feng X, Fan Q, Liu L. TiCNet: Transformer in Convolutional Neural Network for Pulmonary Nodule Detection on CT Images. J Imaging Inform Med 2024; 37:196-208. [PMID: 38343213] [DOI: 10.1007/s10278-023-00904-y]
Abstract
Lung cancer is the leading cause of cancer death. Since lung cancer appears as nodules in its early stage, detecting pulmonary nodules early could enhance treatment efficiency and improve patient survival. The development of computer-aided analysis technology has made it possible to automatically detect lung nodules in computed tomography (CT) screening. In this paper, we propose a novel detection network, TiCNet, which embeds a transformer module in a 3D convolutional neural network (CNN) for pulmonary nodule detection on CT images. First, we integrate the transformer and CNN in an end-to-end structure to capture both short- and long-range dependencies, providing rich information on nodule characteristics. Second, we design an attention block and multi-scale skip pathways to improve the detection of small nodules. Last, we develop a two-head detector to guarantee high sensitivity and specificity. Experimental results on the LUNA16 and PN9 datasets showed that TiCNet achieved superior performance compared with existing lung nodule detection methods, and the effectiveness of each module was demonstrated. The proposed TiCNet model is thus an effective tool for pulmonary nodule detection, and its excellent performance suggests potential usefulness in supporting lung cancer screening.
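The core idea in this entry, pairing a convolution's short-range receptive field with attention's global receptive field, can be illustrated with a toy numpy sketch. This is not TiCNet's implementation (which uses learned 3D convolutions and multi-head attention over CT feature volumes); the functions below are simplified, parameter-free stand-ins for the two branches.

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention over a sequence of
    feature vectors: every position attends to all others, giving the
    long-range dependency that pure convolution lacks."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                 # pairwise similarity, (n, n)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def local_conv1d(x, k=3):
    """Moving-average 'convolution': each output mixes only a short-range
    neighbourhood, the complement of the attention branch."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[i:i + k].mean(axis=0) for i in range(len(x))])

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))                 # 8 positions, 16 channels
fused = local_conv1d(feats) + self_attention(feats)  # short- + long-range cues
print(fused.shape)                                   # (8, 16)
```

A real detector would stack such hybrid blocks with learned projections; here the sum merely shows that the two branches produce shape-compatible features that can be fused position-wise.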
Affiliation(s)
- Ling Ma, College of Software, Nankai University, Tianjin, China
- Gen Li, College of Software, Nankai University, Tianjin, China
- Xingyu Feng, College of Software, Nankai University, Tianjin, China
- Qiliang Fan, College of Software, Nankai University, Tianjin, China
- Lizhi Liu, Department of Radiology, Sun Yat-Sen University Cancer Center, Guangdong, China
2. Qian L, Wen C, Li Y, Hu Z, Zhou X, Xia X, Kim SH. Multi-scale context UNet-like network with redesigned skip connections for medical image segmentation. Comput Methods Programs Biomed 2024; 243:107885. [PMID: 37897988] [DOI: 10.1016/j.cmpb.2023.107885]
Abstract
BACKGROUND AND OBJECTIVE Medical image segmentation has garnered significant research attention in the neural network community as a fundamental requirement for developing intelligent medical assistant systems. A series of UNet-like networks with an encoder-decoder architecture have achieved remarkable success in medical image segmentation. Among these networks, UNet2+ (UNet++) and UNet3+ (UNet+++) redesigned the skip connections, introducing dense skip connections and full-scale skip connections, respectively, and surpassed the performance of the original UNet. However, UNet2+ lacks information captured from the full range of scales, which hampers its ability to learn organ placement and boundaries. Similarly, due to the limited number of neurons in its structure, UNet3+ fails to effectively segment small objects when trained with a small number of samples. METHOD In this study, we propose UNet_sharp (UNet#), a novel network topology named after the "#" symbol, which combines dense skip connections and full-scale skip connections. In the decoder sub-network, UNet# can effectively integrate feature maps of different scales and capture fine-grained features and coarse-grained semantics from the full range of scales. This approach enhances the understanding of organ and lesion positions and enables accurate boundary segmentation. We employ deep supervision for model pruning to accelerate testing and enable mobile device deployment. Additionally, we construct two classification-guided modules to reduce false positives and improve segmentation accuracy. RESULTS Compared to current UNet-like networks, our proposed method achieves the highest Intersection over Union (IoU) values ((92.67±0.96)%, (92.38±1.29)%, (95.36±1.22)%, (74.01±2.03)%) and F1 scores ((91.64±1.86)%, (95.70±2.16)%, (97.34±2.76)%, (84.77±2.65)%) on the semantic segmentation tasks of nuclei, brain tumors, liver, and lung nodules, respectively.
CONCLUSIONS The experimental results demonstrate that the reconstructed skip connections in UNet successfully incorporate multi-scale contextual semantic information. Compared to most state-of-the-art medical image segmentation models, our proposed method more accurately locates organs and lesions and precisely segments boundaries.
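A minimal sketch of the full-scale skip-connection idea behind UNet3+ and UNet# can be written in plain numpy: every encoder feature map is resized to the decoder resolution and concatenated along the channel axis. The `full_scale_fuse` helper and all shapes are illustrative assumptions, not the paper's architecture (which also uses dense intra-decoder connections and learned convolutions after fusion).

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def full_scale_fuse(enc_feats, dec_feat):
    """Concatenate a decoder map with every encoder map after resizing
    each encoder map to the decoder resolution: upsample small maps,
    stride-downsample large ones (a simplified full-scale skip)."""
    _, H, W = dec_feat.shape
    resized = []
    for f in enc_feats:
        while f.shape[1] < H:          # encoder map coarser than decoder
            f = upsample2x(f)
        step = f.shape[1] // H         # encoder map finer than decoder
        resized.append(f[:, ::step, ::step])
    return np.concatenate(resized + [dec_feat], axis=0)

# Hypothetical 3-level encoder pyramid and one decoder stage.
enc = [np.ones((4, 32, 32)), np.ones((8, 16, 16)), np.ones((16, 8, 8))]
dec = np.zeros((16, 16, 16))
fused = full_scale_fuse(enc, dec)
print(fused.shape)  # channels 4 + 8 + 16 + 16 = 44 at 16x16
```

The point of the sketch is the bookkeeping: each decoder stage sees fine-grained detail from shallow encoder levels and coarse semantics from deep ones simultaneously.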
Affiliation(s)
- Ledan Qian, College of Mathematics and Physics, Wenzhou University, Wenzhou, 325035, Zhejiang, China
- Caiyun Wen, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325035, Zhejiang, China
- Yi Li, College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, 325035, Zhejiang, China
- Zhongyi Hu, Key Laboratory of Intelligent Image Processing and Analysis, Wenzhou, 325035, Zhejiang, China
- Xiao Zhou, Information Technology Center, Wenzhou University, Wenzhou, 325035, Zhejiang, China
- Xiaonyu Xia, College of Mathematics and Physics, Wenzhou University, Wenzhou, 325035, Zhejiang, China
- Soo-Hyung Kim, College of AI Convergence, Chonnam National University, Gwangju, 61186, Korea
3. Huang JL, Sun Y, Wu ZH, Zhu HJ, Xia GJ, Zhu XS, Wu JH, Zhang KH. Differential diagnosis of hepatocellular carcinoma and intrahepatic cholangiocarcinoma based on spatial and channel attention mechanisms. J Cancer Res Clin Oncol 2023; 149:10161-10168. [PMID: 37268850] [DOI: 10.1007/s00432-023-04935-4]
Abstract
BACKGROUND The pre-operative, non-invasive differential diagnosis of hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC) mainly depends on imaging. However, the accuracy of conventional imaging and radiomics methods in differentiating between the two carcinomas is unsatisfactory. In this study, we aimed to establish a novel deep learning model based on computed tomography (CT) images to provide an effective, non-invasive pre-operative differential diagnosis method for HCC and ICC. MATERIALS AND METHODS We retrospectively investigated the CT images of 395 HCC patients and 99 ICC patients who were diagnosed based on pathological analysis. To differentiate between HCC and ICC, we developed a deep learning model called CSAM-Net based on channel and spatial attention mechanisms, and compared it with conventional radiomics models: logistic regression, least absolute shrinkage and selection operator regression, support vector machine, and random forest. RESULTS The CSAM-Net model showed area under the receiver operating characteristic curve (AUC) values of 0.987 (accuracy = 0.939), 0.969 (accuracy = 0.914), and 0.959 (accuracy = 0.912) for the training, validation, and test sets, respectively, significantly higher than those of the conventional radiomics models (0.736-0.913 [accuracy = 0.735-0.912], 0.602-0.828 [accuracy = 0.647-0.818], and 0.638-0.845 [accuracy = 0.618-0.849], respectively). Decision curve analysis showed a high net benefit for the CSAM-Net model, suggesting potential efficacy in differentiating between HCC and ICC. CONCLUSIONS The proposed CSAM-Net model based on channel and spatial attention mechanisms provides an effective, non-invasive tool for the differential diagnosis of HCC and ICC on CT images and has potential applications in the diagnosis of liver cancers.
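Channel and spatial attention of the kind CSAM-Net builds on can be sketched with parameter-free gates in numpy. The real model learns these gates with small trained sub-networks (and operates on CT feature maps), so the following is only an illustrative approximation of the two mechanisms.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    """Reweight each channel of a (C, H, W) feature map by a gate derived
    from its global average response (squeeze-and-excite without the MLP)."""
    pooled = x.mean(axis=(1, 2))            # (C,) global average pool
    gate = sigmoid(pooled - pooled.mean())  # channels above average get boosted
    return x * gate[:, None, None]

def spatial_attention(x):
    """Reweight each location by a gate built from the channel-wise mean
    and max maps, emphasising strongly responding regions."""
    s = (x.mean(axis=0) + x.max(axis=0)) / 2  # (H, W) spatial summary
    gate = sigmoid(s - s.mean())
    return x * gate[None, :, :]

feat = np.random.default_rng(1).standard_normal((8, 6, 6))
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 6, 6): same shape, re-weighted content
```

Stacking the two gates, channel first and spatial second, mirrors the common ordering in attention modules of this family; both leave the tensor shape unchanged, so they slot between any two convolutional layers.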
Affiliation(s)
- Ji-Lan Huang, Department of Radiology, First Affiliated Hospital of Nanchang University, Nanchang, 330006, China
- Ying Sun, Department of Gastroenterology, Fuzhou First General Hospital Affiliated With Fujian Medical University, Fuzhou, 350004, China
- Zhi-Heng Wu, School of Information Engineering, Nanchang University, No.999, Xuefu Road, Nanchang, 330031, China
- Hui-Jun Zhu, School of Information Engineering, Nanchang University, No.999, Xuefu Road, Nanchang, 330031, China
- Guo-Jin Xia, Department of Radiology, First Affiliated Hospital of Nanchang University, Nanchang, 330006, China
- Xi-Shun Zhu, School of Advanced Manufacturing, Nanchang University, Nanchang, 330031, China
- Jian-Hua Wu, School of Information Engineering, Nanchang University, No.999, Xuefu Road, Nanchang, 330031, China
- Kun-He Zhang, Department of Gastroenterology, Jiangxi Institute of Gastroenterology and Hepatology, First Affiliated Hospital of Nanchang University, No.17, Yongwai Zheng Street, Nanchang, 330006, China
4. Multi-head deep learning framework for pulmonary disease detection and severity scoring with modified progressive learning. Biomed Signal Process Control 2023; 85:104855. [PMID: 36987448] [PMCID: PMC10036214] [DOI: 10.1016/j.bspc.2023.104855]
Abstract
Chest X-rays (CXR) are the most commonly used imaging modality in radiology for diagnosing pulmonary diseases, with close to 2 billion CXRs taken every year. The recent upsurge of COVID-19 and its variants, accompanied by pneumonia and tuberculosis, can be fatal in some cases, and lives could be saved through early detection and appropriate intervention in advanced cases. Thus, CXRs can be used for automated severity grading of pulmonary diseases to aid radiologists in making better and more informed diagnoses. In this article, we propose a single framework for disease classification and severity scoring produced by segmenting the lungs into six regions. We present a modified progressive learning technique in which the amount of augmentation at each step is capped. The base network in the framework is first trained using modified progressive learning and can then be tweaked for new data sets. Furthermore, the segmentation task makes use of an attention map generated within, and by, the network itself. This attention mechanism achieves segmentation results on par with networks having an order of magnitude more parameters. We also propose severity score grading for four thoracic diseases, providing a single-digit score corresponding to the spread of opacity in different lung segments, developed with the help of radiologists. The proposed framework is evaluated on the BRAX data set for segmentation and classification into six classes, with severity grading for a subset of the classes. On the BRAX validation data set, we achieve F1 scores of 0.924 and 0.939 without and with fine-tuning, respectively. A mean matching score of 80.8% is obtained for severity score grading, while an average area under the receiver operating characteristic curve of 0.88 is achieved for classification.
5. Zhang Z, Tie Y, Zhang D, Liu F, Qi L. Quantum-Involution inspire false positive reduction in pulmonary nodule detection. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104850]
6. Shen Z, Cao P, Yang J, Zaiane OR. WS-LungNet: A two-stage weakly-supervised lung cancer detection and diagnosis network. Comput Biol Med 2023; 154:106587. [PMID: 36709519] [DOI: 10.1016/j.compbiomed.2023.106587]
Abstract
A computer-aided lung cancer diagnosis (CAD) system on computed tomography (CT) helps radiologists guide preoperative planning and prognosis assessment. However, the flexibility and scalability of deep learning methods remain limited in lung CAD. In essence, two significant challenges must be solved: (1) label scarcity, due to the cost of having CT images annotated by experienced domain experts, and (2) label inconsistency between the observed nodule malignancy and the patients' pathology evaluation. Both can be regarded as weak label problems. We address these issues by introducing a weakly-supervised lung cancer detection and diagnosis network (WS-LungNet), consisting of a semi-supervised computer-aided detection module (Semi-CADe) that segments 3D pulmonary nodules using unlabeled data through adversarial learning to reduce label scarcity, and a cross-nodule attention computer-aided diagnosis module (CNA-CADx) that evaluates malignancy at the patient level by modeling correlations between nodules via cross-attention mechanisms, thereby eliminating label inconsistency. Through extensive evaluations on the LIDC-IDRI public database, we show that the proposed method achieves an 82.99% competition performance metric (CPM) on pulmonary nodule detection and an 88.63% area under the curve (AUC) on lung cancer diagnosis. These results demonstrate the benefits and flexibility of semi-supervised segmentation with adversarial learning and of nodule instance correlation learning with the attention mechanism, and suggest that exploiting unlabeled data and modeling the relationships among nodules within a case are essential for lung cancer detection and diagnosis.
Affiliation(s)
- Zhiqiang Shen, College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Peng Cao, College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Jinzhu Yang, College of Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Osmar R Zaiane, Alberta Machine Intelligence Institute, University of Alberta, Canada
7. Hao K, Cai A, Feng X, Ma L, Zhu J, Wang M, Zhang Y, Fei B. Lung nodule false positive reduction using a central attention convolutional neural network on imbalanced data. Proc SPIE Int Soc Opt Eng 2023; 12466:124661X. [PMID: 38487347] [PMCID: PMC10940051] [DOI: 10.1117/12.2654216]
Abstract
Computer-aided detection systems for lung nodules play an important role in early diagnosis and treatment, and false positive reduction is a significant component of pulmonary nodule detection. To address the visual similarities between nodules and false positives in CT images and the problem of two-class imbalanced learning, we propose a central attention convolutional neural network on imbalanced data (CACNNID) to distinguish nodules from a large number of false positive candidates. To solve the imbalanced data problem, we combine density-distribution analysis, data augmentation, noise reduction, and balanced sampling so that the network can learn effectively. During training, the model is designed to pay high attention to central information and to minimize the influence of irrelevant edge information when extracting discriminative features. The proposed model has been evaluated on the public LUNA16 dataset and achieved a mean sensitivity of 92.64%, specificity of 98.71%, accuracy of 98.69%, and AUC of 95.67%. The experimental results indicate that our model achieves satisfactory performance in false positive reduction.
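One ingredient listed above, balanced sampling for the heavily skewed nodule/false-positive candidate pool, can be sketched in a few lines of Python. The 1:1 ratio, the batch size, and the `balanced_batches` helper are illustrative assumptions, not details taken from the paper (which also combines augmentation and noise reduction).

```python
import random

def balanced_batches(positives, negatives, batch_size=8, n_batches=3, seed=0):
    """Draw batches with a fixed 1:1 nodule/false-positive ratio by
    oversampling the scarce positive class (choices = with replacement)
    and subsampling the abundant negative class (sample = without)."""
    rng = random.Random(seed)
    half = batch_size // 2
    for _ in range(n_batches):
        batch = rng.choices(positives, k=half) + rng.sample(negatives, k=half)
        rng.shuffle(batch)
        yield batch

pos = [("nodule", i) for i in range(5)]       # scarce true nodules
neg = [("candidate", i) for i in range(500)]  # abundant false positives
for b in balanced_batches(pos, neg):
    labels = [kind for kind, _ in b]
    print(labels.count("nodule"), labels.count("candidate"))  # prints "4 4"
```

Without such sampling a 5:500 class ratio would let a classifier reach 99% accuracy by always predicting "candidate", which is exactly the failure mode false-positive-reduction stages must avoid.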
Affiliation(s)
- Kexin Hao, College of Software, Nankai University
- Annan Cai, College of Software, Nankai University
- Ling Ma, College of Software, Nankai University
- Yun Zhang, Department of Radiology, Sun Yat-sen University Cancer Center
- Baowei Fei, Department of Bioengineering, The University of Texas at Dallas
8. Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: A systematic review. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104104]
9. Gu Z, Li Y, Luo H, Zhang C, Du H. Cross attention guided multi-scale feature fusion for false-positive reduction in pulmonary nodule detection. Comput Biol Med 2022; 151:106302. [PMID: 36401972] [DOI: 10.1016/j.compbiomed.2022.106302]
Abstract
False-positive reduction is a crucial step of a computer-aided diagnosis (CAD) system for pulmonary nodule detection, and it plays an important role in lung cancer diagnosis. In this paper, we propose a novel cross attention guided multi-scale feature fusion method for false-positive reduction in pulmonary nodule detection. Specifically, a 3D SENet50 fed with a candidate nodule cube is applied as the backbone to acquire multi-scale coarse features. Then, the coarse features are refined and fused by the multi-scale fusion part to achieve a better feature extraction result. Finally, a 3D spatial pyramid pooling module is used to enlarge the receptive field, and a distributed aligned linear classifier is applied to obtain the confidence score. In addition, each of five nodule cubes of different sizes centered on every test nodule position is fed into the proposed framework to obtain a confidence score separately, and a weighted fusion method is used to improve the generalization performance of the model. Extensive experiments are conducted to demonstrate the effectiveness of the classification performance of the proposed model. The data used in our work are from the LUNA16 pulmonary nodule detection challenge, comprising 1,557 true-positive pulmonary nodules and 753,418 false-positive candidates. Evaluated on LUNA16, the new method achieves a competition performance metric (CPM) score of 84.8%.
Affiliation(s)
- Zhongxuan Gu, Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Yueyang Li, Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Haichi Luo, College of Internet of Things Engineering, Jiangnan University, 1800 Lihu Avenue, Wuxi, 214122, Jiangsu, China
- Caidi Zhang, Department of Respiration, The Affiliated Hospital of Jiangnan University, 1000 Hefeng Road, Wuxi, 214122, Jiangsu, China
- Hongqun Du, Department of Respiration, The Affiliated Hospital of Jiangnan University, 1000 Hefeng Road, Wuxi, 214122, Jiangsu, China
10. Research on lung nodule recognition algorithm based on deep feature fusion and MKL-SVM-IPSO. Sci Rep 2022; 12:17403. [PMID: 36257988] [PMCID: PMC9579155] [DOI: 10.1038/s41598-022-22442-3]
Abstract
A lung CAD system can provide auxiliary third-party opinions for doctors and improve the accuracy of lung nodule recognition. The selection and fusion of nodule features and the advancement of recognition algorithms are crucial for improving lung CAD systems. Based on the HDL model, this paper focuses on the three key algorithms of a lung CAD system: feature extraction, feature fusion, and nodule recognition. First, CBAM is embedded into VGG16 and VGG19 to construct the feature extraction models AE-VGG16 and AE-VGG19, so that the network pays more attention to the key feature information in nodule description. Then, feature dimensionality reduction based on PCA and feature fusion based on CCA are sequentially performed on the extracted deep features to obtain low-dimensional fused features. Finally, the fused features are input into the proposed MKL-SVM-IPSO model, based on an improved particle swarm optimization algorithm, to speed up training and obtain the globally optimal parameter set. The public LUNA16 dataset was selected for the experiments. The results show that the accuracy of lung nodule recognition of the proposed lung CAD system reaches 99.56%, with sensitivity and F1-score of 99.3% and 0.9965, respectively, reducing the possibility of false detection and missed detection of nodules.
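The PCA dimensionality-reduction step of this pipeline can be sketched in plain numpy via SVD. For brevity the sketch replaces the paper's CCA-based fusion with simple concatenation of the two reduced feature sets, and the random matrices stand in for the AE-VGG16/AE-VGG19 deep features, so treat it as an illustrative assumption rather than the published method.

```python
import numpy as np

def pca(x, k):
    """Project (n, d) features onto the top-k principal directions:
    centre the data, take the SVD, and keep the leading right-singular
    vectors (the dimensionality-reduction step before fusion)."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:k].T

rng = np.random.default_rng(2)
f16 = rng.standard_normal((100, 64))  # stand-in for AE-VGG16 deep features
f19 = rng.standard_normal((100, 64))  # stand-in for AE-VGG19 deep features

# Reduce each stream to 8 dimensions, then fuse by concatenation
# (the paper uses CCA here to align the two streams before fusion).
fused = np.concatenate([pca(f16, 8), pca(f19, 8)], axis=1)
print(fused.shape)  # (100, 16): low-dimensional fused features
```

The resulting low-dimensional vectors would then feed the downstream kernel classifier; reducing 128 raw dimensions to 16 is what keeps an MKL-SVM tractable on modest training sets.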
11. Artificial Intelligence Algorithm-Based Feature Extraction of Computed Tomography Images and Analysis of Benign and Malignant Pulmonary Nodules. Comput Intell Neurosci 2022; 2022:5762623. [PMID: 36156972] [PMCID: PMC9492375] [DOI: 10.1155/2022/5762623]
Abstract
This study aimed to explore the effect of artificial intelligence-based feature extraction from CT images of pulmonary nodules and the imaging characteristics of benign and malignant nodules. CT images of pulmonary nodules were collected as the research object, and a lung nodule feature extraction model based on expectation maximization (EM) was used to extract the image features. The Dice similarity coefficient, accuracy, nodule edges, internal signs, and adjacent structures of benign and malignant nodules were compared and analyzed to evaluate the extraction performance of the model and the imaging characteristics of benign and malignant pulmonary nodules. The results showed that the detection sensitivity of the model for pulmonary nodules was 0.955, and pulmonary nodules and blood vessels were well preserved in the images. The detection probability of the burr sign was 73.09% in the malignant group versus 8.41% in the benign group; the difference was statistically significant (P < 0.05). The probability of the lobulation sign was higher in the malignant group (69.96%) than in the benign group (0%), and the difference was statistically significant (P < 0.05). The probability of the vacuole (cavitation) sign was higher in the malignant group (59.19%) than in the benign group (3.74%), as was that of the vascular bundle sign (74.89% vs. 11.21%), with statistical significance (P < 0.05). The probability of the pleural traction sign in the malignant group was 17.49%, higher than that in the benign group (4.67%); the difference was statistically significant (P < 0.05). In summary, the feature extraction effect of CT images based on the EM algorithm was ideal. Imaging findings such as the burr sign, lobulation sign, vacuole sign, vascular bundle sign, and pleural traction sign can be used as indicators to distinguish benign from malignant nodules.
12. Local Structure Awareness-Based Retinal Microaneurysm Detection with Multi-Feature Combination. Biomedicines 2022; 10:124. [PMID: 35052803] [PMCID: PMC8773350] [DOI: 10.3390/biomedicines10010124]
Abstract
Retinal microaneurysm (MA) is the initial symptom of diabetic retinopathy (DR). The automatic detection of MA is helpful to assist doctors in diagnosis and treatment. Previous algorithms focused on the features of the target itself; however, the local structural features of the target and background are also worth exploring. To achieve MA detection, an efficient local structure awareness-based retinal MA detection with the multi-feature combination (LSAMFC) is proposed in this paper. We propose a novel local structure feature called a ring gradient descriptor (RGD) to describe the structural differences between an object and its surrounding area. Then, a combination of RGD with the salience and texture features is used by a Gradient Boosting Decision Tree (GBDT) for candidate classification. We evaluate our algorithm on two public datasets, i.e., the e-ophtha MA dataset and retinopathy online challenge (ROC) dataset. The experimental results show that the performance of the trained model significantly improved after combining traditional features with RGD, and the area under the receiver operating characteristic curve (AUC) values in the test results of the datasets e-ophtha MA and ROC increased from 0.9615 to 0.9751 and from 0.9066 to 0.9409, respectively.
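The intuition behind the ring gradient descriptor, contrasting a candidate's core with its surrounding annulus, can be sketched in numpy. The paper's RGD is built from gradient directions around the object; the intensity-based `ring_descriptor` below is a simplified, hypothetical stand-in that only captures the core-versus-surround contrast.

```python
import numpy as np

def ring_descriptor(patch, r_in=2, r_out=4):
    """Mean intensity of a candidate's core minus that of the ring around
    it: dark, roughly round microaneurysms on a brighter retinal background
    yield a negative value, while flat background yields ~0."""
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    d = np.hypot(yy - h // 2, xx - w // 2)   # distance from patch centre
    core = patch[d <= r_in].mean()
    ring = patch[(d > r_in) & (d <= r_out)].mean()
    return core - ring

patch = np.full((9, 9), 0.8)   # bright background
patch[3:6, 3:6] = 0.2          # dark central blob, MA-like
print(ring_descriptor(patch) < 0)  # True: core darker than surround
```

Such a structural score is then combined with salience and texture features and handed to a GBDT classifier, which is what separates true microaneurysms from vessel crossings and noise with similar local appearance.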