1
Li J, Zhao J, Wang Y, Zhu J, Wei Y, Zhu J, Li X, Yan S, Zhang Q. A colonic polyps detection algorithm based on an improved YOLOv5s. Sci Rep 2025;15:6852. [PMID: 40011590] [DOI: 10.1038/s41598-025-91467-1]
Abstract
Colon cancer is a prevalent malignancy, and early endoscopic detection is the most effective way to prevent it from killing patients. With the rapid development of artificial intelligence technology, applying target detection algorithms to colonoscopy images can raise the early diagnosis rate of colonic polyps and thereby improve clinical outcomes for colon cancer. This paper presents two improvements to the YOLOv5s algorithm, developed on annotated colonoscopy images of clinical cases together with publicly available polyp image data: (1) enhancement of the multi-layer C3 (Cross Stage Partial Networks) module to C3SE (Cross Stage Partial Networks with Squeeze-and-Excitation) via the SE (squeeze-and-excitation) attention mechanism, and (2) fusion of higher-level features using BiFPN (the weighted bi-directional feature pyramid network). Experimental comparisons among more than six target detection algorithms are performed on a new colonic polyp image dataset to validate the improved detection capability. The tests indicate that the YOLOv5s + BiFPN and YOLOv5s-1st-2nd-C3SE models improve detection over the YOLOv5s algorithm on the main indicators of mAP, accuracy, and recall. The YOLOv5s + SEBiFPN model demonstrates a substantial improvement over the YOLOv5s algorithm, making it a feasible benchmark technology for advancing computer-assisted diagnostic systems.
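As a minimal illustration of the SE attention mechanism that the C3SE module builds on (a generic NumPy sketch of squeeze-excitation-scale, not the paper's implementation; the weight matrices here are random stand-ins for learned parameters):

```python
import numpy as np

def squeeze_excitation(feature_map, w1, w2):
    """SE channel attention over a (C, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excitation: two FC layers (reduce, then expand) with ReLU and sigmoid.
    Scale: reweight each input channel by its learned importance in (0, 1).
    """
    squeezed = feature_map.mean(axis=(1, 2))            # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)             # ReLU, (C // r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid, (C,)
    return feature_map * weights[:, None, None]

# Toy example: 4 channels, reduction ratio r = 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4)) * 0.1   # reduce C -> C/r
w2 = rng.standard_normal((4, 2)) * 0.1   # expand C/r -> C
y = squeeze_excitation(x, w1, w2)
print(y.shape)  # (4, 8, 8)
```

Each output channel is the corresponding input channel scaled by a single attention weight, which is what lets the network emphasize polyp-relevant channels.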
Affiliation(s)
- Jianjun Li: College of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou, 310018, China; Zhejiang-Belarus Joint Laboratory of Intelligent Equipment and System for Water Conservancy and Hydropower Safety Monitoring, College of Electrical Engineering, Zhejiang University of Water Resources and Electric Power, Hangzhou, 310018, China
- Jinhui Zhao: Zhejiang-Belarus Joint Laboratory of Intelligent Equipment and System for Water Conservancy and Hydropower Safety Monitoring, College of Electrical Engineering, Zhejiang University of Water Resources and Electric Power, Hangzhou, 310018, China
- Yifan Wang: College of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou, 310018, China
- Jinhui Zhu: Department of Hepato-Pancreato-Biliary (HPB) Surgery, Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310009, China
- Yanhong Wei: College of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou, 310018, China
- Junjiang Zhu: College of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou, 310018, China
- Xiaolu Li: College of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou, 310018, China
- Shubin Yan: Zhejiang-Belarus Joint Laboratory of Intelligent Equipment and System for Water Conservancy and Hydropower Safety Monitoring, College of Electrical Engineering, Zhejiang University of Water Resources and Electric Power, Hangzhou, 310018, China
- Qichun Zhang: School of Creative and Digital Industries, Buckinghamshire New University, High Wycombe, HP11 2JZ, UK
2
Wang YP, Jheng YC, Hou MC, Lu CL. The optimal labelling method for artificial intelligence-assisted polyp detection in colonoscopy. J Formos Med Assoc 2024:S0929-6646(24)00582-5. [PMID: 39730273] [DOI: 10.1016/j.jfma.2024.12.022]
Abstract
BACKGROUND The methodology for colon polyp labeling when establishing databases for machine learning is not well described or standardized. We aimed to find the annotation method that generates the most accurate polyp-detection model. METHODS 3542 colonoscopy polyp images were obtained from the endoscopy database of a tertiary medical center. Two experienced endoscopists manually annotated each polyp with (1) an exact outline segmentation and (2) a standard rectangular box close to the polyp margin, extended by 10%, 20%, 30%, 40% and 50% in both width and length of the standard rectangle, for AI model setup. The images were randomly divided into training and validation sets in a 4:1 ratio. The U-Net convolutional network architecture was used to develop the automatic segmentation machine learning model. A separate verification set was established to evaluate polyp-detection performance across the different segmentation methods. RESULTS Extending the bounding box to 20% beyond the polyp margin yielded the best accuracy (95.42%), sensitivity (94.84%) and F1-score (95.41%). The exact outline segmentation model showed excellent sensitivity (99.6%) but the worst precision (77.47%). The 20% model was the best among the six models (AUC = 0.971; confidence interval 0.957-0.985). CONCLUSIONS Labelling methodology affects the predictability of AI models in polyp detection. Extending the bounding box to 20% beyond the polyp margin produced the best polyp-detection model based on AUC. A standardized approach to colon polyp labeling is needed so that the precision of different AI models can be compared.
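The rectangle-extension labelling scheme can be sketched as a small helper (an illustrative reconstruction; the function name and the clip-to-image behaviour are assumptions, not the authors' code):

```python
def extend_bbox(x, y, w, h, pct, img_w, img_h):
    """Extend a tight (x, y, w, h) polyp bounding box by `pct` percent
    in both width and height, keeping it centred on the original box
    and clipping it to the image boundary."""
    new_w = w * (1 + pct / 100.0)
    new_h = h * (1 + pct / 100.0)
    new_x = x - (new_w - w) / 2.0
    new_y = y - (new_h - h) / 2.0
    # Clip the extended box to the image.
    x0 = max(0.0, new_x)
    y0 = max(0.0, new_y)
    x1 = min(float(img_w), new_x + new_w)
    y1 = min(float(img_h), new_y + new_h)
    return x0, y0, x1 - x0, y1 - y0

# The 20% variant the study found optimal:
print(extend_bbox(100, 100, 50, 40, 20, 640, 480))  # (95.0, 96.0, 60.0, 48.0)
```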
Affiliation(s)
- Yen-Po Wang: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Institute of Brain Science, National Yang Ming Chiao Tung University School of Medicine, Taiwan; Faculty of Medicine, National Yang Ming Chiao Tung University School of Medicine, Taiwan
- Ying-Chun Jheng: Department of Medical Research, Taipei Veterans General Hospital, Taiwan; Big Data Center, Taipei Veterans General Hospital, Taiwan
- Ming-Chih Hou: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Faculty of Medicine, National Yang Ming Chiao Tung University School of Medicine, Taiwan
- Ching-Liang Lu: Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Institute of Brain Science, National Yang Ming Chiao Tung University School of Medicine, Taiwan
3
Nie MY, An XW, Xing YC, Wang Z, Wang YQ, Lü JQ. Artificial intelligence algorithms for real-time detection of colorectal polyps during colonoscopy: a review. Am J Cancer Res 2024;14:5456-5470. [PMID: 39659923] [PMCID: PMC11626263] [DOI: 10.62347/bziz6358]
Abstract
Colorectal cancer (CRC) is one of the most common cancers worldwide. Early detection and removal of colorectal polyps during colonoscopy are crucial for preventing such cancers. With the development of artificial intelligence (AI) technology, it has become possible to detect and localize colorectal polyps in real time during colonoscopy using computer-aided diagnosis (CAD). This provides endoscopists with a reliable reference and leads to more accurate diagnosis and treatment. This paper reviews AI-based algorithms for real-time detection of colorectal polyps, with a particular focus on deep learning algorithms designed to optimize both efficiency and accuracy. Furthermore, the challenges and prospects of AI-based colorectal polyp detection are discussed.
Affiliation(s)
- Meng-Yuan Nie: Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China; Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
- Xin-Wei An: Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China; Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
- Yun-Can Xing: Department of Colorectal Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zheng Wang: Department of Colorectal Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yan-Qiu Wang: Langfang Traditional Chinese Medicine Hospital, Langfang, Hebei, China
- Jia-Qi Lü: Center for Advanced Laser Technology, Hebei University of Technology, Tianjin, China; Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin, China
4
Li Y, Zhao J, Yu R, Liu H, Liang S, Gu Y. [Colon polyp detection based on multi-scale and multi-level feature fusion and lightweight convolutional neural network]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2024;41:911-918. [PMID: 39462658] [PMCID: PMC11527748] [DOI: 10.7507/1001-5515.202312014]
Abstract
Early diagnosis and treatment of colorectal polyps are crucial for preventing colorectal cancer. This paper proposes a lightweight convolutional neural network for the automatic detection and auxiliary diagnosis of colorectal polyps. Initially, a 53-layer convolutional backbone network is used, incorporating a spatial pyramid pooling module to achieve feature extraction with different receptive field sizes. Subsequently, a feature pyramid network is employed to perform cross-scale fusion of feature maps from the backbone network. A spatial attention module is utilized to enhance the perception of polyp image boundaries and details. Further, a positional pattern attention module is used to automatically mine and integrate key features across different levels of feature maps, achieving rapid, efficient, and accurate automatic detection of colorectal polyps. The proposed model is evaluated on a clinical dataset, achieving an accuracy of 0.9982, recall of 0.9988, F1 score of 0.9984, and mean average precision (mAP) of 0.9953 at an intersection over union (IOU) threshold of 0.5, with a frame rate of 74 frames per second and a parameter count of 9.08 M. Compared to existing mainstream methods, the proposed method is lightweight, has low operating configuration requirements, high detection speed, and high accuracy, making it a feasible technical method and important tool for the early detection and diagnosis of colorectal cancer.
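The IoU threshold behind the reported mAP@0.5 can be illustrated with the standard box-overlap computation (a generic sketch, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x0, y0, x1, y1) boxes. Against a
    0.5 threshold, IoU decides whether a predicted polyp box counts as a
    true positive when computing mAP."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two boxes sharing half their area each overlap with IoU = 1/3.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333
```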
Affiliation(s)
- Yiyang Li: School of Biomedical Engineering, Capital Medical University, Beijing 100069, P. R. China
- Jiayi Zhao: School of Biomedical Engineering, Capital Medical University, Beijing 100069, P. R. China
- Ruoyi Yu: School of Biomedical Engineering, Capital Medical University, Beijing 100069, P. R. China
- Huixiang Liu: School of Biomedical Engineering, Capital Medical University, Beijing 100069, P. R. China; School of Basic Medical Sciences, Capital Medical University, Beijing 100069, P. R. China
- Shuang Liang: School of Biomedical Engineering, Capital Medical University, Beijing 100069, P. R. China; School of Basic Medical Sciences, Capital Medical University, Beijing 100069, P. R. China; School of Automation, Beijing Information Science and Technology University, Beijing 100192, P. R. China
- Yu Gu: School of Biomedical Engineering, Capital Medical University, Beijing 100069, P. R. China; School of Basic Medical Sciences, Capital Medical University, Beijing 100069, P. R. China; School of Automation, Beijing Information Science and Technology University, Beijing 100192, P. R. China
5
Wang L, Wan J, Meng X, Chen B, Shao W. MCH-PAN: gastrointestinal polyp detection model integrating multi-scale feature information. Sci Rep 2024;14:23382. [PMID: 39379452] [PMCID: PMC11461898] [DOI: 10.1038/s41598-024-74609-9]
Abstract
The rise of object detection models has brought new breakthroughs to the development of clinical decision support systems. In the field of gastrointestinal polyp detection, however, challenges remain, such as uncertainty in polyp identification and inadequate handling of polyp scale variations. To address these challenges, this paper proposes a novel gastrointestinal polyp object detection model that automatically identifies and accurately labels polyp regions in gastrointestinal images. The model integrates multi-channel information to strengthen the expressiveness and robustness of channel features, better coping with the complexity of polyp structures. A hierarchical structure improves the model's adaptability to multi-scale targets, effectively addressing the large scale variations of polyps, and a channel attention mechanism improves target-positioning accuracy and reduces diagnostic uncertainty. By integrating these strategies, the proposed model achieves accurate polyp detection, providing clinicians with a reliable and valuable reference. Experimental results show that the model exhibits superior performance in gastrointestinal polyp detection, which helps improve the diagnostic level of digestive system diseases and provides a useful reference for related research fields.
Affiliation(s)
- Ling Wang: Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Jingjing Wan: Department of Gastroenterology, The Second People's Hospital of Huai'an, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huaian, 223002, China
- Xianchun Meng: Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Bolun Chen: Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Wei Shao: Nanjing University of Aeronautics and Astronautics Shenzhen Research Institute, Shenzhen, 518038, China
6
Huang J, Zeng J, Peng J, Lyu M, Liu S. Endoscopic colorectal polyp detection based on improved YOLOv8. Annu Int Conf IEEE Eng Med Biol Soc 2024;2024:1-4. [PMID: 40039266] [DOI: 10.1109/embc53108.2024.10781871]
Abstract
Colorectal tumors typically arise from colorectal polyps and are commonly identified through colonoscopy during clinical examinations. Manual detection of polyps, crucial for distinguishing between benign and malignant cases, suffers from a limited detection rate of approximately 25%, influenced by subjective factors and other variables. This study leverages advancements in deep learning and computer detection technology to explore the application of artificial intelligence in polyp detection. Specifically, we employ an improved YOLOv8 model for polyp detection to boost the detection capabilities. The proposed method demonstrates superior accuracy and efficiency compared to existing artificial intelligence approaches.
7
Development and deployment of computer-aided real-time feedback for improving quality of colonoscopy in a multi-center clinical trial. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104609]
8
Souaidi M, Lafraxo S, Kerkaou Z, El Ansari M, Koutti L. A Multiscale Polyp Detection Approach for GI Tract Images Based on Improved DenseNet and Single-Shot Multibox Detector. Diagnostics (Basel) 2023;13:733. [PMID: 36832221] [PMCID: PMC9955440] [DOI: 10.3390/diagnostics13040733]
Abstract
Small bowel polyps vary in color, shape, morphology, texture, and size, and detection is further complicated by artifacts, irregular polyp borders, and the low illumination inside the gastrointestinal (GI) tract. Recently, researchers have developed many highly accurate polyp detection models based on one-stage or two-stage object detectors for wireless capsule endoscopy (WCE) and colonoscopy images. However, their implementation requires high computational power and memory resources, sacrificing speed for precision. Although the single-shot multibox detector (SSD) has proven effective in many medical imaging applications, its detection of small polyp regions remains weak because complementary information between low- and high-level feature layers is lacking; the aim here is therefore to consecutively reuse feature maps between layers of the original SSD network. In this paper, we propose an innovative SSD model based on a redesigned dense convolutional network (DenseNet) that emphasizes the interdependence of multiscale pyramidal feature maps, called DC-SSDNet (densely connected single-shot multibox detector). The original VGG-16 backbone of the SSD is replaced with a modified version of DenseNet. The DenseNet-46 front stem is improved to extract highly representative characteristics and contextual information, which improves the model's feature extraction ability. The DC-SSDNet architecture compresses unnecessary convolution layers in each dense block to reduce model complexity. Experimental results showed a remarkable improvement of the proposed DC-SSDNet in detecting small polyp regions, achieving an mAP of 93.96% and an F1-score of 90.7% while requiring less computational time.
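The dense connectivity that DC-SSDNet inherits from DenseNet (each layer consuming the channel-wise concatenation of all earlier feature maps) can be sketched generically in NumPy; the toy layer function below is purely illustrative:

```python
import numpy as np

def dense_block(x, layer_fns):
    """DenseNet-style connectivity: each layer receives the channel-wise
    concatenation of the block input and all preceding layers' outputs,
    so low- and high-level features are reused rather than recomputed."""
    features = [x]
    for fn in layer_fns:
        out = fn(np.concatenate(features, axis=0))  # concat on channel axis
        features.append(out)
    return np.concatenate(features, axis=0)

# Toy layer: emits a "growth" of 2 channels from its concatenated input.
growth = 2
layer = lambda inp: np.maximum(0.0, inp[:growth] - inp[:growth].mean())
x = np.ones((4, 8, 8))
y = dense_block(x, [layer, layer, layer])
print(y.shape)  # (10, 8, 8): 4 input channels + 3 layers * growth 2
```

Successive layers see ever-wider inputs (4, 6, then 8 channels here), which is the feature-reuse property the abstract appeals to.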
Affiliation(s)
- Meryem Souaidi: LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Samira Lafraxo: LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Zakaria Kerkaou: LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Mohamed El Ansari: LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco; Informatics and Applications Laboratory, Computer Science Department, Faculty of Sciences, University of Moulay Ismail, Meknès 50070, Morocco
- Lahcen Koutti: LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
9
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023;97:184-199.e16. [PMID: 36084720] [DOI: 10.1016/j.gie.2022.08.043]
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess accessibility, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 with open access with barriers, and 15 with regulated access. The 22 open-access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were generally well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases.
CONCLUSIONS This review provides greater insight on public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel: Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
10
Krenzer A, Banck M, Makowski K, Hekalo A, Fitting D, Troya J, Sudarevic B, Zoller WG, Hann A, Puppe F. A Real-Time Polyp-Detection System with Clinical Application in Colonoscopy Using Deep Convolutional Neural Networks. J Imaging 2023;9:26. [PMID: 36826945] [PMCID: PMC9967208] [DOI: 10.3390/jimaging9020026]
Abstract
Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy, during which the gastroenterologist searches for polyps; however, polyps can be missed. Automated detection of polyps assists the gastroenterologist during a colonoscopy. Publications examining the problem of polyp detection already exist in the literature; nevertheless, most of these systems are used only in a research context and are not implemented for clinical application. Therefore, we introduce the first fully open-source automated polyp-detection system that scores best on current benchmark data and is implemented ready for clinical application. To create the polyp-detection system (ENDOMIND-Advanced), we combined our own data collected from different hospitals and practices in Germany with open-source datasets to create a dataset of over 500,000 annotated images. ENDOMIND-Advanced leverages a post-processing technique based on video detection to work in real time with a stream of images. It is integrated into a prototype ready for application in clinical interventions. We achieve better performance than the best system in the literature and score an F1-score of 90.24% on the open-source CVC-VideoClinicDB benchmark.
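The abstract does not specify the video-based post-processing; as a purely hypothetical simplification, temporal smoothing of per-frame detections might look like a sliding-window majority vote (the function name, window size, and vote threshold are all assumptions, not the ENDOMIND-Advanced method):

```python
from collections import deque

def smooth_detections(frame_flags, window=5, min_votes=3):
    """Hypothetical temporal smoothing: report a polyp for a frame only
    if at least `min_votes` of the last `window` per-frame detections
    agree, suppressing one-frame false positives in the video stream."""
    recent = deque(maxlen=window)
    smoothed = []
    for flag in frame_flags:
        recent.append(flag)
        smoothed.append(sum(recent) >= min_votes)
    return smoothed

# A single spurious detection is suppressed; a sustained one survives.
print(smooth_detections([0, 1, 0, 0, 0, 1, 1, 1, 1]))
```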
Affiliation(s)
- Adrian Krenzer: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany; Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Michael Banck: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany; Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Kevin Makowski: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany
- Amar Hekalo: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany
- Daniel Fitting: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Joel Troya: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Boban Sudarevic: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany; Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174 Stuttgart, Germany
- Wolfgang G Zoller: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany; Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174 Stuttgart, Germany
- Alexander Hann: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Frank Puppe: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany
11
ELKarazle K, Raman V, Then P, Chua C. Detection of Colorectal Polyps from Colonoscopy Using Machine Learning: A Survey on Modern Techniques. Sensors (Basel) 2023;23:1225. [PMID: 36772263] [PMCID: PMC9953705] [DOI: 10.3390/s23031225]
Abstract
Given the increased interest in utilizing artificial intelligence as an assistive tool in the medical sector, colorectal polyp detection and classification using deep learning techniques have been an active area of research in recent years. The motivation for researching this topic is that physicians miss polyps from time to time due to fatigue and lack of experience in carrying out the procedure. Unidentified polyps can cause further complications and ultimately lead to colorectal cancer (CRC), one of the leading causes of cancer mortality. Although various techniques have been presented recently, several key issues, such as the lack of sufficient training data, white-light reflection, and blur, affect the performance of such methods. This paper presents a survey of recently proposed methods for detecting polyps from colonoscopy. The survey covers benchmark dataset analysis, evaluation metrics, common challenges, standard methods of building polyp detectors, and a review of the latest work in the literature. We conclude by providing a precise analysis of the gaps and trends discovered in the reviewed literature to guide future work.
Affiliation(s)
- Khaled ELKarazle: School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Valliappan Raman: Department of Artificial Intelligence and Data Science, Coimbatore Institute of Technology, Coimbatore 641014, India
- Patrick Then: School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Caslon Chua: Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne 3122, Australia
12
Nisha JS, Gopi VP, Palanisamy P. Colorectal polyp detection using image enhancement and Scaled YOLOv4 algorithm. Biomed Eng Appl Basis Commun 2022;34. [DOI: 10.4015/s1016237222500260]
Abstract
Colorectal cancer (CRC) is a common cause of cancer-related death globally and is now the third leading cause of cancer-related mortality worldwide. As the number of colorectal polyp cases rises, it is more important than ever to identify and diagnose them early. Object detection models have recently become popular for extracting highly representative features, and colonoscopy has been shown to be a useful diagnostic procedure for examining anomalies in the lower digestive tract. This research presents a novel image-enhancement approach followed by a Scaled YOLOv4 network for the early diagnosis of polyps, reducing the high risk of CRC. The proposed network is trained using CVC-ClinicDB, and CVC-ColonDB and ETIS-Larib are used for testing. On CVC-ColonDB, the performance metrics are precision (95.13%), recall (74.92%), F1-score (83.19%), and F2-score (89.89%). On ETIS-Larib, the performance metrics are precision (94.30%), recall (77.30%), F1-score (84.90%), and F2-score (80.20%). On both databases, the proposed methodology outperforms existing methods in terms of F1-score, F2-score, and precision. The proposed YOLO object detection model provides an accurate polyp detection strategy for real-time applications.
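The reported F1- and F2-scores are instances of the F-beta score, which weights recall beta times as heavily as precision; a minimal sketch:

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score of precision and recall: beta = 1 gives the F1-score,
    beta = 2 the recall-weighted F2-score reported above."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With equal precision and recall, every F-beta equals that common value.
print(round(f_beta(0.8, 0.8, beta=1), 6))  # 0.8
print(round(f_beta(0.8, 0.8, beta=2), 6))  # 0.8
```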
Affiliation(s)
- J. S. Nisha
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
- Varun P. Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
- P. Palanisamy
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
13
Yin TK, Huang KL, Chiu SR, Yang YQ, Chang BR. Endoscopy Artefact Detection by Deep Transfer Learning of Baseline Models. J Digit Imaging 2022; 35:1101-1110. [PMID: 35478060 PMCID: PMC9582060 DOI: 10.1007/s10278-022-00627-6]
Abstract
In endoscopy, a long, thin tube with a light source and a camera at its tip is inserted into the body to obtain video frames from inside organs and visualise tumours on a screen. However, multiple artefacts exist in these video frames that cause difficulty during the diagnosis of cancers. In this research, deep learning was applied to detect eight kinds of artefacts: specularity, bubbles, saturation, contrast, blood, instrument, blur, and imaging artefacts. Based on transfer learning with pre-trained parameters and fine-tuning, two state-of-the-art methods were applied for detection: faster region-based convolutional neural networks (Faster R-CNN) and EfficientDet. Experiments were implemented on the grand challenge dataset, Endoscopy Artefact Detection and Segmentation (EAD2020). To validate our approach in this study, we used phase I (2,200 frames) and phase II (331 frames) of the original training dataset with ground-truth annotations as the training and testing datasets, respectively. Among the tested methods, EfficientDet-D2 achieves a score of 0.2008 (mAPd × 0.6 + mIoUd × 0.4) on the dataset, better than three other baselines (Faster R-CNN, YOLOv3, and RetinaNet) and competitive with the best non-baseline result of 0.25123 on the leaderboard, although our testing was on the 331 frames of phase II instead of the original 200 testing frames. Without extra improvement techniques beyond basic neural networks, such as test-time augmentation, we showed that a simple baseline could achieve state-of-the-art performance in detecting artefacts in endoscopy. In conclusion, we propose the combination of EfficientDet-D2 with suitable data augmentation and pre-trained parameters during fine-tuning training to detect artefacts in endoscopy.
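For readers reimplementing the challenge metric, the leaderboard score quoted above is a fixed weighted sum of detection mAP and segmentation mIoU; a minimal sketch (the component values below are hypothetical, not the paper's):

```python
def ead_score(map_d: float, miou_d: float) -> float:
    """EAD2020-style combined score: 0.6 * detection mAP + 0.4 * segmentation mIoU."""
    return 0.6 * map_d + 0.4 * miou_d

# Hypothetical component values for illustration only.
print(round(ead_score(0.25, 0.12), 4))  # → 0.198
```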
Affiliation(s)
- Tang-Kai Yin
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Kai-Lun Huang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Si-Rong Chiu
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Yu-Qi Yang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
- Bao-Rong Chang
- Department of Computer Science and Information Engineering, National University of Kaohsiung, No. 700, Kaohsiung University Rd., Nan-Tzu Dist., 811, Kaohsiung, Taiwan
14
Souaidi M, El Ansari M. Multi-Scale Hybrid Network for Polyp Detection in Wireless Capsule Endoscopy and Colonoscopy Images. Diagnostics (Basel) 2022; 12:2030. [PMID: 36010380 PMCID: PMC9407378 DOI: 10.3390/diagnostics12082030]
Abstract
The trade-off between speed and precision is a key consideration in the detection of small polyps in wireless capsule endoscopy (WCE) images. In this paper, we propose a hybrid network of an inception v4 architecture-based single-shot multibox detector (Hyb-SSDNet) to detect small polyp regions in both WCE and colonoscopy frames. Medical privacy concerns are considered the main barriers to WCE image acquisition. To satisfy the object detection requirements, we enlarged the training datasets and investigated deep transfer learning techniques. The Hyb-SSDNet framework adopts inception blocks to alleviate the inherent limitations of the convolution operation, incorporating contextual features and semantic information into deep networks. It consists of three main components: (a) multi-scale encoding of small polyp regions, (b) use of the inception v4 backbone to enhance contextual features in the shallow and middle layers, and (c) concatenation of weighted mid-level feature maps, giving them more weight so as to extract richer semantic information. The fused feature map is then delivered to the next layer, followed by downsampling blocks to generate new pyramidal layers. Finally, the feature maps are fed to multibox detectors, consistent with the VGG16-based SSD process. The Hyb-SSDNet achieved a 93.29% mean average precision (mAP) and a testing speed of 44.5 FPS on the WCE dataset. This work shows that deep learning has the potential to drive future research in polyp detection and classification tasks.
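The weighted concatenation of mid-level feature maps described above can be sketched as a normalized weighted sum; the two-map example and the weight values are illustrative, not taken from Hyb-SSDNet:

```python
import numpy as np

def weighted_fuse(feature_maps, weights, eps=1e-4):
    """Fuse same-shaped feature maps as a normalized weighted sum.

    Non-negative weights are normalized to (almost) sum to 1, so maps with
    larger weights contribute more to the fused output.
    """
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)
    w = w / (w.sum() + eps)
    stacked = np.stack(feature_maps, axis=0)   # (n, H, W, C)
    return np.tensordot(w, stacked, axes=1)    # (H, W, C)

# Two toy 4x4x8 feature maps with equal weights → element-wise average.
maps = [np.ones((4, 4, 8)), 3 * np.ones((4, 4, 8))]
fused = weighted_fuse(maps, [1.0, 1.0])
```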
Affiliation(s)
- Meryem Souaidi
- LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Mohamed El Ansari
- LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Informatics and Applications Laboratory, Computer Science Department, Faculty of Sciences, University of Moulay Ismail, Meknès 50070, Morocco
15
Li X, Zhou Y, Yin H, Wang Z, Zhuo L, Zhang H. Detecting Absence of Bone Wall in Jugular Bulb by Image Transformation Surrogate Tasks. IEEE Transactions on Medical Imaging 2022; 41:1358-1370. [PMID: 34971529 DOI: 10.1109/tmi.2021.3139917]
Abstract
Anomaly detection in medical images is important in computer-aided diagnosis. It is a challenging task due to limited anomaly data, sample imbalance, and local differences between normal and abnormal patterns. Abnormal manifestations in medical images have definite clinical definitions and descriptions, which can be introduced to improve detection accuracy. In this paper, we propose an anomaly detection method based on image transformation surrogate tasks and apply it to detect the absence of the bone wall of the jugular bulb in temporal bone CT images. First, we design a pair of contrastive surrogate tasks, including abnormal region completion and normal background erasure, to decouple the similarity of normal and abnormal examples. Then, image synthesis strategies for the surrogate tasks are designed, which alleviates the problem of limited abnormal data. Further, an abnormal scoring module is proposed, which fuses MSE, SSIM, and local error intensity computed from the results of the surrogate tasks. We verify the effectiveness of our proposed method on the jugular bulb data set, and experimental results show that the accuracy of our method is 0.995 and the AUC (Area Under the Curve) is 0.994.
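A hedged sketch of the abnormal-scoring idea above, fusing MSE, a structural-dissimilarity term, and a peak local error: the equal weights, the global (rather than windowed) SSIM, and the 8×8 patch size are simplifying assumptions of this sketch, not the paper's exact design:

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-window (global) SSIM; practical SSIM is usually windowed."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def anomaly_score(img, recon, weights=(1.0, 1.0, 1.0), patch=8):
    """Fuse MSE, 1-SSIM, and the worst patch-wise error (illustrative weights)."""
    mse = float(((img - recon) ** 2).mean())
    dssim = 1.0 - global_ssim(img, recon)
    err = (img - recon) ** 2
    h, w = err.shape
    local = max(err[i:i + patch, j:j + patch].mean()       # peak local error
                for i in range(0, h - patch + 1, patch)
                for j in range(0, w - patch + 1, patch))
    return weights[0] * mse + weights[1] * dssim + weights[2] * local

x = np.linspace(0.0, 1.0, 256).reshape(16, 16)
score_same = anomaly_score(x, x)        # identical reconstruction → ~0
score_diff = anomaly_score(x, x + 0.1)  # offset reconstruction → larger score
```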
16
Polyp Segmentation Network with Hybrid Channel-Spatial Attention and Pyramid Global Context Guided Feature Fusion. Comput Med Imaging Graph 2022; 98:102072. [DOI: 10.1016/j.compmedimag.2022.102072]
17
Ayidzoe MA, Yu Y, Mensah PK, Cai J, Baagyere EY, Bawah FU. SinoCaps: Recognition of colorectal polyps using sinogram capsule network. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-212168]
Abstract
Colorectal cancer is the third most diagnosed malignancy in the world. Polyps (either malignant or benign) are the primary cause of colorectal cancer. However, the diagnosis is susceptible to human error, less effective, and falls below recommended levels in routine clinical procedures. In this paper, a capsule network enhanced with radon transforms for feature extraction is proposed to improve the feasibility of colorectal cancer recognition. The contribution of this paper lies in the incorporation of radon transforms in the proposed model to improve the detection of polyps by performing efficient extraction of tomographic features. When trained and tested with the polyp dataset, the proposed model achieved an overall average recognition accuracy of 94.02%, an AUC of 97%, and an average precision of 96%. In addition, a post hoc analysis of the results exhibited feature extraction capabilities comparable to the state of the art and can contribute to the field of explainable artificial intelligence. The proposed method has considerable potential to be adopted in clinical trials to eliminate the problems associated with the human diagnosis of colorectal cancer.
Affiliation(s)
- Mighty Abra Ayidzoe
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, P.R. China
- Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
- Yongbin Yu
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, P.R. China
- Patrick Kwabena Mensah
- Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
- Jingye Cai
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, P.R. China
- Edward Yellakuor Baagyere
- Department of Computer Science, Faculty of Mathematical Sciences, CK Tedam University of Technology and Applied Sciences, Navrongo, Ghana
- Faiza Umar Bawah
- Department of Computer Science and Informatics, University of Energy and Natural Resources, Sunyani, Ghana
18
Wang X, Liu J, Liu X, Liu Z, Khalaf OI, Ji J, Ouyang Q. Ship feature recognition methods for deep learning in complex marine environments. Complex Intell Syst 2022. [DOI: 10.1007/s40747-022-00683-z]
Abstract
With the advancement of edge computing, computing power that was originally located in the center is deployed closer to the terminal, which directly accelerates the iteration speed of the "sensing-communication-decision-feedback" chain in complex marine environments, including ship avoidance. The increase in sensor equipment, such as cameras, has also accelerated the development of ship identification technology based on feature detection in the maritime field. Based on the SSD framework, this article proposes a deep learning model called DP-SSD. By adjusting the size of the detection frame, different feature parameters can be detected. In training and testing on real data, the model was compared with Faster R-CNN, SSD, and other classic algorithms. The proposed method was found to provide high-quality results in terms of calculation time, processed frame rate, and recognition accuracy. As an important part of future smart ships, this method has theoretical value and engineering relevance.
19
Ragab M, Eljaaly K, Sabir MF, Ashary EB, Abo-Dahab SM, Khalil EM. Optimized Deep Learning Model for Colorectal Cancer Detection and Classification Model. Computers, Materials & Continua 2022; 71:5751-5764. [DOI: 10.32604/cmc.2022.024658]
20
Li JW, Chia T, Fock KM, Chong KDW, Wong YJ, Ang TL. Artificial intelligence and polyp detection in colonoscopy: Use of a single neural network to achieve rapid polyp localization for clinical use. J Gastroenterol Hepatol 2021; 36:3298-3307. [PMID: 34327729 DOI: 10.1111/jgh.15642]
Abstract
BACKGROUND AND AIM Artificial intelligence has been extensively studied to assist clinicians in polyp detection, but such systems usually require extensive processing power, making them prohibitively expensive and hindering wide adoption. The current study used a fast object detection algorithm, known as the YOLOv3 algorithm, to achieve real-time polyp detection on a laptop. In addition, we evaluated and classified the causes of false detections to further improve accuracy. METHODS The YOLOv3 algorithm was trained and validated with 6038 and 2571 polyp images, respectively. Videos from live colonoscopies in a tertiary center and those obtained from public databases were used for the training and validation sets. The algorithm was tested on 10 unseen videos from the CVC-Video ClinicDB dataset. Only bounding boxes with an intersection over union (IoU) of > 0.3 were considered positive predictions. RESULTS The polyp detection rate in our study was 100%, with the algorithm able to detect every polyp in each video. Sensitivity, specificity, and F1 score were 74.1%, 85.1%, and 83.3%, respectively. The algorithm achieved a speed of 61.2 frames per second (fps) on a desktop RTX2070 GPU and 27.2 fps on a laptop GTX2060 GPU. Nearly a quarter of false negatives happened when the polyps were at the corner of an image. Image blurriness accounted for approximately 3% and 9% of false positive and false negative detections, respectively. CONCLUSION The YOLOv3 algorithm can achieve real-time polyp detection with high accuracy and speed on a desktop GPU, making it low cost and accessible to most endoscopy centers worldwide.
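The IoU > 0.3 positivity criterion used above reduces to a standard box-overlap computation; a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlapping rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union > 0 else 0.0

# A prediction counts as positive only when IoU with the ground truth > 0.3.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333 (overlap 50 / union 150)
```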
Affiliation(s)
- James Weiquan Li
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
- Kwong Ming Fock
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
- Yu Jun Wong
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore
- Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
- Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
21
Multi-Class Parrot Image Classification Including Subspecies with Similar Appearance. Biology 2021; 10:biology10111140. [PMID: 34827133 PMCID: PMC8615015 DOI: 10.3390/biology10111140]
Abstract
Simple Summary: Owing to climate change and human overdevelopment, the number of endangered species has been increasing. To face this challenge, the CITES treaty has been adopted by many countries worldwide to prevent the extinction of endangered plants and animals. Additionally, since customs clearance inspections for goods at airports and ports take a long time, and since such species are difficult for non-experts to distinguish, smugglers have been exploiting this vulnerability to illegally import or export endangered parrot species. If these cases continue to increase, the extinction of species with small populations may be accelerated by illegal trade. To tackle this problem, in this study, we constructed an object detection model using convolutional neural networks (CNNs) to classify 11 endangered species of parrots. Utilizing artificial intelligence techniques, the procedures for inspection of goods can be simplified and the customs clearance inspection systems at airports and ports can be enhanced, thus protecting endangered species.
Abstract: Owing to climate change and indiscriminate human development, the population of endangered species has been decreasing. To protect endangered species, many countries worldwide have adopted the CITES treaty to prevent the extinction of endangered plants and animals. Moreover, research has been conducted using diverse approaches, particularly deep learning-based animal and plant image recognition methods. In this paper, we propose an automated image classification method for 11 endangered parrot species included in CITES. The 11 species include subspecies that are very similar in appearance. Data images were collected from the Internet, and an indigenous database was built in cooperation with Seoul Grand Park Zoo. The dataset for deep learning training consisted of a 70% training set, 15% validation set, and 15% test set. In addition, a data augmentation technique was applied to reduce data collection limits and prevent overfitting. The performance of various backbone CNN architectures (i.e., VGGNet, ResNet, and DenseNet) was compared using the SSD model. The trained models were evaluated on the test set, and the results show that DenseNet18 had the best performance, with an mAP of approximately 96.6% and an inference time of 0.38 s.
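The 70/15/15 split described above can be sketched as a one-pass shuffle-and-cut; the seed and item count here are illustrative:

```python
import random

def split_dataset(items, train=0.70, val=0.15, seed=42):
    """Shuffle once with a fixed seed, then cut into train/validation/test."""
    items = list(items)
    random.Random(seed).shuffle(items)   # deterministic shuffle
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])     # remainder becomes the test set

train_set, val_set, test_set = split_dataset(range(1000))
```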
22
Chen BL, Wan JJ, Chen TY, Yu YT, Ji M. A self-attention based faster R-CNN for polyp detection from colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103019]
23
Automatic Polyp Segmentation in Colonoscopy Images Using a Modified Deep Convolutional Encoder-Decoder Architecture. Sensors 2021; 21:s21165630. [PMID: 34451072 PMCID: PMC8402594 DOI: 10.3390/s21165630]
Abstract
Colorectal cancer has become the third most commonly diagnosed form of cancer, and has the second highest fatality rate among cancers worldwide. Currently, optical colonoscopy is the tool of choice for the diagnosis of polyps and to avert colorectal cancer. Colon screening is time-consuming and highly operator dependent. In view of this, a computer-aided diagnosis (CAD) method needs to be developed for the automatic segmentation of polyps in colonoscopy images. This paper proposes a modified SegNet Visual Geometry Group-19 (VGG-19), a form of convolutional neural network, as a CAD method for polyp segmentation. The modifications include skip connections, 5 × 5 convolutional filters, and the concatenation of four dilated convolutions applied in parallel form. The CVC-ClinicDB, CVC-ColonDB, and ETIS-LaribPolypDB databases were used to evaluate the model, and it was found that our proposed polyp segmentation model achieved an accuracy, sensitivity, specificity, precision, mean intersection over union, and dice coefficient of 96.06%, 94.55%, 97.56%, 97.48%, 92.3%, and 95.99%, respectively. These results indicate that our model performs as well as or better than previous schemes in the literature. We believe that this study will offer benefits in terms of the future development of CAD tools for polyp segmentation for colorectal cancer diagnosis and management. In the future, we intend to embed our proposed network into a medical capsule robot for practical usage and try it in a hospital setting with clinicians.
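The "four dilated convolutions applied in parallel" idea can be illustrated in one dimension (the paper's are 2-D convolutions inside a SegNet-style decoder); the difference kernel and the dilation rates 1/2/4/8 here are illustrative assumptions:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with gaps of `dilation` between kernel taps."""
    span = (len(kernel) - 1) * dilation   # receptive field minus one
    return np.array([
        sum(kernel[k] * x[i + k * dilation] for k in range(len(kernel)))
        for i in range(len(x) - span)
    ])

x = np.arange(16, dtype=float)
kernel = np.array([1.0, -1.0])            # simple difference filter
# Parallel branches with growing dilation, then concatenation,
# mirroring the modified decoder described above but in 1-D for clarity.
branches = [dilated_conv1d(x, kernel, d) for d in (1, 2, 4, 8)]
fused = np.concatenate(branches)
```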
24
Liew WS, Tang TB, Lin CH, Lu CK. Automatic colonic polyp detection using integration of modified deep residual convolutional neural network and ensemble learning approaches. Computer Methods and Programs in Biomedicine 2021; 206:106114. [PMID: 33984661 DOI: 10.1016/j.cmpb.2021.106114]
Abstract
BACKGROUND AND OBJECTIVE The increased incidence of colorectal cancer (CRC) and its mortality rate have attracted interest in the use of artificial intelligence (AI) based computer-aided diagnosis (CAD) tools to detect polyps at an early stage. Although these CAD tools have thus far achieved good accuracy in detecting polyps, they still have room for further improvement (e.g. sensitivity). Therefore, a new CAD tool is developed in this study to detect colonic polyps accurately. METHODS In this paper, we propose a novel approach to distinguish colonic polyps by integrating several techniques, including a modified deep residual network, principal component analysis, and AdaBoost ensemble learning. A powerful deep residual network architecture, ResNet-50, was altered to reduce the computational time. To keep interference to a minimum, median filtering, image thresholding, contrast enhancement, and normalisation techniques were applied to the endoscopic images used to train the classification model. Three publicly available datasets, i.e., Kvasir, ETIS-LaribPolypDB, and CVC-ClinicDB, were merged to train the model, which included images with and without polyps. RESULTS The proposed approach trained with a combination of three datasets achieved a Matthews Correlation Coefficient (MCC) of 0.9819, with accuracy, sensitivity, precision, and specificity of 99.10%, 98.82%, 99.37%, and 99.38%, respectively. CONCLUSIONS These results show that our method could repeatedly classify endoscopic images automatically and could be used to effectively develop computer-aided diagnostic tools for early CRC detection.
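Since the headline metric above is MCC, a minimal sketch of computing it from confusion-matrix counts may help; the counts below are hypothetical, not the study's data:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts for illustration only (not the paper's data).
print(round(mcc(tp=990, tn=985, fp=10, fn=15), 4))  # → 0.975
```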
Affiliation(s)
- Win Sheng Liew
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
- Tong Boon Tang
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
- Cheng-Hung Lin
- Department of Electrical Engineering and Biomedical Engineering Research Center, Yuan Ze University, Jungli 32003, Taiwan
- Cheng-Kai Lu
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
25
Cao C, Wang R, Yu Y, Zhang H, Yu Y, Sun C. Gastric polyp detection in gastroscopic images using deep neural network. PLoS One 2021; 16:e0250632. [PMID: 33909671 PMCID: PMC8081222 DOI: 10.1371/journal.pone.0250632]
Abstract
This paper presents the research results of detecting gastric polyps with a deep learning object detection method in gastroscopic images. Gastric polyps vary in size, and small polyps are particularly difficult to distinguish from the background. We propose a feature extraction and fusion module and combine it with the YOLOv3 network to form our network. This method performs better than other methods in the detection of small polyps because it fuses the semantic information of high-level feature maps with low-level feature maps to aid small polyp detection. In this work, we use a self-built dataset of gastric polyps containing 1433 training images and 508 validation images, on which we train and validate our network. In comparison with other methods of polyp detection, our method shows a significant improvement in precision, recall rate, F1 score, and F2 score. The precision, recall rate, F1 score, and F2 score of our method reach 91.6%, 86.2%, 88.8%, and 87.2%, respectively.
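The reported F1 and F2 scores follow directly from the stated precision and recall via the F-beta formula; a short sketch reproducing them:

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score; beta > 1 weights recall more heavily than precision."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.916, 0.862                              # precision and recall from the abstract
print(round(100 * f_beta(p, r), 1))              # → 88.8 (the reported F1)
print(round(100 * f_beta(p, r, beta=2.0), 1))    # → 87.2 (the reported F2)
```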
Affiliation(s)
- Chanting Cao
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Ruilin Wang
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Yao Yu
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Hui Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Ying Yu
- Beijing An Zhen Hospital, Beijing, China
- Changyin Sun
- School of Automation, Southeast University, Nanjing, China
26
Liu X, Guo X, Liu Y, Yuan Y. Consolidated domain adaptive detection and localization framework for cross-device colonoscopic images. Med Image Anal 2021; 71:102052. [PMID: 33895616 DOI: 10.1016/j.media.2021.102052]
Abstract
Automatic polyp detection has been proven to be crucial in improving diagnosis accuracy and reducing colorectal cancer mortality during the precancerous stage. However, the performance of deep neural networks may degrade severely when deployed to polyp data from a distinct domain. This domain distinction can be caused by different scanners, hospitals, or imaging protocols. In this paper, we propose a consolidated domain adaptive detection and localization framework to bridge the domain gap between different colonoscopic datasets effectively, consisting of two parts: pixel-level adaptation and hierarchical feature-level adaptation. For the pixel-level adaptation part, we propose a Gaussian Fourier Domain Adaptation (GFDA) method to sample matched source and target image pairs from Gaussian distributions and then unify their styles via low-level spectrum replacement, which reduces the domain discrepancy of cross-device polyp datasets at the appearance level without distorting their contents. The hierarchical feature-level adaptation part comprises a Hierarchical Attentive Adaptation (HAA) module to minimize the domain discrepancy in high-level semantics and an Iconic Concentrative Adaptation (ICA) module to perform reliable instance alignment. These two modules are regularized by a Generalized Consistency Regularizer (GCR) to maintain the consistency of their domain predictions. We further extend our framework to the polyp localization task and present a Centre Besiegement (CB) loss for better location optimization. Experimental results show that our framework outperforms other domain adaptation detectors by a large margin in the detection task, while achieving a state-of-the-art recall rate of 87.5% in the localization task. The source code is available at https://github.com/CityU-AIM-Group/ConsolidatedPolypDA.
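The low-level spectrum replacement in GFDA echoes the Fourier domain adaptation idea: swap the low-frequency amplitude of a source image with the target's while keeping the source phase. A minimal single-channel sketch; the band fraction `beta` is an assumption, and the paper's Gaussian pair-sampling step is omitted:

```python
import numpy as np

def low_freq_amplitude_swap(src, tgt, beta=0.1):
    """Replace the centered low-frequency amplitude of `src` with `tgt`'s.

    Keeps the source phase (content) while adopting the target's coarse
    appearance statistics; `beta` controls the swapped square's size.
    """
    fs = np.fft.fftshift(np.fft.fft2(src))
    ft = np.fft.fftshift(np.fft.fft2(tgt))
    amp_s, phase_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = src.shape
    bh, bw = max(1, int(h * beta)), max(1, int(w * beta))
    cy, cx = h // 2, w // 2
    amp_s[cy - bh:cy + bh, cx - bw:cx + bw] = amp_t[cy - bh:cy + bh, cx - bw:cx + bw]
    out = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * phase_s)))
    return np.real(out)

rng = np.random.default_rng(0)
src = rng.random((32, 32))
tgt = rng.random((32, 32))
adapted = low_freq_amplitude_swap(src, tgt)
```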
Affiliation(s)
- Xinyu Liu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Xiaoqing Guo
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Yajie Liu
- Department of Radiation Oncology, Peking University Shenzhen Hospital, Shenzhen, China
- Yixuan Yuan
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
27
Abdurahman F, Fante KA, Aliy M. Malaria parasite detection in thick blood smear microscopic images using modified YOLOV3 and YOLOV4 models. BMC Bioinformatics 2021; 22:112. [PMID: 33685401 PMCID: PMC7938584 DOI: 10.1186/s12859-021-04036-4]
Abstract
BACKGROUND Manual microscopic examination of Leishman/Giemsa-stained thin and thick blood smears is still the "gold standard" for malaria diagnosis. One of the drawbacks of this method is that its accuracy, consistency, and diagnosis speed depend on microscopists' diagnostic and technical skills. It is difficult to get highly skilled microscopists in remote areas of developing countries. To alleviate this problem, in this paper, we propose to investigate state-of-the-art one-stage and two-stage object detection algorithms for automated malaria parasite screening from microscopic images of thick blood slides. RESULTS YOLOV3 and YOLOV4 models, which are state-of-the-art object detectors in accuracy and speed, are not optimized for detecting small objects such as malaria parasites in microscopic images. We modify these models by increasing the feature scale and adding more detection layers to enhance their capability of detecting small objects without notably decreasing detection speed. We propose one modified YOLOV4 model, called YOLOV4-MOD, and two modified YOLOV3 models, called YOLOV3-MOD1 and YOLOV3-MOD2. Besides, new anchor box sizes are generated using the K-means clustering algorithm to exploit the potential of these models in small object detection. The performance of the modified YOLOV3 and YOLOV4 models was evaluated on a publicly available malaria dataset. These models achieved state-of-the-art accuracy, exceeding the performance of their original versions, Faster R-CNN, and SSD in terms of mean average precision (mAP), recall, precision, F1 score, and average IoU. YOLOV4-MOD achieved the best detection accuracy among all the models, with an mAP of 96.32%. YOLOV3-MOD2 and YOLOV3-MOD1 achieved mAPs of 96.14% and 95.46%, respectively.
CONCLUSIONS The experimental results of this study demonstrate that the performance of the modified YOLOV3 and YOLOV4 models is highly promising for detecting malaria parasites from images captured by a smartphone camera over the microscope eyepiece. The proposed system is suitable for deployment in low-resource settings.
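The K-means anchor generation mentioned above is commonly done with a 1 − IoU distance over box widths and heights (as in YOLO practice); a small sketch on synthetic boxes, not the malaria data:

```python
import numpy as np

def anchor_iou(boxes, centroids):
    """IoU between (w, h) pairs, assuming boxes share a common corner."""
    inter = (np.minimum(boxes[:, None, 0], centroids[None, :, 0])
             * np.minimum(boxes[:, None, 1], centroids[None, :, 1]))
    area_b = boxes[:, 0] * boxes[:, 1]
    area_c = centroids[:, 0] * centroids[:, 1]
    return inter / (area_b[:, None] + area_c[None, :] - inter)

def kmeans_anchors(boxes, k=3, iters=50, seed=0):
    """K-means with 1 - IoU distance over box (w, h), YOLO-style."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(anchor_iou(boxes, centroids), axis=1)  # min 1-IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids

# Synthetic boxes in two obvious size clusters (illustrative only).
boxes = np.array([[10, 12], [11, 11], [9, 10], [50, 48], [52, 51], [49, 50]], float)
anchors = kmeans_anchors(boxes, k=2)
```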
Affiliation(s)
- Fetulhak Abdurahman
- Faculty of Electrical and Computer Engineering, Jimma Institute of Technology, Jimma University, 378, Jimma, Ethiopia
- Kinde Anlay Fante
- Faculty of Electrical and Computer Engineering, Jimma Institute of Technology, Jimma University, 378, Jimma, Ethiopia
- Mohammed Aliy
- School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, 378, Jimma, Ethiopia
28
Qadir HA, Shin Y, Solhusvik J, Bergsland J, Aabakken L, Balasingham I. Toward real-time polyp detection using fully CNNs for 2D Gaussian shapes prediction. Med Image Anal 2020; 68:101897. [PMID: 33260111 DOI: 10.1016/j.media.2020.101897] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Revised: 10/26/2020] [Accepted: 10/28/2020] [Indexed: 12/24/2022]
Abstract
To decrease the colon polyp miss-rate during colonoscopy, a real-time detection system with high accuracy is needed. Recently, there have been many efforts to develop models for real-time polyp detection, but work is still required to develop real-time detection algorithms with reliable results. We use single-shot feed-forward fully convolutional neural networks (F-CNN) to develop an accurate real-time polyp detection system. F-CNNs are usually trained on binary masks for object segmentation. We propose the use of 2D Gaussian masks instead of binary masks to enable these models to detect different types of polyps more effectively and efficiently and to reduce the number of false positives. The experimental results showed that the proposed 2D Gaussian masks are efficient for detecting flat and small polyps with unclear boundaries between background and polyp. The masks provide a better training signal for discriminating polyps from polyp-like false positives. The proposed method achieved state-of-the-art results on two polyp datasets: on ETIS-LARIB, 86.54% recall, 86.12% precision, and 86.33% F1-score; on CVC-ColonDB, 91% recall, 88.35% precision, and 89.65% F1-score.
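The core idea of replacing binary segmentation masks with 2D Gaussian targets can be illustrated with a minimal sketch. The `sigma_scale` parameter and the box-derived spread below are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def gaussian_mask(shape, cx, cy, w, h, sigma_scale=0.25):
    """Render a 2D Gaussian training target centred on a polyp bounding box.

    shape: (H, W) of the output mask; (cx, cy): box centre in pixels;
    (w, h): box width and height. sigma_scale sets the Gaussian spread as a
    fraction of the box size -- a tunable assumption, not the paper's value.
    """
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    sx = max(w * sigma_scale, 1e-6)
    sy = max(h * sigma_scale, 1e-6)
    # Peak of 1.0 at the box centre, decaying smoothly toward the boundary,
    # unlike a hard 0/1 binary mask.
    return np.exp(-((xs - cx) ** 2 / (2 * sx ** 2) +
                    (ys - cy) ** 2 / (2 * sy ** 2)))
```

Training an F-CNN to regress such soft targets rewards confident predictions near the polyp centre while tolerating ambiguity at the ill-defined boundary, which is the intuition behind the reported reduction in false positives.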
Affiliation(s)
- Hemin Ali Qadir
- Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway; OmniVision Technologies Norway AS, Oslo, Norway.
- Younghak Shin
- Department of Computer Engineering, Mokpo National University, Mokpo, Korea.
- Lars Aabakken
- Department of Transplantation Medicine, University of Oslo, Oslo, Norway.
- Ilangko Balasingham
- Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway.
29
Pacal I, Karaboga D, Basturk A, Akay B, Nalbantoglu U. A comprehensive review of deep learning in colon cancer. Comput Biol Med 2020; 126:104003. [DOI: 10.1016/j.compbiomed.2020.104003] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 08/28/2020] [Accepted: 08/28/2020] [Indexed: 12/17/2022]
30
Hwang M, Wang D, Kong XX, Wang Z, Li J, Jiang WC, Hwang KS, Ding K. An automated detection system for colonoscopy images using a dual encoder-decoder model. Comput Med Imaging Graph 2020; 84:101763. [PMID: 32805673 DOI: 10.1016/j.compmedimag.2020.101763] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2019] [Revised: 07/10/2020] [Accepted: 07/10/2020] [Indexed: 02/07/2023]
Abstract
Conventional computer-aided detection systems (CADs) for colonoscopic images utilize shape, texture, or temporal information to detect polyps, so they have limited sensitivity and specificity. This study proposes a method to extract possible polyp features automatically using convolutional neural networks (CNNs). The objective of this work is to build a lightweight dual encoder-decoder model for polyp detection in colonoscopy images that, despite its relatively shallow structure, performs comparably to methods with much deeper structures. The proposed CAD model consists of two sequential encoder-decoder networks composed of several CNN layers and fully connected layers. The front end of the model is a hetero-associator (also known as a hetero-encoder) that uses backpropagation learning to generate a set of corrupted labeled images with a certain degree of similarity to the ground truth image, which eliminates the need for the large amount of training data usually required for medical imaging tasks. This dual CNN architecture generates a set of noisy images similar to the labeled data to train its counterpart, the auto-associator (also known as an auto-encoder), in order to increase the latter's discriminative power in classification. The auto-encoder is also equipped with CNNs to simultaneously capture the features of the labeled images that contain noise. The proposed method uses features learned from open medical datasets and the Zhejiang University (ZJU) dataset, which contains around one thousand images. The performance of the proposed architecture is compared with a state-of-the-art detection model in terms of the Jaccard index, the DICE similarity score, and two other geometric measures. The improvements in performance are attributed to the effective reduction of false positives by the auto-encoder and the generation of noisy candidate images by the hetero-encoder.
Affiliation(s)
- Maxwell Hwang
- Department of Colorectal Surgery, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China; Cancer Institute (Key Laboratory of Cancer Prevention and Intervention, China National Ministry of Education), Key Laboratory of Molecular Biology in Medical Sciences, Zhejiang Province, China; The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Da Wang
- Department of Colorectal Surgery, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China; Cancer Institute (Key Laboratory of Cancer Prevention and Intervention, China National Ministry of Education), Key Laboratory of Molecular Biology in Medical Sciences, Zhejiang Province, China; The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xiang-Xing Kong
- Department of Colorectal Surgery, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China; Cancer Institute (Key Laboratory of Cancer Prevention and Intervention, China National Ministry of Education), Key Laboratory of Molecular Biology in Medical Sciences, Zhejiang Province, China; The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhanhuai Wang
- Department of Colorectal Surgery, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China; Cancer Institute (Key Laboratory of Cancer Prevention and Intervention, China National Ministry of Education), Key Laboratory of Molecular Biology in Medical Sciences, Zhejiang Province, China; The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Jun Li
- Department of Colorectal Surgery, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China; Cancer Institute (Key Laboratory of Cancer Prevention and Intervention, China National Ministry of Education), Key Laboratory of Molecular Biology in Medical Sciences, Zhejiang Province, China; The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Wei-Cheng Jiang
- Department of Electrical Engineering, Tunghai University, Taichung, Taiwan, China
- Kao-Shing Hwang
- Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan, China
- Kefeng Ding
- Department of Colorectal Surgery, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China; Cancer Institute (Key Laboratory of Cancer Prevention and Intervention, China National Ministry of Education), Key Laboratory of Molecular Biology in Medical Sciences, Zhejiang Province, China; The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.
31
Guo X, Yuan Y. Semi-supervised WCE image classification with adaptive aggregated attention. Med Image Anal 2020; 64:101733. [PMID: 32574987 DOI: 10.1016/j.media.2020.101733] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Revised: 04/01/2020] [Accepted: 05/22/2020] [Indexed: 02/08/2023]
Abstract
Accurate abnormality classification in Wireless Capsule Endoscopy (WCE) images is crucial for early diagnosis and treatment of gastrointestinal (GI) tract cancer, but it remains challenging due to the limited annotated data, large intra-class variance, and high inter-class similarity. To tackle these difficulties, we propose a novel semi-supervised learning method with an Adaptive Aggregated Attention (AAA) module for automatic WCE image classification. First, a novel deformation-field-based image preprocessing strategy is proposed to remove the black background and circular boundaries in WCE images. Then we propose a synergic network to learn discriminative image features, consisting of two branches: an abnormal region estimator (the first branch) and an abnormal information distiller (the second branch). The first branch utilizes the proposed AAA module to capture global dependencies and incorporate context information to highlight the most meaningful regions, while the second branch focuses on these attention regions for accurate and robust abnormality classification. Finally, the two branches are jointly optimized by minimizing the proposed discriminative angular (DA) loss and Jensen-Shannon divergence (JS) loss on labeled as well as unlabeled data. Comprehensive experiments have been conducted on the public CAD-CAP WCE dataset. The proposed method achieves 93.17% overall accuracy in fourfold cross-validation, verifying its effectiveness for WCE image classification. The source code is available at https://github.com/Guo-Xiaoqing/SSL_WCE.
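The Jensen-Shannon divergence used above as a consistency term between the two branches' predictions is straightforward to compute. A minimal NumPy sketch follows; the smoothing constant `eps` is an assumption for numerical stability, not a value from the paper:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    Symmetric and bounded above by log(2); serves here as a consistency
    loss between the class predictions of two network branches."""
    p = np.asarray(p, dtype=float) + eps  # smooth to avoid log(0)
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()       # renormalize after smoothing
    m = 0.5 * (p + q)                     # mixture distribution
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Because it is symmetric and finite even for non-overlapping supports, the JS divergence is a common choice for aligning the softmax outputs of two branches on unlabeled data.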
Affiliation(s)
- Xiaoqing Guo
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Yixuan Yuan
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China.
32
Sánchez-Montes C, Bernal J, García-Rodríguez A, Córdova H, Fernández-Esparrach G. Review of computational methods for the detection and classification of polyps in colonoscopy imaging. GASTROENTEROLOGIA Y HEPATOLOGIA 2020; 43:222-232. [PMID: 32143918 DOI: 10.1016/j.gastrohep.2019.11.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2019] [Accepted: 11/24/2019] [Indexed: 02/06/2023]
Abstract
Computer-aided diagnosis (CAD) is a tool with great potential to help endoscopists in the tasks of detecting and histologically classifying colorectal polyps. In recent years, different technologies have been described and their potential utility has been increasingly evidenced, generating great expectations among scientific societies. However, most of these works are retrospective and use images of varying quality and characteristics that are analysed offline. This review aims to familiarise gastroenterologists with computational methods and with the particularities of endoscopic imaging that have an impact on image-processing analysis. Finally, it presents the publicly available image databases needed to compare and confirm the results obtained with different methods.
Affiliation(s)
- Cristina Sánchez-Montes
- Unidad de Endoscopia Digestiva, Hospital Universitari i Politècnic La Fe, Grupo de Investigación de Endoscopia Digestiva, IIS La Fe, Valencia, España
- Jorge Bernal
- Centro de Visión por Computador, Departamento de Ciencias de la Computación, Universidad Autónoma de Barcelona, Barcelona, España
- Ana García-Rodríguez
- Unidad de Endoscopia, Servicio de Gastroenterología, Hospital Clínic, IDIBAPS, CIBEREHD, Universidad de Barcelona, Barcelona, España
- Henry Córdova
- Unidad de Endoscopia, Servicio de Gastroenterología, Hospital Clínic, IDIBAPS, CIBEREHD, Universidad de Barcelona, Barcelona, España; IDIBAPS, CIBEREHD, Barcelona, España
- Gloria Fernández-Esparrach
- Unidad de Endoscopia, Servicio de Gastroenterología, Hospital Clínic, IDIBAPS, CIBEREHD, Universidad de Barcelona, Barcelona, España; IDIBAPS, CIBEREHD, Barcelona, España.