151. Luca M, Ciobanu A, Barbu T, Drug V. Artificial Intelligence and Deep Learning, Important Tools in Assisting Gastroenterologists. Intelligent Systems Reference Library 2022:197-213. [DOI: 10.1007/978-3-030-79161-2_8]

152. Strümke I, Hicks SA, Thambawita V, Jha D, Parasa S, Riegler MA, Halvorsen P. Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_163]

153. Dougherty KE, Melkonian VJ, Montenegro GA. Artificial intelligence in polyp detection - where are we and where are we headed? Artif Intell Gastrointest Endosc 2021; 2:211-219. [DOI: 10.37126/aige.v2.i6.211]
Abstract
The goal of artificial intelligence in colonoscopy is to improve the adenoma detection rate and reduce interval colorectal cancer. Artificial intelligence for polyp detection during colonoscopy has evolved tremendously over the last decade, mainly owing to the implementation of neural networks. Computer-aided detection (CADe) systems based on neural networks allow real-time detection of polyps and adenomas. Current CADe systems are built at single centers by multidisciplinary teams and have been used only in limited clinical research studies. Here we review the most recent prospective randomized controlled trials. These trials, both blinded and non-blinded, demonstrated increased adenoma and polyp detection rates when endoscopists used CADe systems versus standard high-definition colonoscopes. The additional polyps and adenomas detected were mainly small and sessile. CADe systems were found to be safe and added little time to the overall procedure. The results are promising, as CADe systems have been shown to increase accuracy and improve the quality of colonoscopy. Overall limitations included selection bias (each trial built and used a different CADe system developed at its own institution), non-blinded arms, and questions of external validity.
Affiliation(s)
- Kristen E Dougherty, Department of Surgery, St. Louis University Hospital, Saint Louis, MO 63110, United States
- Vatche J Melkonian, Department of Surgery, St. Louis University Hospital, Saint Louis, MO 63110, United States
- Grace A Montenegro, Department of Surgery, St. Louis University Hospital, Saint Louis, MO 63110, United States

154. Medical Image Classification Based on Information Interaction Perception Mechanism. Computational Intelligence and Neuroscience 2021; 2021:8429899. [PMID: 34912447] [PMCID: PMC8668365] [DOI: 10.1155/2021/8429899]
Abstract
Colorectal cancer originates from adenomatous polyps. These polyps start out benign, but over time they can become malignant, spread to adherent and surrounding organs such as the lymph nodes, liver, or lungs, and eventually lead to complications and death. Factors such as an operator's lack of experience and visual fatigue directly affect the diagnostic accuracy of colonoscopy. To relieve the pressure on medical imaging personnel, this paper proposes a network model for colonic polyp detection using colonoscopy images. Considering the inconspicuous surface texture of colonic polyps, we designed a channel information interaction perception (CIIP) module and, based on it, propose an information interaction perception network (IIP-Net). To improve classification accuracy and reduce computational cost, the network was evaluated with three classifier heads: a fully connected (FC) structure, a global average pooling plus fully connected (GAP-FC) structure, and a convolution plus global average pooling (C-GAP) structure. We evaluated the performance of IIP-Net on randomly selected colonoscopy images from a gastroscopy database. The experimental results showed that the IIP-NET54-GAP-FC variant achieved an overall accuracy of 99.59% and an accuracy of 99.40% for colonic polyps, performing extremely well.
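The three classifier heads compared above differ mainly in how many parameters sit between the final feature map and the class scores. As a hedged illustration only (not the authors' code; the feature-map size and class count below are hypothetical), this sketch shows why a global-average-pooling (GAP) head is far cheaper than flattening the feature map into a fully connected (FC) layer:

```python
# Sketch: parameter counts of an FC head vs. a GAP-FC head on the same
# feature map. Dimensions are illustrative, not taken from the paper.

def fc_head_params(h, w, c, n_classes):
    # Flatten the H*W*C activations straight into an FC layer.
    return h * w * c * n_classes + n_classes  # weights + biases

def gap_fc_head_params(c, n_classes):
    # Global average pooling first reduces H*W*C to a C-vector,
    # so the FC layer only sees C inputs.
    return c * n_classes + n_classes

h, w, c, n_classes = 7, 7, 512, 2          # hypothetical sizes
print(fc_head_params(h, w, c, n_classes))  # 50178
print(gap_fc_head_params(c, n_classes))    # 1026
```

Pooling the H×W spatial grid down to one value per channel removes the H·W factor from the weight matrix, which is the main reason GAP-based heads reduce the cost of the classifier.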

155. Pfeifer L, Neufert C, Leppkes M, Waldner MJ, Häfner M, Beyer A, Hoffman A, Siersema PD, Neurath MF, Rath T. Computer-aided detection of colorectal polyps using a newly generated deep convolutional neural network: from development to first clinical experience. Eur J Gastroenterol Hepatol 2021; 33:e662-e669. [PMID: 34034272] [PMCID: PMC8734627] [DOI: 10.1097/meg.0000000000002209]
Abstract
AIM The use of artificial intelligence represents an objective approach to increasing endoscopists' adenoma detection rate (ADR) and limiting interoperator variability. In this study, we evaluated a newly developed deep convolutional neural network (DCNN) for automated detection of colorectal polyps ex vivo as well as in a first in-human trial. METHODS For training of the DCNN, 116,529 colonoscopy images from 278 patients with 788 different polyps were collected. A subset of 10,467 images containing 504 different polyps were manually annotated and treated as the gold standard. An independent set of 45 videos consisting of 15,534 single frames was used for ex vivo performance testing. In vivo real-time detection of colorectal polyps during routine colonoscopy by the DCNN was tested in 42 patients in a back-to-back approach. RESULTS When analyzing the test set of 15,534 single frames, the DCNN's sensitivity and specificity for polyp detection and localization within the frame were 90% and 80%, respectively, with an area under the curve of 0.92. In vivo, the baseline polyp detection rate and ADR were 38% and 26% and significantly increased to 50% (P = 0.023) and 36% (P = 0.044), respectively, with the use of the DCNN. Of the 13 additional lesions detected with the DCNN, the majority were diminutive and flat; among them were three sessile serrated adenomas. CONCLUSION This newly developed DCNN enables highly sensitive automated detection of colorectal polyps both ex vivo and during first in-human clinical testing, and could potentially increase the detection of colorectal polyps during colonoscopy.
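The frame-level figures quoted above (90% sensitivity, 80% specificity) follow from the standard confusion-matrix definitions. A minimal sketch with invented frame counts (not the study's data):

```python
# Sketch of the two frame-level metrics; counts are hypothetical.

def sensitivity(tp, fn):
    # Fraction of polyp-containing frames the model flags.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of polyp-free frames the model leaves unflagged.
    return tn / (tn + fp)

tp, fn, tn, fp = 900, 100, 800, 200  # hypothetical frame counts
print(sensitivity(tp, fn))  # 0.9
print(specificity(tn, fp))  # 0.8
```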
Affiliation(s)
- Lukas Pfeifer, Department of Internal Medicine 1, Division of Gastroenterology, Ludwig Demling Endoscopy Center of Excellence, Friedrich-Alexander-University, Erlangen-Nuernberg, Germany
- Clemens Neufert, Department of Internal Medicine 1, Division of Gastroenterology, Ludwig Demling Endoscopy Center of Excellence, Friedrich-Alexander-University, Erlangen-Nuernberg, Germany
- Moritz Leppkes, Department of Internal Medicine 1, Division of Gastroenterology, Ludwig Demling Endoscopy Center of Excellence, Friedrich-Alexander-University, Erlangen-Nuernberg, Germany
- Maximilian J. Waldner, Department of Internal Medicine 1, Division of Gastroenterology, Ludwig Demling Endoscopy Center of Excellence, Friedrich-Alexander-University, Erlangen-Nuernberg, Germany
- Michael Häfner, Department of Gastroenterology, Physiopathology and Endoscopy of the Gastrointestinal Tract, Central Hospital Bolzano, Bolzano, Italy
- Arthur Hoffman, Department of Internal Medicine 3, Division of Gastroenterology, Klinikum Aschaffenburg, Aschaffenburg, Germany
- Peter D. Siersema, Department of Gastroenterology and Hepatology, Radboud University Medical Center, Nijmegen, The Netherlands
- Markus F. Neurath, Department of Internal Medicine 1, Division of Gastroenterology, Ludwig Demling Endoscopy Center of Excellence, Friedrich-Alexander-University, Erlangen-Nuernberg, Germany
- Timo Rath, Department of Internal Medicine 1, Division of Gastroenterology, Ludwig Demling Endoscopy Center of Excellence, Friedrich-Alexander-University, Erlangen-Nuernberg, Germany

156. Viscaino M, Torres Bustos J, Muñoz P, Auat Cheein C, Cheein FA. Artificial intelligence for the early detection of colorectal cancer: A comprehensive review of its advantages and misconceptions. World J Gastroenterol 2021; 27:6399-6414. [PMID: 34720530] [PMCID: PMC8517786] [DOI: 10.3748/wjg.v27.i38.6399]
Abstract
Colorectal cancer (CRC) ranked second worldwide in cancer mortality during 2020, with a crude mortality rate of 12.0 per 100,000 inhabitants. It can be prevented if adenomatous polyps are detected early. Colonoscopy is strongly recommended as a screening test for both early cancer and adenomatous polyps. However, it has limitations, including a high miss rate for small (< 10 mm) or flat polyps, which are easily overlooked during visual inspection. With the rapid advancement of technology, artificial intelligence (AI) has become a thriving area in many fields, including medicine. In gastroenterology in particular, AI software has been incorporated into computer-aided systems for diagnosis and for improving the reliability of automatic polyp detection and classification as a preventive measure against CRC. This article provides an overview of recent research on AI tools and their applications in the early detection of CRC and adenomatous polyps, as well as an insightful analysis of the main advantages and misconceptions in the field.
Affiliation(s)
- Michelle Viscaino, Department of Electronic Engineering, Universidad Tecnica Federico Santa Maria, Valparaiso 2340000, Chile
- Javier Torres Bustos, Department of Electronic Engineering, Universidad Tecnica Federico Santa Maria, Valparaiso 2340000, Chile
- Pablo Muñoz, Hospital Clinico, University of Chile, Santiago 8380456, Chile
- Cecilia Auat Cheein, Facultad de Medicina, Universidad Nacional de Santiago del Estero, Santiago del Estero 4200, Argentina
- Fernando Auat Cheein, Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaiso 2340000, Chile

157. Tajbakhsh N, Roth H, Terzopoulos D, Liang J. Guest Editorial Annotation-Efficient Deep Learning: The Holy Grail of Medical Imaging. IEEE Trans Med Imaging 2021; 40:2526-2533. [PMID: 34795461] [PMCID: PMC8594751] [DOI: 10.1109/tmi.2021.3089292]
Affiliation(s)
- Demetri Terzopoulos, University of California, Los Angeles, and VoxelCloud, Inc., Los Angeles, CA, USA

158. Yeung M, Sala E, Schönlieb CB, Rundo L. Focus U-Net: A novel dual attention-gated CNN for polyp segmentation during colonoscopy. Comput Biol Med 2021; 137:104815. [PMID: 34507156] [PMCID: PMC8505797] [DOI: 10.1016/j.compbiomed.2021.104815]
Abstract
BACKGROUND Colonoscopy remains the gold-standard screening for colorectal cancer. However, significant miss rates for polyps have been reported, particularly when there are multiple small adenomas. This presents an opportunity to leverage computer-aided systems to support clinicians and reduce the number of polyps missed. METHOD In this work, we introduce the Focus U-Net, a novel dual attention-gated deep neural network, which combines efficient spatial and channel-based attention into a single Focus Gate module to encourage selective learning of polyp features. The Focus U-Net incorporates several further architectural modifications, including the addition of short-range skip connections and deep supervision. Furthermore, we introduce the Hybrid Focal loss, a new compound loss function based on the Focal loss and Focal Tversky loss, designed to handle class-imbalanced image segmentation. For our experiments, we selected five public datasets containing images of polyps obtained during optical colonoscopy: CVC-ClinicDB, Kvasir-SEG, CVC-ColonDB, ETIS-Larib PolypDB and the EndoScene test set. We first perform a series of ablation studies and then evaluate the Focus U-Net on the CVC-ClinicDB and Kvasir-SEG datasets separately, and on a combined dataset of all five public datasets. To evaluate model performance, we use the Dice similarity coefficient (DSC) and Intersection over Union (IoU) metrics. RESULTS Our model achieves state-of-the-art results for both CVC-ClinicDB and Kvasir-SEG, with a mean DSC of 0.941 and 0.910, respectively. When evaluated on a combination of the five public polyp datasets, our model similarly achieves state-of-the-art results, with a mean DSC of 0.878 and mean IoU of 0.809, a 14% and 15% improvement over the previous state-of-the-art results of 0.768 and 0.702, respectively. CONCLUSIONS This study shows the potential for deep learning to provide fast and accurate polyp segmentation results for use during colonoscopy. The Focus U-Net may be adapted for future use in newer non-invasive colorectal cancer screening, and more broadly to other biomedical image segmentation tasks similarly involving class imbalance and requiring efficiency.
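The two evaluation metrics named above, the Dice similarity coefficient (DSC) and Intersection over Union (IoU), can be computed directly from binary masks. A minimal sketch on toy flattened masks (not the paper's code):

```python
# Sketch of DSC and IoU on flat binary masks; masks are toy examples.

def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
print(dice(pred, truth))  # 4/6 ≈ 0.667
print(iou(pred, truth))   # 2/4 = 0.5
```

For binary masks the two metrics are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why segmentation papers often report both.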
Affiliation(s)
- Michael Yeung, Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; School of Clinical Medicine, University of Cambridge, Cambridge, CB2 0SP, United Kingdom
- Evis Sala, Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, United Kingdom
- Carola-Bibiane Schönlieb, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, CB3 0WA, United Kingdom
- Leonardo Rundo, Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, United Kingdom

159. Tang Y, Anandasabapathy S, Richards-Kortum R. Advances in optical gastrointestinal endoscopy: a technical review. Mol Oncol 2021; 15:2580-2599. [PMID: 32915503] [PMCID: PMC8486567] [DOI: 10.1002/1878-0261.12792]
Abstract
Optical endoscopy is the primary diagnostic and therapeutic tool for management of gastrointestinal (GI) malignancies. Most GI neoplasms arise from precancerous lesions; thus, technical innovations to improve detection and diagnosis of precancerous lesions and early cancers play a pivotal role in improving outcomes. Over the last few decades, the field of GI endoscopy has witnessed enormous and focused efforts to develop and translate accurate, user-friendly, and minimally invasive optical imaging modalities. From a technical point of view, a wide range of novel optical techniques is now available to probe different aspects of light-tissue interaction at macroscopic and microscopic scales, complementing white light endoscopy. Most of these new modalities have been successfully validated and translated to routine clinical practice. Herein, we provide a technical review of the current status of existing and promising new optical endoscopic imaging technologies for GI cancer screening and surveillance. We summarize the underlying principles of light-tissue interaction, the imaging performance at different scales, and highlight what is known about clinical applicability and effectiveness. Furthermore, we discuss recent discovery and translation of novel molecular probes that have shown promise to augment endoscopists' ability to diagnose GI lesions with high specificity. We also review and discuss the role and potential clinical integration of artificial intelligence-based algorithms to provide decision support in real time. Finally, we provide perspectives on future technology development and its potential to transform endoscopic GI cancer detection and diagnosis.
Affiliation(s)
- Yubo Tang, Department of Bioengineering, Rice University, Houston, TX, USA

160. Nogueira-Rodríguez A, Domínguez-Carbajales R, Campos-Tato F, Herrero J, Puga M, Remedios D, Rivas L, Sánchez E, Iglesias Á, Cubiella J, Fdez-Riverola F, López-Fernández H, Reboiro-Jato M, Glez-Peña D. Real-time polyp detection model using convolutional neural networks. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06496-4]
Abstract
Colorectal cancer is a major health problem, and advances in computer-aided diagnosis (CAD) systems to assist the endoscopist are a promising path to improvement. Here, we report a deep learning model for real-time polyp detection based on a pre-trained YOLOv3 (You Only Look Once) architecture, complemented with a post-processing step based on an object-tracking algorithm to reduce false positives. The base YOLOv3 network was fine-tuned using a dataset of 28,576 images labelled with the locations of 941 polyps, which will be made public soon. In a frame-based evaluation using isolated images containing polyps, an overall F1 score of 0.88 was achieved (recall = 0.87, precision = 0.89), with lower predictive performance for flat polyps but higher performance for sessile and pedunculated morphologies, as well as with the use of narrow-band imaging, whereas polyp size < 5 mm did not seem to have a significant impact. In a polyp-based evaluation using polyp and normal-mucosa videos, with a positive criterion defined as the presence of at least one 50-frame segment (window size) in which at least 75% of frames contain predicted bounding boxes (frame positivity), sensitivity of 72.61% (95% CI 68.99-75.95) and specificity of 83.04% (95% CI 76.70-87.92) were achieved (Youden index = 0.55, diagnostic odds ratio (DOR) = 12.98). With a less stringent positive criterion (window size = 25, frame positivity = 50%), sensitivity reaches around 90% (sensitivity = 89.91%, 95% CI 87.20-91.94; specificity = 54.97%, 95% CI 47.49-62.24; Youden = 0.45; DOR = 10.76). The object-tracking algorithm demonstrated a significant improvement in specificity while maintaining sensitivity, with only a marginal impact on computational performance. These results suggest that the model could be effectively integrated into a CAD system.
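The summary statistics quoted above are simple functions of precision/recall and sensitivity/specificity. The sketch below recomputes them from the rounded published values (so small rounding differences against the paper are expected); it is an illustration, not the authors' evaluation code:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

def youden(sens, spec):
    # Youden's J statistic: sensitivity + specificity - 1.
    return sens + spec - 1

def diagnostic_odds_ratio(sens, spec):
    # DOR = odds of a positive test in diseased vs. non-diseased.
    return (sens / (1 - sens)) / ((1 - spec) / spec)

print(f1(0.89, 0.87))                        # ≈ 0.88, as reported
print(youden(0.7261, 0.8304))                # ≈ 0.56 from rounded inputs (paper: 0.55)
print(diagnostic_odds_ratio(0.7261, 0.8304)) # ≈ 12.98, as reported
```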

161. An optimal feature selection method for histopathology tissue image classification using adaptive Jaya algorithm. Evolutionary Intelligence 2021. [DOI: 10.1007/s12065-019-00205-w]

162. Chen BL, Wan JJ, Chen TY, Yu YT, Ji M. A self-attention based faster R-CNN for polyp detection from colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103019]

163. Yoo BS, D'Souza SM, Houston K, Patel A, Lau J, Elmahdi A, Parekh PJ, Johnson D. Artificial intelligence and colonoscopy - enhancements and improvements. Artif Intell Gastrointest Endosc 2021; 2:157-167. [DOI: 10.37126/aige.v2.i4.157]

164. Nazarian S, Glover B, Ashrafian H, Darzi A, Teare J. Diagnostic Accuracy of Artificial Intelligence and Computer-Aided Diagnosis for the Detection and Characterization of Colorectal Polyps: Systematic Review and Meta-analysis. J Med Internet Res 2021; 23:e27370. [PMID: 34259645] [PMCID: PMC8319784] [DOI: 10.2196/27370]
Abstract
BACKGROUND Colonoscopy reduces the incidence of colorectal cancer (CRC) by allowing detection and resection of neoplastic polyps. Evidence shows that many small polyps are missed on a single colonoscopy. Artificial intelligence (AI) technologies have been successfully adopted to tackle the problem of missed polyps and to increase the adenoma detection rate (ADR). OBJECTIVE The aim of this review was to examine the diagnostic accuracy of AI-based technologies in assessing colorectal polyps. METHODS A comprehensive literature search was undertaken using the Embase, MEDLINE, and Cochrane Library databases. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed. Studies reporting the use of computer-aided diagnosis for polyp detection or characterization during colonoscopy were included. Independent proportions and their differences were calculated and pooled through DerSimonian and Laird random-effects modeling. RESULTS A total of 48 studies were included. The meta-analysis showed a significant increase in the pooled polyp detection rate in patients with the use of AI for polyp detection during colonoscopy compared with patients who had standard colonoscopy (odds ratio [OR] 1.75, 95% CI 1.56-1.96; P<.001). When comparing patients undergoing colonoscopy with AI to those without, there was also a significant increase in ADR (OR 1.53, 95% CI 1.32-1.77; P<.001). CONCLUSIONS With the aid of machine learning, there is potential to improve ADR and, consequently, reduce the incidence of CRC. The current generation of AI-based systems demonstrates impressive accuracy for the detection and characterization of colorectal polyps. However, this is an evolving field, and before its adoption into a clinical setting, AI systems must prove their worth to patients and clinicians.
TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42020169786; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020169786.
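The Methods name DerSimonian and Laird random-effects pooling. As a hedged sketch of that estimator (the three studies below are invented for illustration; this is not the review's code):

```python
import math

# DerSimonian-Laird random-effects pooling of log odds ratios.

def dersimonian_laird(log_ors, variances):
    w = [1 / v for v in variances]                    # fixed-effect weights
    sw = sum(w)
    mean_fe = sum(wi * y for wi, y in zip(w, log_ors)) / sw
    q = sum(wi * (y - mean_fe) ** 2 for wi, y in zip(w, log_ors))
    df = len(log_ors) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se),
                              math.exp(pooled + 1.96 * se))

log_ors = [math.log(1.6), math.log(1.9), math.log(1.5)]   # hypothetical studies
variances = [0.02, 0.05, 0.04]
or_pooled, ci = dersimonian_laird(log_ors, variances)
print(or_pooled, ci)  # pooled OR with its 95% CI
```

When the heterogeneity statistic Q does not exceed its degrees of freedom, the between-study variance estimate is truncated at zero and the result collapses to the fixed-effect pooled estimate.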
Affiliation(s)
- Scarlet Nazarian, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Ben Glover, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Hutan Ashrafian, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Ara Darzi, Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Julian Teare, Department of Surgery and Cancer, Imperial College London, London, United Kingdom

165. Rahim T, Hassan SA, Shin SY. A deep convolutional neural network for the detection of polyps in colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102654]

166. Mankoo R, Ali AH, Hammoud GM. Use of artificial intelligence in endoscopic ultrasound evaluation of pancreatic pathologies. Artif Intell Gastrointest Endosc 2021; 2:89-94. [DOI: 10.37126/aige.v2.i3.89]

167. Mankoo R, Ali AH, Hammoud GM. Use of artificial intelligence in endoscopic ultrasound evaluation of pancreatic pathologies. Artif Intell Gastrointest Endosc 2021; 2:88-93. [DOI: 10.37126/aige.v2.i3.88]

168. Yalchin M, Baker AM, Graham TA, Hart A. Predicting Colorectal Cancer Occurrence in IBD. Cancers (Basel) 2021; 13:2908. [PMID: 34200768] [PMCID: PMC8230430] [DOI: 10.3390/cancers13122908]
Abstract
Patients with colonic inflammatory bowel disease (IBD) are at an increased risk of developing colorectal cancer (CRC) and are therefore enrolled into a surveillance programme aimed at detecting dysplasia or early cancer. Current surveillance programmes are guided by clinical, endoscopic or histological predictors of colitis-associated CRC (CA-CRC). There has been great progress in our understanding of these predictors of disease progression, and advances in endoscopic technique and management, along with improved medical care, have been mirrored by the falling incidence of CA-CRC over the last 50 years. However, more could be done to improve our molecular understanding of CA-CRC progression and enable better risk stratification for patients with IBD. This review summarises the known risk factors associated with CA-CRC and explores the molecular landscape that has the potential to complement and optimise the existing IBD surveillance programme.
Affiliation(s)
- Mehmet Yalchin, Inflammatory Bowel Disease Department, St. Mark’s Hospital, Watford Rd., Harrow HA1 3UJ, UK; Centre for Genomics and Computational Biology, Barts Cancer Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, Charterhouse Sq., London EC1M 6BQ, UK
- Ann-Marie Baker, Centre for Genomics and Computational Biology, Barts Cancer Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, Charterhouse Sq., London EC1M 6BQ, UK
- Trevor A. Graham, Centre for Genomics and Computational Biology, Barts Cancer Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, Charterhouse Sq., London EC1M 6BQ, UK
- Ailsa Hart, Inflammatory Bowel Disease Department, St. Mark’s Hospital, Watford Rd., Harrow HA1 3UJ, UK

169. Jha D, Smedsrud PH, Johansen D, de Lange T, Johansen HD, Halvorsen P, Riegler MA. A Comprehensive Study on Colorectal Polyp Segmentation With ResUNet++, Conditional Random Field and Test-Time Augmentation. IEEE J Biomed Health Inform 2021; 25:2029-2040. [PMID: 33400658] [DOI: 10.1109/jbhi.2021.3049304]
Abstract
Colonoscopy is considered the gold standard for detection of colorectal cancer and its precursors. Existing examination methods are, however, hampered by a high overall miss rate, and many abnormalities are left undetected. Computer-aided diagnosis systems based on advanced machine learning algorithms are touted as a game-changer that can identify regions of the colon overlooked by physicians during endoscopic examinations, and help detect and characterize lesions. In previous work, we proposed the ResUNet++ architecture and demonstrated that it produces more efficient results than its counterparts U-Net and ResUNet. In this paper, we demonstrate that further improvements to the overall prediction performance of the ResUNet++ architecture can be achieved by using a Conditional Random Field (CRF) and Test-Time Augmentation (TTA). We performed extensive evaluations and validated the improvements using six publicly available datasets: Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, ETIS-Larib Polyp DB, the ASU-Mayo Clinic Colonoscopy Video Database, and CVC-VideoClinicDB. Moreover, we compare our proposed architecture and resulting model with other state-of-the-art methods. To explore the generalization capability of ResUNet++ on different publicly available polyp datasets, so that it could be used in a real-world setting, we performed an extensive cross-dataset evaluation. The experimental results show that applying CRF and TTA improves performance on various polyp segmentation datasets in both same-dataset and cross-dataset evaluations. To check the model's performance on difficult-to-detect polyps, we selected, with the help of an expert gastroenterologist, 196 sessile or flat polyps less than ten millimeters in size. This additional data has been made available as a subset of Kvasir-SEG. Our approaches showed good results for flat, sessile, and smaller polyps, which are known to be among the major reasons for high polyp miss rates. This is one of the significant strengths of our work and indicates that our methods should be investigated further for use in clinical practice.
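Test-time augmentation (TTA), one of the two additions evaluated above, averages predictions over augmented copies of the input. A minimal horizontal-flip sketch with a stand-in model (not the ResUNet++ implementation):

```python
# Horizontal-flip TTA for a 2D segmentation model: predict on the image and
# on its mirrored copy, un-mirror the second prediction, then average.

def hflip(mask):
    return [row[::-1] for row in mask]

def predict_with_tta(model, image):
    p = model(image)
    p_flip = hflip(model(hflip(image)))  # flip back before averaging
    return [[(a + b) / 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(p, p_flip)]

# Toy "model": returns the image itself as a probability map.
identity_model = lambda img: img
img = [[0.0, 1.0], [0.5, 0.25]]
print(predict_with_tta(identity_model, img))  # equals the input for this toy model
```

Real pipelines typically average over several transforms (flips, rotations), which smooths predictions at a proportional cost in inference time.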

170. Smedsrud PH, Thambawita V, Hicks SA, Gjestang H, Nedrejord OO, Næss E, Borgli H, Jha D, Berstad TJD, Eskeland SL, Lux M, Espeland H, Petlund A, Nguyen DTD, Garcia-Ceja E, Johansen D, Schmidt PT, Toth E, Hammer HL, de Lange T, Riegler MA, Halvorsen P. Kvasir-Capsule, a video capsule endoscopy dataset. Sci Data 2021; 8:142. [PMID: 34045470] [PMCID: PMC8160146] [DOI: 10.1038/s41597-021-00920-z]
Abstract
Artificial intelligence (AI) is predicted to have profound effects on the future of video capsule endoscopy (VCE) technology. The potential lies in improving anomaly detection while reducing manual labour. Existing work demonstrates the promising benefits of AI-based computer-assisted diagnosis systems for VCE, and shows great potential for further improvement. However, medical data are often sparse and unavailable to the research community, and qualified medical personnel rarely have time for the tedious labelling work. We present Kvasir-Capsule, a large VCE dataset collected from examinations at a Norwegian hospital. Kvasir-Capsule consists of 117 videos from which a total of 4,741,504 image frames can be extracted. We have labelled and medically verified 47,238 frames with a bounding box around findings from 14 different classes. In addition to these labelled images, the dataset includes 4,694,266 unlabelled frames. The Kvasir-Capsule dataset can play a valuable role in developing better algorithms to reach the true potential of VCE technology.
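The frame counts above are internally consistent: the labelled and unlabelled frames sum to the dataset total. A one-line sanity check:

```python
# Kvasir-Capsule frame counts as stated in the abstract.
labelled, unlabelled, total = 47_238, 4_694_266, 4_741_504
assert labelled + unlabelled == total
print(total - labelled)  # 4694266 unlabelled frames
```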
Affiliation(s)
- Pia H Smedsrud: SimulaMet, Oslo, Norway; University of Oslo, Oslo, Norway; Augere Medical AS, Oslo, Norway
- Steven A Hicks: SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Espen Næss: SimulaMet, Oslo, Norway; University of Oslo, Oslo, Norway
- Hanna Borgli: SimulaMet, Oslo, Norway; University of Oslo, Oslo, Norway
- Debesh Jha: SimulaMet, Oslo, Norway; UIT The Arctic University of Norway, Tromsø, Norway
- Dag Johansen: UIT The Arctic University of Norway, Tromsø, Norway
- Peter T Schmidt: Karolinska Institutet, Department of Medicine, Solna, Sweden; Ersta Hospital, Department of Medicine, Stockholm, Sweden
- Ervin Toth: Department of Gastroenterology, Skåne University Hospital, Malmö, Lund University, Malmö, Sweden
- Hugo L Hammer: SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Thomas de Lange: Department of Medical Research, Bærum Hospital, Gjettum, Norway; Augere Medical AS, Oslo, Norway; Medical Department, Sahlgrenska University Hospital-Mölndal Hospital, Göteborg, Sweden; Department of Molecular and Clinical Medicine, Sahlgrenska Academy, University of Gothenburg, Göteborg, Sweden
- Pål Halvorsen: SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
|
171
|
Kim GH, Sung ES, Nam KW. Automated laryngeal mass detection algorithm for home-based self-screening test based on convolutional neural network. Biomed Eng Online 2021; 20:51. [PMID: 34034766 PMCID: PMC8144695 DOI: 10.1186/s12938-021-00886-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 05/11/2021] [Indexed: 01/10/2023] Open
Abstract
BACKGROUND Early detection of laryngeal masses without periodic hospital visits is essential for improving the likelihood of full recovery and long-term survival after prompt treatment, as well as for reducing the risk of clinical infection. RESULTS We first propose a convolutional neural network model for automated laryngeal mass detection based on diagnostic images captured at hospitals. We then propose a pilot system, composed of an embedded controller, a camera module, and an LCD display, that can be utilized for a home-based self-screening test. The experimental results indicated a final validation loss of 0.9152 and an F1-score of 0.8371 before post-processing. Additionally, the F1-score of the original computer algorithm on 100 randomly selected color-printed test images was 0.8534 after post-processing, while that of the embedded pilot system was 0.7672. CONCLUSIONS The proposed technique is expected to increase the rate of early detection of laryngeal masses without the risk of spreading clinical infection, which could improve convenience and ensure the safety of individuals, patients, and medical staff.
Affiliation(s)
- Gun Ho Kim: Interdisciplinary Program in Biomedical Engineering, School of Medicine, Pusan National University, Busan, South Korea
- Eui-Suk Sung: Department of Otolaryngology-Head and Neck Surgery, Pusan National University Yangsan Hospital, Yangsan, South Korea; Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan, South Korea
- Kyoung Won Nam: Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan, South Korea; Department of Biomedical Engineering, Pusan National University Yangsan Hospital, Yangsan, South Korea; Department of Biomedical Engineering, School of Medicine, Pusan National University, 49 Busandaehak-ro, Mulgeum-eup, Yangsan, Gyeongsangnam-do, 50629, South Korea
|
172
|
Kim KO, Kim EY. Application of Artificial Intelligence in the Detection and Characterization of Colorectal Neoplasm. Gut Liver 2021; 15:346-353. [PMID: 32773386 PMCID: PMC8129657 DOI: 10.5009/gnl20186] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 06/28/2020] [Indexed: 12/19/2022] Open
Abstract
Endoscopists have always pursued the perfect colonoscopy, and the application of artificial intelligence (AI) using deep-learning algorithms is one of the promising supportive options for the detection and characterization of colorectal polyps during colonoscopy. Many retrospective studies conducted with real-time application of AI using convolutional neural networks have shown improved colorectal polyp detection. Moreover, a recent randomized clinical trial reported additional polyp detection with shorter analysis time. Studies on polyp characterization have provided additional promising results. Application of AI with narrow band imaging for real-time prediction of the pathology of diminutive polyps resulted in high diagnostic accuracy. In addition, application of AI with endocytoscopy or confocal laser endomicroscopy has been investigated for real-time cellular diagnosis, and the diagnostic accuracy in some studies was comparable to that of pathologists. With AI technology, we can expect a higher polyp detection rate with reduced time and cost by avoiding unnecessary procedures, resulting in enhanced colonoscopy efficiency. However, for AI application in daily clinical practice, more prospective studies with minimized selection bias, consensus on standardized utilization, and regulatory approval are needed. (Gut Liver 2021;15:346-353)
Affiliation(s)
- Kyeong Ok Kim: Division of Gastroenterology and Hepatology, Department of Internal Medicine, Yeungnam University College of Medicine, Daegu, Korea
- Eun Young Kim: Division of Gastroenterology and Hepatology, Department of Internal Medicine, Daegu Catholic University School of Medicine, Daegu, Korea
|
173
|
Patel K, Bur AM, Wang G. Enhanced U-Net: A Feature Enhancement Network for Polyp Segmentation. PROCEEDINGS OF THE INTERNATIONAL ROBOTS & VISION CONFERENCE. INTERNATIONAL ROBOTS & VISION CONFERENCE 2021; 2021:181-188. [PMID: 34368816 PMCID: PMC8341462 DOI: 10.1109/crv52889.2021.00032] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Colonoscopy is a procedure to detect colorectal polyps, which are the primary cause of colorectal cancer. However, polyp segmentation is a challenging task due to the diverse shape, size, color, and texture of polyps, the subtle difference between a polyp and its background, and the low contrast of colonoscopic images. To address these challenges, we propose a feature enhancement network for accurate polyp segmentation in colonoscopy images. Specifically, the proposed network enhances semantic information using the novel Semantic Feature Enhance Module (SFEM). Furthermore, instead of directly adding encoder features to the respective decoder layer, we introduce an Adaptive Global Context Module (AGCM), which focuses only on the encoder's significant and hard fine-grained features. The integration of these two modules improves the quality of the features layer by layer, which in turn enhances the final feature representation. The proposed approach is evaluated on five colonoscopy datasets and demonstrates superior performance compared to other state-of-the-art models.
Affiliation(s)
- Krushi Patel: Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS, USA, 66045
- Andrés M Bur: Department of Otolaryngology-Head and Neck Surgery, University of Kansas, Kansas City, Kansas, USA, 66160
- Guanghui Wang: Department of Computer Science, Ryerson University, Toronto, ON, Canada, M5B 2K3
|
174
|
Yen HH, Wu PY, Su PY, Yang CW, Chen YY, Chen MF, Lin WC, Tsai CL, Lin KP. Performance Comparison of the Deep Learning and the Human Endoscopist for Bleeding Peptic Ulcer Disease. J Med Biol Eng 2021. [DOI: 10.1007/s40846-021-00608-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Purpose
Management of peptic ulcer bleeding is clinically challenging. Accurate characterization of the bleeding during endoscopy is key for endoscopic therapy. This study aimed to assess whether a deep learning model can aid in the classification of bleeding peptic ulcer disease.
Methods
Endoscopic still images of patients (n = 1694) with peptic ulcer bleeding for the last 5 years were retrieved and reviewed. Overall, 2289 images were collected for deep learning model training, and 449 images were validated for the performance test. Two expert endoscopists classified the images into different classes based on their appearance. Four deep learning models, including Mobile Net V2, VGG16, Inception V4, and ResNet50, were proposed and pre-trained by ImageNet with the established convolutional neural network algorithm. A comparison of the endoscopists and trained deep learning model was performed to evaluate the model’s performance on a dataset of 449 testing images.
Results
Among the four proposed deep learning models, MobileNet V2 presented the optimal performance and was chosen for further comparison with the diagnostic results obtained by one senior and one novice endoscopist. The sensitivity and specificity were acceptable for the prediction of “normal” lesions in both the 3-class and 4-class classifications: 94.83% and 92.36%, respectively, for the 3-class category, and 95.40% and 92.70%, respectively, for the 4-class category. The interobserver agreement between the model and the senior endoscopist on the testing dataset was moderate to substantial. The deep learning model determined the need for endoscopic therapy and high-risk endoscopic therapy more accurately than the novice endoscopist.
Conclusions
In this study, the deep learning model performed better than inexperienced endoscopists. Further improvement of the model may aid clinical decision-making in practice, especially for trainee endoscopists.
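Interobserver agreement of the kind reported above is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch on toy labels (not the study's data):

```python
import numpy as np

def cohens_kappa(labels_a, labels_b, n_classes):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    n = len(labels_a)
    # confusion matrix between the two raters (rows: rater A, cols: rater B)
    cm = np.zeros((n_classes, n_classes))
    for a, b in zip(labels_a, labels_b):
        cm[a, b] += 1
    po = np.trace(cm) / n                       # observed agreement
    pe = np.sum(cm.sum(0) * cm.sum(1)) / n**2   # agreement expected by chance
    return (po - pe) / (1 - pe)

# toy example: two raters labelling 8 images into 3 classes
a = [0, 1, 2, 0, 1, 2, 0, 0]
b = [0, 1, 2, 0, 1, 1, 0, 2]
kappa = cohens_kappa(a, b, 3)
```

By the usual Landis-Koch convention, kappa in 0.41-0.60 is "moderate" and 0.61-0.80 "substantial", which is the vocabulary the abstract uses.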
|
175
|
Cao C, Wang R, Yu Y, Zhang H, Yu Y, Sun C. Gastric polyp detection in gastroscopic images using deep neural network. PLoS One 2021; 16:e0250632. [PMID: 33909671 PMCID: PMC8081222 DOI: 10.1371/journal.pone.0250632] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Accepted: 04/08/2021] [Indexed: 12/26/2022] Open
Abstract
This paper presents the results of detecting gastric polyps in gastroscopic images with a deep learning object detection method. Gastric polyps vary in size, and the difficulty of polyp detection is that small polyps are hard to distinguish from the background. We propose a feature extraction and fusion module and combine it with the YOLOv3 network to form our network. This method performs better than other methods in the detection of small polyps because it fuses the semantic information of high-level feature maps with low-level feature maps to aid small-polyp detection. In this work, we use a dataset of gastric polyps created by ourselves, containing 1433 training images and 508 validation images, on which we train and validate our network. In comparison with other methods of polyp detection, our method shows a significant improvement in precision, recall rate, F1 score, and F2 score, reaching 91.6%, 86.2%, 88.8%, and 87.2%, respectively.
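The reported F1 and F2 scores follow from the stated precision and recall via the standard F-beta formula; a quick check with the values from the abstract:

```python
# F-beta score: F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
def f_beta(precision, recall, beta):
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

p, r = 0.916, 0.862        # precision and recall reported in the abstract
f1 = f_beta(p, r, 1.0)     # harmonic mean of P and R
f2 = f_beta(p, r, 2.0)     # weights recall more heavily than precision
```

Rounding to three decimals reproduces the reported 88.8% and 87.2%, confirming the four figures are internally consistent.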
Affiliation(s)
- Chanting Cao: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Ruilin Wang: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Yao Yu: Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
- Hui Zhang: Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Ying Yu: Beijing An Zhen Hospital, Beijing, China
- Changyin Sun: School of Automation, Southeast University, Nanjing, China
|
176
|
Ozyoruk KB, Gokceler GI, Bobrow TL, Coskun G, Incetan K, Almalioglu Y, Mahmood F, Curto E, Perdigoto L, Oliveira M, Sahin H, Araujo H, Alexandrino H, Durr NJ, Gilbert HB, Turan M. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Med Image Anal 2021; 71:102058. [PMID: 33930829 DOI: 10.1016/j.media.2021.102058] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 01/23/2021] [Accepted: 03/29/2021] [Indexed: 02/07/2023]
Abstract
Deep learning techniques hold promise for developing dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, synthetically generated data, as well as a recording of a phantom colon made with a conventional endoscope in clinical use, with computed tomography (CT) scan ground truth. A Panda robotic arm, two commercially available capsule endoscopes, three conventional endoscopes with different camera properties, two high-precision 3D scanners, and a CT scanner were employed to collect data from eight ex-vivo porcine gastrointestinal (GI)-tract organs and a silicone colon phantom model. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex-vivo part: 18 sub-datasets for the colon, 12 for the stomach, and 5 for the small intestine, while four of these contain polyp-mimicking elevations introduced by an expert gastroenterologist. To verify the applicability of this data for use with real clinical systems, we recorded a video sequence with a state-of-the-art colonoscope from a full-representation silicone colon phantom. Synthetic capsule endoscopy frames from the stomach, colon, and small intestine with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module in order to direct the network to focus on distinguishable and highly textured tissue regions. The proposed approach makes use of a brightness-aware photometric loss to improve robustness under the fast frame-to-frame illumination changes commonly seen in endoscopic videos. 
To exemplify the use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state-of-the-art: SC-SfMLearner, Monodepth2, and SfMLearner. The codes and the link for the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible as Supplementary Video 1.
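One way to make a photometric loss brightness-aware is to normalize each frame's intensity statistics before comparing. The sketch below illustrates that idea only; it is an assumption-laden simplification, not the paper's exact Endo-SfMLearner formulation.

```python
import numpy as np

def brightness_aware_photometric_loss(frame_a, frame_b):
    """L1 photometric loss after per-frame brightness normalization.

    Subtracting each frame's mean and dividing by its standard deviation
    makes the comparison robust to global illumination changes between
    consecutive frames (illustrative simplification).
    """
    def normalize(f):
        return (f - f.mean()) / (f.std() + 1e-8)
    return np.abs(normalize(frame_a) - normalize(frame_b)).mean()

a = np.random.rand(8, 8)
b = a * 1.7 + 0.2   # same content under a global illumination change
loss = brightness_aware_photometric_loss(a, b)
```

Because `b` differs from `a` only by a global affine brightness change, the normalized loss is essentially zero, whereas a plain L1 loss between the raw frames would be large.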
Affiliation(s)
- Taylor L Bobrow: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Gulfize Coskun: Institute of Biomedical Engineering, Bogazici University, Turkey
- Kagan Incetan: Institute of Biomedical Engineering, Bogazici University, Turkey
- Faisal Mahmood: Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Data Science, Dana Farber Cancer Institute, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Eva Curto: Institute for Systems and Robotics, University of Coimbra, Portugal
- Luis Perdigoto: Institute for Systems and Robotics, University of Coimbra, Portugal
- Marina Oliveira: Institute for Systems and Robotics, University of Coimbra, Portugal
- Hasan Sahin: Institute of Biomedical Engineering, Bogazici University, Turkey
- Helder Araujo: Institute for Systems and Robotics, University of Coimbra, Portugal
- Henrique Alexandrino: Faculty of Medicine, Clinical Academic Center of Coimbra, University of Coimbra, Coimbra, Portugal
- Nicholas J Durr: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Hunter B Gilbert: Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, LA, USA
- Mehmet Turan: Institute of Biomedical Engineering, Bogazici University, Turkey
|
177
|
Development of a computer-aided detection system for colonoscopy and a publicly accessible large colonoscopy video database (with video). Gastrointest Endosc 2021; 93:960-967.e3. [PMID: 32745531 DOI: 10.1016/j.gie.2020.07.060] [Citation(s) in RCA: 87] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 07/25/2020] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND AIMS Artificial intelligence (AI)-assisted polyp detection systems for colonoscopic use are currently attracting attention because they may reduce the possibility of missed adenomas. However, few systems have the necessary regulatory approval for use in clinical practice. We aimed to develop an AI-assisted polyp detection system and to validate its performance using a large colonoscopy video database designed to be publicly accessible. METHODS To develop the deep learning-based AI system, 56,668 independent colonoscopy images were obtained from 5 centers for use as training images. To validate the trained AI system, consecutive colonoscopy videos taken at a university hospital between October 2018 and January 2019 were searched to construct a database containing polyps with unbiased variance. All images were annotated by endoscopists according to the presence or absence of polyps and the polyps' locations with bounding boxes. RESULTS A total of 1405 videos acquired during the study period were identified for the validation database, 797 of which contained at least 1 polyp. Of these, 100 videos containing 100 independent polyps and 13 videos negative for polyps were randomly extracted, resulting in 152,560 frames (49,799 positive frames and 102,761 negative frames) for the database. The AI showed 90.5% sensitivity and 93.7% specificity for frame-based analysis. The per-polyp sensitivities for all, diminutive, protruded, and flat polyps were 98.0%, 98.3%, 98.5%, and 97.0%, respectively. CONCLUSIONS Our trained AI system was validated with a new large publicly accessible colonoscopy database and could identify colorectal lesions with high sensitivity and specificity. (Clinical trial registration number: UMIN 000037064.)
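Frame-based sensitivity and specificity follow directly from per-frame confusion counts. A sketch using counts consistent with the rates reported above (the exact true/false positive splits are illustrative, derived here by rounding):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# illustrative counts: ~90.5% of the 49,799 positive frames detected,
# ~93.7% of the 102,761 negative frames correctly rejected
tp = round(0.905 * 49799)
fn = 49799 - tp
tn = round(0.937 * 102761)
fp = 102761 - tn
sens, spec = sensitivity_specificity(tp, fn, tn, fp)
```

Note that frame-based and per-polyp sensitivity differ: a polyp counts as detected in the per-polyp analysis even if it is only flagged in a fraction of its frames, which is why the per-polyp figures (98.0% overall) exceed the frame-based 90.5%.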
|
178
|
Real-time automatic polyp detection in colonoscopy using feature enhancement module and spatiotemporal similarity correlation unit. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102503] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
179
|
A-DenseUNet: Adaptive Densely Connected UNet for Polyp Segmentation in Colonoscopy Images with Atrous Convolution. SENSORS 2021; 21:s21041441. [PMID: 33669539 PMCID: PMC7922083 DOI: 10.3390/s21041441] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 02/14/2021] [Accepted: 02/17/2021] [Indexed: 01/05/2023]
Abstract
Colon carcinoma is one of the leading causes of cancer-related death in both men and women. Automatic colorectal polyp segmentation and detection in colonoscopy videos help endoscopists to identify colorectal disease more easily, making it a promising method to prevent colon cancer. In this study, we developed a fully automated pixel-wise polyp segmentation model named A-DenseUNet. The proposed architecture adapts different datasets, adjusting for the unknown depth of the network by sharing multiscale encoding information to the different levels of the decoder side. We also used multiple dilated convolutions with various atrous rates to observe a large field of view without increasing the computational cost and prevent loss of spatial information, which would cause dimensionality reduction. We utilized an attention mechanism to remove noise and inappropriate information, leading to the comprehensive re-establishment of contextual features. Our experiments demonstrated that the proposed architecture achieved significant segmentation results on public datasets. A-DenseUNet achieved a 90% Dice coefficient score on the Kvasir-SEG dataset and a 91% Dice coefficient score on the CVC-612 dataset, both of which were higher than the scores of other deep learning models such as UNet++, ResUNet, U-Net, PraNet, and ResUNet++ for segmenting polyps in colonoscopy images.
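The Dice coefficient used above to score segmentation models is straightforward to compute for binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# toy 2x3 predicted and ground-truth masks
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(pred, target)
```

Here the masks overlap in 2 pixels out of 3 foreground pixels each, giving a Dice score of 2/3; the 90-91% scores reported for A-DenseUNet are averages of this quantity over a test set.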
|
180
|
Application of Artificial Intelligence in Gastrointestinal Endoscopy. J Clin Gastroenterol 2021; 55:110-120. [PMID: 32925304 DOI: 10.1097/mcg.0000000000001423] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Accepted: 08/07/2020] [Indexed: 12/24/2022]
Abstract
Artificial intelligence (AI), also known as computer-aided diagnosis, is a technology that enables machines to process information and function at or above human level, and it has great potential in gastrointestinal endoscopy applications. At present, research on medical image recognition usually adopts deep-learning algorithms based on convolutional neural networks. AI has been used in gastrointestinal endoscopy including esophagogastroduodenoscopy, capsule endoscopy, colonoscopy, etc. AI can help endoscopic physicians improve the diagnosis rate of various lesions, reduce the rate of missed diagnoses, improve the quality of endoscopy, assess the severity of disease, and improve the efficiency of endoscopy. The diversity, susceptibility, and imaging specificity of gastrointestinal endoscopic images are all difficulties and challenges on the road to intelligence. We need more large-scale, high-quality, multicenter prospective studies to explore the clinical applicability of AI, and ethical issues need to be taken into account.
|
181
|
Comparison of deep learning and conventional machine learning methods for classification of colon polyp types. EUROBIOTECH JOURNAL 2021. [DOI: 10.2478/ebtj-2021-0006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Determination of polyp types requires tissue biopsy during colonoscopy and then histopathological examination of the microscopic images, which is tremendously time-consuming and costly. The first aim of this study was to design a computer-aided diagnosis system to classify polyp types using colonoscopy images (optical biopsy) without the need for tissue biopsy. For this purpose, two different approaches were designed based on conventional machine learning (ML) and deep learning. First, classification was performed using a random forest approach with features obtained from the histogram of oriented gradients descriptor. Second, a simple convolutional neural network (CNN) based architecture was built and trained with colonoscopy images containing colon polyps. The performance of these approaches on two-category (adenoma & serrated vs. hyperplastic) and three-category (adenoma vs. hyperplastic vs. serrated) classifications was investigated. Furthermore, the effect of imaging modality on classification was also examined using white-light and narrow-band imaging systems. The performance of these approaches was compared with the results obtained by 3 novice and 4 expert doctors. Two-category classification results showed that the conventional ML approach performed significantly better than the simple CNN based approach in both narrow-band and white-light imaging modalities. The accuracy reached almost 95% for white-light imaging, surpassing the correct classification rate of all 7 doctors. Additionally, the three-category results indicated that the simple CNN architecture outperformed both the conventional ML based approach and the doctors. This study shows the feasibility of using conventional machine learning or deep learning based approaches for the automatic classification of colon polyp types on colonoscopy images.
|
182
|
Chandrasekaran AC, Fu Z, Kraniski R, Wilson FP, Teaw S, Cheng M, Wang A, Ren S, Omar IM, Hinchcliff ME. Computer vision applied to dual-energy computed tomography images for precise calcinosis cutis quantification in patients with systemic sclerosis. Arthritis Res Ther 2021; 23:6. [PMID: 33407814 PMCID: PMC7788847 DOI: 10.1186/s13075-020-02392-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2020] [Accepted: 12/09/2020] [Indexed: 01/12/2023] Open
Abstract
Background Although treatments have been proposed for calcinosis cutis (CC) in patients with systemic sclerosis (SSc), a standardized and validated method for CC burden quantification is necessary to enable valid clinical trials. We tested the hypothesis that computer vision applied to dual-energy computed tomography (DECT) finger images is a useful approach for precise and accurate CC quantification in SSc patients. Methods De-identified 2-dimensional (2D) DECT images from SSc patients with clinically evident lesser finger CC lesions were obtained. An expert musculoskeletal radiologist confirmed accurate manual segmentation (subtraction) of the phalanges for each image as a gold standard, and a U-Net Convolutional Neural Network (CNN) computer vision model for segmentation of healthy phalanges was developed and tested. A validation study was performed in an independent dataset whereby two independent radiologists manually measured the longest length and perpendicular short axis of each lesion and then calculated an estimated area by assuming the lesion was elliptical using the formula long axis/2 × short axis/2 × π, and a computer scientist used a region growing technique to calculate the area of CC lesions. Spearman’s correlation coefficient, Lin’s concordance correlation coefficient with 95% confidence intervals (CI), and a Bland-Altman plot (Stata V 15.1, College Station, TX) were used to test for equivalence between the radiologists’ and the CNN algorithm-generated area estimates. Results Forty de-identified 2D DECT images from SSc patients with clinically evident finger CC lesions were obtained and divided into training (N = 30 with image rotation × 3 to expand the set to N = 120) and test sets (N = 10). In the training set, five hundred epochs (iterations) were required to train the CNN algorithm to segment phalanges from adjacent CC, and accurate segmentation was evaluated using the ten held-out images. 
To test model performance, CC lesional area estimates calculated by two independent radiologists and a computer scientist were compared (radiologist 1 vs. radiologist 2, and radiologist 1 vs. the computer vision approach) using an independent test dataset comprised of 31 images (8 index finger and 23 other fingers). For the radiologist vs. radiologist and radiologist vs. computer vision measurements, Spearman's rho was 0.91 and 0.94, respectively (both p < 0.0001); Lin's concordance correlation coefficient was 0.91 (95% CI 0.85–0.98, p < 0.001) and 0.95 (95% CI 0.91–0.99, p < 0.001); and Bland-Altman plots demonstrated a mean difference between area estimates of −0.5 mm2 (95% limits of agreement −10.0 to 9.0 mm2) and 1.7 mm2 (95% limits of agreement −6.0 to 9.5 mm2), respectively. Conclusions We demonstrate that CNN quantification has a high degree of correlation with expert radiologist measurement of finger CC area. Future work will include segmentation of 3-dimensional (3D) images for volumetric and density quantification, as well as validation in larger, independent cohorts.
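Lin's concordance correlation coefficient, used above to test equivalence between raters, penalizes both poor correlation and systematic shifts in mean or scale. A sketch with toy area measurements (not the study's data):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters.

    CCC = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2),
    so perfect agreement (y == x) gives 1, and any mean shift or
    scale difference pulls the value below the Pearson correlation.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# toy lesional-area measurements (mm^2) from two raters
r1 = [10.0, 12.5, 8.0, 15.0, 11.0]
r2 = [10.5, 12.0, 8.5, 14.5, 11.5]
ccc = lins_ccc(r1, r2)
```

Unlike Spearman's rho, which is invariant to any monotone bias between raters, CCC drops when one rater systematically over- or under-measures, which is why the paper reports both.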
Affiliation(s)
- Anita C Chandrasekaran: Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
- Zhicheng Fu: Department of Computer Science, Illinois Institute of Technology, 10 W 31st St, Chicago, IL, 60616, USA; Motorola Mobility LLC, 222 W Merchandise Mart Plaza #1800, Chicago, IL, 60654, USA
- Reid Kraniski: Department of Radiology, Yale School of Medicine, 330 Cedar St, New Haven, CT, 06520, USA
- F Perry Wilson: Clinical and Translational Research Accelerator, Department of Medicine, Yale School of Medicine, Temple Medical Center, 60 Temple Street Suite 6C, New Haven, CT, 06510, USA
- Shannon Teaw: Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
- Michelle Cheng: Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
- Annie Wang: Department of Radiology, Yale School of Medicine, 330 Cedar St, New Haven, CT, 06520, USA
- Shangping Ren: Department of Computer Science, Illinois Institute of Technology, 10 W 31st St, Chicago, IL, 60616, USA; Department of Computer Science, San Diego State University, 5500 Campanile Drive, San Diego, CA, 92182, USA
- Imran M Omar: Department of Radiology, Northwestern University Feinberg School of Medicine, 676 N St Clair St, Chicago, IL, 60611, USA
- Monique E Hinchcliff: Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA; Clinical and Translational Research Accelerator, Department of Medicine, Yale School of Medicine, Temple Medical Center, 60 Temple Street Suite 6C, New Haven, CT, 06510, USA; Department of Medicine, Division of Rheumatology, Northwestern University Feinberg School of Medicine, 240 E. Huron Street, Suite M-300, Chicago, IL, 60611, USA
|
183
|
|
184
|
Sánchez-Peralta LF, Pagador JB, Sánchez-Margallo FM. Artificial Intelligence for Colorectal Polyps in Colonoscopy. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_308-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
185
Artificial Intelligence in Medicine. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_163-1]
186
Guo Z, Nemoto D, Zhu X, Li Q, Aizawa M, Utano K, Isohata N, Endo S, Kawarai Lefor A, Togashi K. Polyp detection algorithm can detect small polyps: Ex vivo reading test compared with endoscopists. Dig Endosc 2021; 33:162-169. [PMID: 32173917] [DOI: 10.1111/den.13670]
Abstract
BACKGROUND AND STUDY AIMS Small polyps are occasionally missed during colonoscopy. This study was conducted to validate the diagnostic performance of a polyp-detection algorithm that alerts endoscopists to unrecognized lesions. METHODS A computer-aided detection (CADe) algorithm was developed based on convolutional neural networks using training data from 1991 still colonoscopy images from 283 subjects with adenomatous polyps. The CADe algorithm was evaluated on a validation dataset including 50 short videos with 1-2 polyps (3.5 ± 1.5 mm, range 2-8 mm) and 50 videos without polyps. Two expert colonoscopists and two physicians in training separately read the same videos, blinded to the presence of polyps. The CADe algorithm was also evaluated using eight full videos with polyps and seven full videos without a polyp. RESULTS The per-video sensitivity of CADe for polyp detection was 88% and the per-frame false-positive rate was 2.8%, at a confidence level of ≥30%. The per-video sensitivity of both experts was 88%, and the sensitivities of the two physicians in training were 84% and 76%. For each reader, polyps missed on the short videos appeared in significantly fewer frames than detected polyps, but no trends were observed regarding polyp size, morphology, or color. For full video readings, per-polyp sensitivity was 100% with a per-frame false-positive rate of 1.7% and a per-frame specificity of 98.3%. CONCLUSIONS The sensitivity of CADe for detecting small polyps was almost equivalent to that of experts and superior to that of physicians in training. A clinical trial using CADe is warranted.
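The per-video sensitivity and per-frame false-positive rate reported in this abstract can be made concrete with a small sketch. The function names and toy data below are illustrative assumptions, not taken from the study:

```python
# Illustrative sketch of the two headline metrics: per-video sensitivity
# (videos with >=1 true detection / videos containing a polyp) and
# per-frame false-positive rate (alarmed polyp-free frames / all
# polyp-free frames). All data below is invented for demonstration.

def per_video_sensitivity(videos):
    """videos: list of dicts with 'has_polyp' and 'detected' booleans."""
    with_polyp = [v for v in videos if v["has_polyp"]]
    hits = sum(v["detected"] for v in with_polyp)
    return hits / len(with_polyp)

def per_frame_false_positive_rate(frames):
    """frames: list of dicts with 'polyp_present' and 'alarm' booleans."""
    negatives = [f for f in frames if not f["polyp_present"]]
    false_alarms = sum(f["alarm"] for f in negatives)
    return false_alarms / len(negatives)

videos = [{"has_polyp": True, "detected": True},
          {"has_polyp": True, "detected": False},
          {"has_polyp": False, "detected": False}]
frames = [{"polyp_present": False, "alarm": True}] + \
         [{"polyp_present": False, "alarm": False}] * 49

print(per_video_sensitivity(videos))          # 0.5 on this toy data
print(per_frame_false_positive_rate(frames))  # 0.02 on this toy data
```

Note that the two metrics have different denominators (videos vs. frames), which is why a system can have high per-video sensitivity while still producing frequent frame-level false alarms.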
Affiliation(s)
- Zhe Guo
- Biomedical Information Engineering Lab, The University of Aizu, Fukushima, Japan
- Daiki Nemoto
- Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Xin Zhu
- Biomedical Information Engineering Lab, The University of Aizu, Fukushima, Japan
- Qin Li
- Biomedical Information Engineering Lab, The University of Aizu, Fukushima, Japan
- Masato Aizawa
- Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Kenichi Utano
- Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Noriyuki Isohata
- Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Shungo Endo
- Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
- Kazutomo Togashi
- Department of Coloproctology, Aizu Medical Center, Fukushima Medical University, Fukushima, Japan
187
Le A, Salifu MO, McFarlane IM. Artificial Intelligence in Colorectal Polyp Detection and Characterization. Int J Clin Res Trials 2021; 6:157. [PMID: 33884326] [PMCID: PMC8057724] [DOI: 10.15344/2456-8007/2021/157]
Abstract
BACKGROUND Over the past 20 years, the advancement of artificial intelligence (AI) and deep learning (DL) has allowed fast sorting and analysis of large sets of data. In the field of gastroenterology, colorectal screening procedures produce an abundance of data through video and imaging. With AI and DL, this information can be used to create systems in which automatic polyp detection and characterization are possible. Convolutional neural networks (CNNs) have proven to be an effective way to increase polyp detection and ultimately adenoma detection rates. Different methods of characterizing polyps as hyperplastic vs. adenomatous or non-neoplastic vs. neoplastic have also been investigated, with promising results. FINDINGS The rate of missed polyps on colonoscopy can be as high as 25%. At the beginning of the 2000s, hand-crafted machine learning (ML) algorithms were created and trained retrospectively on colonoscopy images and videos, achieving sensitivity, specificity, and accuracy of over 90% in many of the studies. Over time, the advancement of DL and CNNs has allowed algorithms to be trained on non-medical images and applied retrospectively to colonoscopy videos and images with similar results. Within the past few years, these algorithms have been applied in real-time colonoscopies with mixed results, one study showing no difference while others showed increased polyp detection. Various methods of polyp characterization have also been investigated. Through AI, DL, and CNNs, polyps can be identified as hyperplastic/adenomatous or non-neoplastic/neoplastic with high sensitivity, specificity, and accuracy. One of the research areas in polyp characterization is how to capture the polyp image. This paper looks at different modalities for characterizing polyps, such as magnifying narrow-band imaging (NBI), endocytoscopy, laser-induced fluorescence spectroscopy, autofluorescence endoscopy, and white-light endoscopy.
CONCLUSIONS Overall, much progress has been made in automatic detection and characterization of polyps in real time. Barring ethical or mass adoption setbacks, it is inevitable that AI will be involved in the field of GI, especially in colorectal polyp detection and identification.
Affiliation(s)
- Isabel M. McFarlane
- Corresponding Author: Dr. Isabel M. McFarlane, Clinical Assistant Professor of Medicine, Director, Third Year Internal Medicine Clerkship, Department of Internal Medicine, Brooklyn, NY 11203, USA Tel: 718-270-2390, Fax: 718-270-1324;
188
Misawa M, Kudo SE, Mori Y, Maeda Y, Ogawa Y, Ichimasa K, Kudo T, Wakamura K, Hayashi T, Miyachi H, Baba T, Ishida F, Itoh H, Oda M, Mori K. Current status and future perspective on artificial intelligence for lower endoscopy. Dig Endosc 2021; 33:273-284. [PMID: 32969051] [DOI: 10.1111/den.13847]
Abstract
The global incidence and mortality rate of colorectal cancer remain high. Colonoscopy is regarded as the gold standard examination for detecting and eradicating neoplastic lesions. However, there are some uncertainties in colonoscopy practice that are related to limitations in human performance. First, approximately one-fourth of colorectal neoplasms are missed on a single colonoscopy. Second, it is still difficult for non-experts to perform optical biopsy adequately. Third, recording of some quality indicators (e.g., cecal intubation, bowel preparation, and withdrawal speed) that are related to the adenoma detection rate is sometimes incomplete. With recent improvements in machine learning techniques and advances in computer performance, artificial intelligence-assisted computer-aided diagnosis is being increasingly utilized by endoscopists. In particular, the emergence of deep learning, a data-driven machine learning technique, has made the development of computer-aided systems easier than with conventional machine learning, and deep learning is currently considered the standard artificial intelligence engine of computer-aided diagnosis for colonoscopy. To date, computer-aided detection systems seem to have improved the rate of detection of neoplasms. Additionally, computer-aided characterization systems may have the potential to improve diagnostic accuracy in real-time clinical practice. Furthermore, some artificial intelligence-assisted systems that aim to improve the quality of colonoscopy have been reported. The implementation of computer-aided systems in clinical practice may provide additional benefits, such as helping to educate poorly performing endoscopists and supporting real-time clinical decision-making. In this review, we focus on computer-aided diagnosis during colonoscopy as reported by gastroenterologists and discuss its current status, limitations, and future prospects.
Affiliation(s)
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, Oslo, Norway
- Yasuharu Maeda
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yushi Ogawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Katsuro Ichimasa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toyoki Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kunihiko Wakamura
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Takemasa Hayashi
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hideyuki Miyachi
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toshiyuki Baba
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Fumio Ishida
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hayato Itoh
- Graduate School of Informatics, Nagoya University, Aichi, Japan
- Masahiro Oda
- Graduate School of Informatics, Nagoya University, Aichi, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Aichi, Japan
189
Yang X, Wei Q, Zhang C, Zhou K, Kong L, Jiang W. Colon Polyp Detection and Segmentation Based on Improved MRCNN. IEEE Trans Instrum Meas 2021; 70:1-10. [DOI: 10.1109/tim.2020.3038011]
190
Luo Y, Zhang Y, Liu M, Lai Y, Liu P, Wang Z, Xing T, Huang Y, Li Y, Li A, Wang Y, Luo X, Liu S, Han Z. Artificial Intelligence-Assisted Colonoscopy for Detection of Colon Polyps: a Prospective, Randomized Cohort Study. J Gastrointest Surg 2021; 25:2011-2018. [PMID: 32968933] [PMCID: PMC8321985] [DOI: 10.1007/s11605-020-04802-4]
Abstract
BACKGROUND AND AIMS Improving the rate of polyp detection is an important measure to prevent colorectal cancer (CRC). Real-time automatic polyp detection systems, through deep learning methods, can learn and perform specific endoscopic tasks previously performed only by endoscopists. The purpose of this study was to explore whether a high-performance, real-time automatic polyp detection system could improve the polyp detection rate (PDR) in an actual clinical environment. METHODS The selected patients underwent same-day, back-to-back colonoscopies in random order, with either traditional colonoscopy or artificial intelligence (AI)-assisted colonoscopy performed first by different experienced endoscopists (> 3000 colonoscopies each). The primary outcome was the PDR. RESULTS In this study, we randomized 150 patients. The AI system significantly increased the PDR (34.0% vs 38.7%, p < 0.001). In addition, AI-assisted colonoscopy increased the detection of polyps smaller than 6 mm (69 vs 91, p < 0.001), but no difference was found for larger lesions. CONCLUSIONS A real-time automatic polyp detection system can increase the PDR, primarily for diminutive polyps. However, a larger sample size is still needed in a follow-up study to further verify this conclusion. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT047126265.
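The primary outcome, PDR, is simply the proportion of patients in whom at least one polyp is found. A minimal sketch follows; the counts are hypothetical back-calculations from the reported percentages (34.0% and 38.7% of 150 patients), not the trial's raw data:

```python
# PDR = patients with at least one polyp detected / patients examined.
# The counts below are hypothetical reconstructions for illustration only.

def polyp_detection_rate(n_patients_with_polyp, n_patients_total):
    if n_patients_total == 0:
        raise ValueError("no patients examined")
    return n_patients_with_polyp / n_patients_total

# e.g. 51 of 150 patients with >=1 polyp on traditional colonoscopy and
# 58 of 150 with AI assistance would yield the reported percentages:
pdr_standard = polyp_detection_rate(51, 150)
pdr_ai = polyp_detection_rate(58, 150)
print(f"{pdr_standard:.1%} vs {pdr_ai:.1%}")  # 34.0% vs 38.7%
```

Because each patient contributes a yes/no outcome, the two arms can then be compared with a paired test appropriate to the back-to-back design.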
Affiliation(s)
- Yuchen Luo
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Yi Zhang
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Ming Liu
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Yihong Lai
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Panpan Liu
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Zhen Wang
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Tongyin Xing
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Ying Huang
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Yue Li
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Aiming Li
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Yadong Wang
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Xiaobei Luo
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Side Liu
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
- Zelong Han
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515 China
191
Pannala R, Krishnan K, Melson J, Parsi MA, Schulman AR, Sullivan S, Trikudanathan G, Trindade AJ, Watson RR, Maple JT, Lichtenstein DR. Artificial intelligence in gastrointestinal endoscopy. VideoGIE 2020; 5:598-613. [PMID: 33319126] [PMCID: PMC7732722] [DOI: 10.1016/j.vgie.2020.08.013]
Abstract
BACKGROUND AND AIMS Artificial intelligence (AI)-based applications have transformed several industries and are widely used in various consumer products and services. In medicine, AI is primarily being used for image classification and natural language processing and has great potential to affect image-based specialties such as radiology, pathology, and gastroenterology (GE). This document reviews the reported applications of AI in GE, focusing on endoscopic image analysis. METHODS The MEDLINE database was searched through May 2020 for relevant articles by using key words such as machine learning, deep learning, artificial intelligence, computer-aided diagnosis, convolutional neural networks, GI endoscopy, and endoscopic image analysis. References and citations of the retrieved articles were also evaluated to identify pertinent studies. The manuscript was drafted by 2 authors and reviewed in person by members of the American Society for Gastrointestinal Endoscopy Technology Committee and subsequently by the American Society for Gastrointestinal Endoscopy Governing Board. RESULTS Deep learning techniques such as convolutional neural networks have been used in several areas of GI endoscopy, including colorectal polyp detection and classification, analysis of endoscopic images for diagnosis of Helicobacter pylori infection, detection and depth assessment of early gastric cancer, dysplasia in Barrett's esophagus, and detection of various abnormalities in wireless capsule endoscopy images. CONCLUSIONS The implementation of AI technologies across multiple GI endoscopic applications has the potential to transform clinical practice favorably and improve the efficiency and accuracy of current diagnostic methods.
Key Words
- ADR, adenoma detection rate
- AI, artificial intelligence
- AMR, adenoma miss rate
- ANN, artificial neural network
- BE, Barrett’s esophagus
- CAD, computer-aided diagnosis
- CADe, CAD studies for colon polyp detection
- CADx, CAD studies for colon polyp classification
- CI, confidence interval
- CNN, convolutional neural network
- CRC, colorectal cancer
- DL, deep learning
- GI, gastroenterology
- HD-WLE, high-definition white light endoscopy
- HDWL, high-definition white light
- ML, machine learning
- NBI, narrow-band imaging
- NPV, negative predictive value
- PIVI, Preservation and Incorporation of Valuable Endoscopic Innovations
- SVM, support vector machine
- VLE, volumetric laser endomicroscopy
- WCE, wireless capsule endoscopy
- WL, white light
Affiliation(s)
- Rahul Pannala
- Department of Gastroenterology and Hepatology, Mayo Clinic, Scottsdale, Arizona
- Kumar Krishnan
- Division of Gastroenterology, Department of Internal Medicine, Harvard Medical School and Massachusetts General Hospital, Boston, Massachusetts
- Joshua Melson
- Division of Digestive Diseases, Department of Internal Medicine, Rush University Medical Center, Chicago, Illinois
- Mansour A Parsi
- Section for Gastroenterology and Hepatology, Tulane University Health Sciences Center, New Orleans, Louisiana
- Allison R Schulman
- Department of Gastroenterology, Michigan Medicine, University of Michigan, Ann Arbor, Michigan
- Shelby Sullivan
- Division of Gastroenterology and Hepatology, University of Colorado School of Medicine, Aurora, Colorado
- Guru Trikudanathan
- Department of Gastroenterology, Hepatology and Nutrition, University of Minnesota, Minneapolis, Minnesota
- Arvind J Trindade
- Department of Gastroenterology, Zucker School of Medicine at Hofstra/Northwell, Long Island Jewish Medical Center, New Hyde Park, New York
- Rabindra R Watson
- Department of Gastroenterology, Interventional Endoscopy Services, California Pacific Medical Center, San Francisco, California
- John T Maple
- Division of Digestive Diseases and Nutrition, University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma
- David R Lichtenstein
- Division of Gastroenterology, Boston Medical Center, Boston University School of Medicine, Boston, Massachusetts
192
Wittenberg T, Raithel M. Artificial Intelligence-Based Polyp Detection in Colonoscopy: Where Have We Been, Where Do We Stand, and Where Are We Headed? Visc Med 2020; 36:428-438. [PMID: 33447598] [PMCID: PMC7768101] [DOI: 10.1159/000512438]
Abstract
BACKGROUND In the past, image-based computer-assisted diagnosis and detection systems have been driven mainly by the field of radiology, and more specifically mammography. Nevertheless, with the availability of large image data collections (known as the "Big Data" phenomenon) together with developments in artificial intelligence (AI), particularly so-called deep convolutional neural networks, computer-assisted detection of adenomas and polyps in real time during screening colonoscopy has become feasible. SUMMARY Against this background, the scope of this contribution is to provide a brief overview of the evolution of AI-based detection of adenomas and polyps during colonoscopy over the past 35 years, starting with the age of "handcrafted geometrical features" together with simple classification schemes, continuing through the development and use of "texture-based features" and machine learning approaches, and ending with current developments in the field of deep learning using convolutional neural networks. In parallel, the need for large-scale clinical data to develop such methods is discussed, up to commercially available AI products for the automated detection of polyps (adenomas and benign neoplastic lesions). Finally, a brief look into the future is taken regarding further possibilities of AI methods within colonoscopy. KEY MESSAGES Research on image-based lesion detection in colonoscopy data has a 35-year history. Milestones such as the Paris nomenclature, texture features, big data, and deep learning were essential for the development and availability of commercial AI-based systems for polyp detection.
193
de Almeida Thomaz V, Sierra-Franco CA, Raposo AB. Training data enhancements for improving colonic polyp detection using deep convolutional neural networks. Artif Intell Med 2020; 111:101988. [PMID: 33461694] [DOI: 10.1016/j.artmed.2020.101988]
Abstract
BACKGROUND Over the last years, the most relevant results in the context of polyp detection have been achieved through deep learning techniques. However, the most common obstacles in this field are small datasets with a reduced number of samples and a lack of data variability. This paper describes a method to mitigate this limitation and improve polyp detection results using publicly available colonoscopic datasets. METHODS To address this issue, we increased the number and variety of images from the original dataset. Our method consists of adding polyps to the dataset images. The developed algorithm performs a rigorous selection of the best region within the image to receive the polyp. This procedure preserves the realistic features of the images while creating more diverse samples for training purposes. Our method allows copying existing polyps to new non-polypoid target regions. We also developed a strategy to generate new and more varied polyps through generative adversarial networks. Hence, the developed approach enriches the training data, automatically creating new samples with their appropriate labels. RESULTS We applied the proposed data enhancement to a colonic polyp dataset, allowing us to assess the effectiveness of our approach with a Faster R-CNN detection model. Performance results show improvements in polyp detection while reducing the false-negative rate. The experimental results also show better recall metrics in comparison with both the original training set and other studies in the literature. CONCLUSION We demonstrate that our proposed method has the potential to increase the data variability and number of samples in a reduced polyp dataset, improving the polyp detection rate and recall values. These results open new possibilities for advancing the study and implementation of new methods to improve computer-assisted medical image analysis.
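The copy-paste augmentation idea described here can be sketched in a few lines of numpy. The study's target-region selection is far more rigorous than the low-variance heuristic below; every function name, parameter, and the toy data are assumptions for illustration only:

```python
import numpy as np

# Toy illustration of copy-paste augmentation: alpha-blend a polyp patch
# into a flat (low-variance) region of a target frame. This only shows the
# general principle, not the paper's actual region-selection procedure.

def flattest_region(image, patch_h, patch_w):
    """Return the top-left corner of the lowest-variance window (brute force)."""
    best, best_var = (0, 0), float("inf")
    h, w = image.shape[:2]
    for y in range(0, h - patch_h + 1, 8):        # coarse stride for speed
        for x in range(0, w - patch_w + 1, 8):
            var = image[y:y + patch_h, x:x + patch_w].var()
            if var < best_var:
                best, best_var = (y, x), var
    return best

def paste_polyp(image, patch, mask):
    """Blend `patch` into the flattest region of `image`.
    mask: float array in [0, 1] with the same height/width as patch."""
    ph, pw = patch.shape[:2]
    y, x = flattest_region(image, ph, pw)
    out = image.astype(float).copy()
    region = out[y:y + ph, x:x + pw]
    out[y:y + ph, x:x + pw] = mask * patch + (1 - mask) * region
    return out.astype(image.dtype), (y, x)

rng = np.random.default_rng(0)
frame = rng.integers(0, 255, (64, 64), dtype=np.uint8)
frame[40:56, 40:56] = 120                  # a flat area the search should find
polyp = np.full((16, 16), 200, dtype=np.uint8)
mask = np.ones((16, 16))
augmented, corner = paste_polyp(frame, polyp, mask)
print(corner)                              # lands in the flat area: (40, 40)
```

A soft-edged mask (values ramping from 0 at the border to 1 in the center) would blend the patch more realistically than the hard mask used here; the new sample inherits a polyp label at the chosen location automatically.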
Affiliation(s)
- Victor de Almeida Thomaz
- Pontifical Catholic University of Rio de Janeiro, Rua Marquês de São Vicente, 225, Gávea, Rio de Janeiro, Brazil
- Cesar A Sierra-Franco
- Tecgraf Institute of Technical-Scientific Software Development, Rua Marquês de São Vicente, 225, Gávea, Rio de Janeiro, Brazil
- Alberto B Raposo
- Tecgraf Institute of Technical-Scientific Software Development, Rua Marquês de São Vicente, 225, Gávea, Rio de Janeiro, Brazil
194
Attardo S, Chandrasekar VT, Spadaccini M, Maselli R, Patel HK, Desai M, Capogreco A, Badalamenti M, Galtieri PA, Pellegatta G, Fugazza A, Carrara S, Anderloni A, Occhipinti P, Hassan C, Sharma P, Repici A. Artificial intelligence technologies for the detection of colorectal lesions: The future is now. World J Gastroenterol 2020; 26:5606-5616. [PMID: 33088155] [PMCID: PMC7545398] [DOI: 10.3748/wjg.v26.i37.5606]
Abstract
Several studies have shown a significant adenoma miss rate of up to 35% during screening colonoscopy, especially in patients with diminutive adenomas. The use of artificial intelligence (AI) in colonoscopy has been gaining popularity by helping endoscopists with polyp detection, with the aim of increasing their adenoma detection rate (ADR) and polyp detection rate (PDR) in order to reduce the incidence of interval cancers. Deep convolutional neural network (DCNN)-based AI systems for polyp detection have been trained and tested in ex vivo settings such as colonoscopy still images or videos. Recent trials have evaluated the real-time efficacy of DCNN-based systems, showing promising results in terms of improved ADR and PDR. In this review we report data from the preliminary ex vivo experiences and summarize the results of the initial randomized controlled trials.
Affiliation(s)
- Simona Attardo
- Department of Endoscopy and Digestive Disease, AOU Maggiore della Carità, Novara 28100, Italy
- Marco Spadaccini
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
- Roberta Maselli
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Harsh K Patel
- Department of Internal Medicine, Ochsner Clinic Foundation, New Orleans, LA 70124, United States
- Madhav Desai
- Department of Gastroenterology and Hepatology, Kansas City VA Medical Center, Kansas City, MO 66045, United States
- Antonio Capogreco
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
- Matteo Badalamenti
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Gaia Pellegatta
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Alessandro Fugazza
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Silvia Carrara
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Andrea Anderloni
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Pietro Occhipinti
- Department of Endoscopy and Digestive Disease, AOU Maggiore della Carità, Novara 28100, Italy
- Cesare Hassan
- Endoscopy Unit, Nuovo Regina Margherita Hospital, Roma 00153, Italy
- Prateek Sharma
- Department of Gastroenterology and Hepatology, Kansas City VA Medical Center, Kansas City, MO 66045, United States
- Alessandro Repici
- Department of Endoscopy, Humanitas Research Hospital, Rozzano 20089, Italy
- Department of Biomedical Sciences, Humanitas University, Rozzano 20089, Italy
195
Wang KW, Dong M. Potential applications of artificial intelligence in colorectal polyps and cancer: Recent advances and prospects. World J Gastroenterol 2020; 26:5090-5100. [PMID: 32982111] [PMCID: PMC7495038] [DOI: 10.3748/wjg.v26.i34.5090]
Abstract
Since its advent, artificial intelligence (AI) technology has been studied constantly and has developed rapidly. AI assistant systems are expected to improve the quality of automatic polyp detection and classification. They could also help prevent endoscopists from missing polyps and support an accurate optical diagnosis. These functions could result in a higher adenoma detection rate and decrease the cost of polypectomy for hyperplastic polyps. In addition, AI performs well in the staging, diagnosis, and segmentation of colorectal cancer. This article provides an overview of recent research focusing on the application of AI to colorectal polyps and cancer and highlights the advances achieved.
Affiliation(s)
- Ke-Wei Wang
- Department of Gastrointestinal Surgery, the First Affiliated Hospital of China Medical University, Shenyang 110001, Liaoning Province, China
- Ming Dong
- Department of Gastrointestinal Surgery, the First Affiliated Hospital of China Medical University, Shenyang 110001, Liaoning Province, China
196

197
Borgli H, Thambawita V, Smedsrud PH, Hicks S, Jha D, Eskeland SL, Randel KR, Pogorelov K, Lux M, Nguyen DTD, Johansen D, Griwodz C, Stensland HK, Garcia-Ceja E, Schmidt PT, Hammer HL, Riegler MA, Halvorsen P, de Lange T. HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy. Sci Data 2020; 7:283. [PMID: 32859981] [PMCID: PMC7455694] [DOI: 10.1038/s41597-020-00622-y]
Abstract
Artificial intelligence is currently a hot topic in medicine. However, medical data are often sparse and hard to obtain owing to legal restrictions and the lack of medical personnel for the cumbersome and tedious process of manually labeling training data. These constraints make it difficult to develop systems for automatic analysis, such as detecting disease or other lesions. In this respect, this article presents HyperKvasir, the largest image and video dataset of the gastrointestinal tract available today. The data were collected during real gastro- and colonoscopy examinations at Bærum Hospital in Norway and partly labeled by experienced gastrointestinal endoscopists. The dataset contains 110,079 images and 374 videos and represents anatomical landmarks as well as pathological and normal findings. The total number of images and video frames together is around 1 million. Initial experiments demonstrate the potential benefits of artificial intelligence-based computer-assisted diagnosis systems. The HyperKvasir dataset can play a valuable role in developing better algorithms and computer-assisted examination systems, not only for gastro- and colonoscopy but also for other fields in medicine.
Affiliation(s)
- Hanna Borgli
- SimulaMet, Oslo, Norway
- University of Oslo, Oslo, Norway
- Pia H Smedsrud
- SimulaMet, Oslo, Norway
- University of Oslo, Oslo, Norway
- Augere Medical AS, Oslo, Norway
- Steven Hicks
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
- Debesh Jha
- SimulaMet, Oslo, Norway
- UIT The Arctic University of Norway, Tromsø, Norway
- Dag Johansen
- UIT The Arctic University of Norway, Tromsø, Norway
- Håkon K Stensland
- University of Oslo, Oslo, Norway
- Simula Research Laboratory, Oslo, Norway
- Peter T Schmidt
- Department of Medicine (Solna), Karolinska Institutet, Stockholm, Sweden
- Department of Medicine, Ersta hospital, Stockholm, Sweden
- Hugo L Hammer
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
- Pål Halvorsen
- SimulaMet, Oslo, Norway
- Oslo Metropolitan University, Oslo, Norway
- Thomas de Lange
- Department of Medical Research, Bærum Hospital, Bærum, Norway
- Augere Medical AS, Oslo, Norway
- Medical Department, Sahlgrenska University Hospital-Mölndal, Mölndal, Sweden
|
198
|
Meng J, Xue L, Chang Y, Zhang J, Chang S, Liu K, Liu S, Wang B, Yang K. Automatic detection and segmentation of adenomatous colorectal polyps during colonoscopy using Mask R-CNN. Open Life Sci 2020; 15:588-596. [PMID: 33817247 PMCID: PMC7968546 DOI: 10.1515/biol-2020-0055] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 05/27/2020] [Accepted: 06/03/2020] [Indexed: 12/27/2022] Open
Abstract
Colorectal cancer (CRC) is one of the most common malignancies of the alimentary tract worldwide. Adenomatous polyps are precursors of CRC, so preventing the development of these lesions may also prevent subsequent malignancy. However, the adenoma detection rate (ADR), a measure of a colonoscopist's ability to identify and remove precancerous colorectal polyps, varies significantly among endoscopists. Here, we use a convolutional neural network (CNN) to build a computer-aided diagnosis (CAD) system by exploring in detail the multi-scale performance of deep neural networks. We applied this system to 3,375 hand-labeled images from the screening colonoscopies of 1,197 patients; 3,045 images were assigned to the training dataset and 330 to the testing dataset. Each image was labeled simply as either an adenomatous or a non-adenomatous polyp. On the testing dataset, our CNN-CAD system achieved a mean average precision of 89.5%. We conclude that the proposed framework could increase the ADR and decrease the incidence of interval CRCs, although further validation through large multicenter trials is required.
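The mean average precision reported here averages, per class, the precision obtained at each true-positive rank when predictions are sorted by confidence. A generic sketch of that per-class average precision (not the paper's evaluation code; the scores and labels are toy values):

```python
def average_precision(scores, labels):
    """AP: mean of precision values at each true-positive rank,
    taken over predictions sorted by descending confidence score."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    hits, precisions = 0, []
    for rank, (_, is_positive) in enumerate(ranked, start=1):
        if is_positive:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Toy ranked predictions: confidence score, 1 = adenomatous polyp.
scores = [0.95, 0.80, 0.70, 0.40, 0.20]
labels = [1,    0,    1,    1,    0]
print(round(average_precision(scores, labels), 3))  # 0.806
```

Averaging this quantity over classes (here, adenomatous vs. non-adenomatous) gives the mean average precision figure.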
Affiliation(s)
- Jie Meng
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital, Tianjin 300052, China
- Department of Gastroenterology, Affiliated Hospital of Hebei University, Baoding 071000, China
- Linyan Xue
- College of Quality and Technical Supervision, Hebei University, Baoding 071002, China
- Ying Chang
- Department of Gastroenterology, Affiliated Hospital of Hebei University, Baoding 071000, China
- Jianguang Zhang
- Department of Gastroenterology, Affiliated Hospital of Hebei University, Baoding 071000, China
- Shilong Chang
- College of Quality and Technical Supervision, Hebei University, Baoding 071002, China
- Kun Liu
- College of Quality and Technical Supervision, Hebei University, Baoding 071002, China
- Shuang Liu
- College of Quality and Technical Supervision, Hebei University, Baoding 071002, China
- Bangmao Wang
- Department of Gastroenterology and Hepatology, Tianjin Medical University General Hospital, Tianjin 300052, China
- Kun Yang
- College of Quality and Technical Supervision, Hebei University, Baoding 071002, China
|
199
|
Sánchez-Peralta LF, Bote-Curiel L, Picón A, Sánchez-Margallo FM, Pagador JB. Deep learning to find colorectal polyps in colonoscopy: A systematic literature review. Artif Intell Med 2020; 108:101923. [PMID: 32972656 DOI: 10.1016/j.artmed.2020.101923] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Revised: 03/03/2020] [Accepted: 07/01/2020] [Indexed: 02/07/2023]
Abstract
Colorectal cancer has a high incidence worldwide, but early detection significantly increases the survival rate. Colonoscopy is the gold-standard procedure for diagnosing and removing colorectal lesions with the potential to evolve into cancer, and computer-aided detection systems can help gastroenterologists increase the adenoma detection rate, one of the main indicators of colonoscopy quality and a predictor of colorectal cancer prevention. The recent success of deep learning approaches in computer vision has also reached this field and has boosted the number of proposed methods for polyp detection, localization and segmentation. Through a systematic search, 35 works were retrieved. This systematic review provides an analysis of these methods, stating advantages and disadvantages of the different categories used; comments on seven publicly available datasets of colonoscopy images; analyses the metrics used for reporting; and identifies future challenges and recommendations. Convolutional neural networks are the most used architecture, together with a strong presence of data augmentation strategies, mainly based on image transformations and the use of patches. End-to-end methods are preferred over hybrid methods, with a rising tendency. For detection and localization tasks, the most reported metric is recall, while Intersection over Union is widely used for segmentation. One of the major concerns is the difficulty of fair comparison and reproducibility of methods. Despite the organization of challenges, there is still a need for a common validation framework based on a large, annotated and publicly available database, which also defines the most appropriate metrics for reporting results. Finally, future efforts should focus on proving the clinical value of deep learning-based methods by increasing the adenoma detection rate.
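The two metrics the review highlights, recall for detection/localization and Intersection over Union for segmentation, can be sketched in a few lines; the boxes and the 0.5 IoU matching threshold below are illustrative toy values, not figures from any reviewed work:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

# A ground-truth polyp box vs. a detector's predicted box (toy coordinates).
gt, pred = (10, 10, 50, 50), (30, 30, 70, 70)
print(round(iou(gt, pred), 3))  # 0.143

# Recall: fraction of ground-truth boxes matched by some prediction at IoU >= 0.5.
gts = [(10, 10, 50, 50), (100, 100, 140, 140)]
preds = [(12, 12, 48, 48), (300, 300, 340, 340)]
recall = sum(any(iou(g, p) >= 0.5 for p in preds) for g in gts) / len(gts)
print(recall)  # 0.5
```

The review's reproducibility concern follows directly from such definitions: reported recall depends on the IoU threshold chosen for a "match", so results computed under different thresholds are not directly comparable.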
Affiliation(s)
- Luis Bote-Curiel
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain
- Artzai Picón
- Tecnalia, Parque Científico y Tecnológico de Bizkaia, C/ Astondo bidea, Edificio 700, 48160 Derio, Spain
- J Blas Pagador
- Jesús Usón Minimally Invasive Surgery Centre, Ctra. N-521, km 41.8, 10071 Cáceres, Spain
|
200
|
Fast colonic polyp detection using a Hamilton–Jacobi approach to non-dominated sorting. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102035] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|