51
Ramzan M, Raza M, Sharif MI, Kadry S. Gastrointestinal Tract Polyp Anomaly Segmentation on Colonoscopy Images Using Graft-U-Net. J Pers Med 2022; 12:1459. [PMID: 36143244] [PMCID: PMC9503374] [DOI: 10.3390/jpm12091459]
Abstract
Computer-aided polyp segmentation is a crucial task that supports gastroenterologists in examining and resecting anomalous tissue in the gastrointestinal tract. Polyps grow mainly in the colorectal region of the gastrointestinal tract, as protrusions of abnormal tissue in the mucous membrane; some, such as adenomas, can progress to cancer, so early examination of polyps decreases that risk. Deep learning-based diagnostic systems play a vital role in diagnosing diseases at an early stage. A deep learning method, Graft-U-Net, is proposed to segment polyps in colonoscopy frames. Graft-U-Net is a modified version of UNet comprising three stages: preprocessing, encoder, and decoder. The preprocessing stage improves the contrast of the colonoscopy frames; the encoder analyzes features, while the decoder synthesizes them. The experiments were conducted on two open-access datasets, Kvasir-SEG and CVC-ClinicDB, both prepared from colonoscopy procedures of the large bowel of the gastrointestinal tract. The proposed model achieved a mean Dice of 96.61% and a mean Intersection over Union (mIoU) of 82.45% on the Kvasir-SEG dataset, and a mean Dice of 89.95% and an mIoU of 81.38% on the CVC-ClinicDB dataset, outperforming existing deep learning segmentation models.
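The Dice and mIoU figures quoted above are standard overlap metrics for segmentation. As a minimal NumPy sketch (illustrative only, not the authors' code), they can be computed from binary masks as follows:

```python
# Minimal sketch of the two reported overlap metrics for binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A and B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A and B| / |A or B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))

# Example: a perfect prediction gives Dice = IoU = 1.0.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 16:48] = 1
print(dice_coefficient(mask, mask), iou(mask, mask))
```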
Affiliation(s)
- Muhammad Ramzan: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Mudassar Raza (corresponding author): Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Muhammad Imran Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Seifedine Kadry: Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway; Department of Electrical and Computer Engineering, Lebanese American University, Byblos 999095, Lebanon
52
Jiang Y, Chen J, Gong C, Wang TD, Seibel EJ. Deep-Learning-Based Real-Time and Automatic Target-to-Background Ratio Calculation in Fluorescence Endoscopy for Cancer Detection and Localization. Diagnostics (Basel) 2022; 12:2031. [PMID: 36140433] [PMCID: PMC9497969] [DOI: 10.3390/diagnostics12092031]
Abstract
Esophageal adenocarcinoma (EAC) is a deadly cancer that is rising rapidly in incidence. Early detection of EAC with curative intervention greatly improves patient prognoses. A scanning fiber endoscope (SFE) using fluorescence-labeled peptides that bind rapidly to epidermal growth factor receptors showed promising performance for early EAC detection. Target-to-background (T/B) ratios were calculated to quantify the fluorescence images for neoplastic lesion classification. This T/B calculation is generally based on lesion segmentation with the Chan–Vese algorithm, which may require hyperparameter adjustment when segmenting frames with different brightness and contrast, impeding automation for real-time video. Deep learning models are more robust to such changes, but accurate pixel-level segmentation ground truth is challenging to establish in the medical field. Since the ground truth in our dataset contained only frame-level diagnoses, we propose a computer-aided diagnosis (CAD) system to calculate the T/B ratio in real time. A two-step process using convolutional neural networks (CNNs) was developed to achieve automatic suspicious-frame selection and lesion segmentation for T/B calculation. For the segmentation model training in Step 2, lesion labels were generated with a manually tuned Chan–Vese algorithm using the labeled and predicted suspicious frames from Step 1. In Step 1, we designed and trained deep CNNs to select suspicious frames using a diverse and representative set of 3427 SFE images collected from 25 patient videos from two clinical trials. We tested the models on 1039 images from 10 different SFE patient videos and achieved a sensitivity of 96.4%, a specificity of 96.6%, a precision of 95.5%, and an area under the receiver operating characteristic curve of 0.989. In Step 2, 1006 frames containing suspicious lesions were used to train the fluorescence target segmentation. The segmentation models were tested on two clinical datasets with 100 SFE frames each and achieved mean intersection-over-union values of 0.89 and 0.88, respectively. The T/B ratio calculations based on our segmentation results were similar to those of the manually tuned Chan–Vese algorithm (1.71 ± 0.22 vs. 1.72 ± 0.28, p = 0.872). With a graphics processing unit (GPU), the proposed two-step CAD system achieved 50 fps for frame selection and 15 fps for segmentation and T/B calculation, showing that the frame rejection in Step 1 improved diagnostic efficiency. This CAD system, with the T/B ratio as a real-time indicator, is designed to guide biopsies and surgeries and to serve as a reliable second observer that localizes and outlines suspicious lesions highlighted by fluorescence probes topically applied in organs where cancer originates in the epithelia.
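A T/B ratio of the kind described can be sketched as the mean fluorescence intensity inside the segmented lesion divided by the mean intensity outside it. This is a simplifying assumption for illustration; the paper's exact background definition may differ:

```python
# Hedged sketch of a target-to-background (T/B) ratio from a fluorescence
# frame and a binary lesion mask.
import numpy as np

def target_to_background_ratio(frame: np.ndarray, lesion_mask: np.ndarray) -> float:
    target = frame[lesion_mask.astype(bool)]       # pixels inside the lesion
    background = frame[~lesion_mask.astype(bool)]  # everything else
    return float(target.mean() / background.mean())

rng = np.random.default_rng(0)
frame = rng.uniform(10, 20, size=(128, 128))
mask = np.zeros((128, 128), dtype=bool)
mask[40:80, 40:80] = True
frame[mask] *= 1.7  # simulate a fluorescent lesion ~1.7x brighter
print(round(target_to_background_ratio(frame, mask), 2))
```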
Affiliation(s)
- Yang Jiang: Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
- Jing Chen: Division of Gastroenterology, Department of Internal Medicine, University of Michigan, Ann Arbor, MI 48109, USA
- Chen Gong: Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
- Thomas D. Wang: Division of Gastroenterology, Department of Internal Medicine, University of Michigan, Ann Arbor, MI 48109, USA; Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA; Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Eric J. Seibel (corresponding author): Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
53
Shaukat A, Tuskey A, Rao VL, Dominitz JA, Murad MH, Keswani RN, Bazerbachi F, Day LW. Interventions to improve adenoma detection rates for colonoscopy. Gastrointest Endosc 2022; 96:171-183. [PMID: 35680469] [DOI: 10.1016/j.gie.2022.03.026]
Affiliation(s)
- Aasma Shaukat: Division of Gastroenterology, Department of Medicine, NYU Grossman School of Medicine, New York, New York, USA
- Anne Tuskey: Division of Gastroenterology, Department of Medicine, University of Virginia, Arlington, Virginia, USA
- Vijaya L Rao: Section of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, The University of Chicago, Chicago, Illinois, USA
- Jason A Dominitz: Division of Gastroenterology, Department of Medicine, Puget Sound Veterans Affairs Medical Center and University of Washington, Seattle, Washington, USA
- M Hassan Murad: Division of Public Health, Infectious Diseases and Occupational Medicine, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Rajesh N Keswani: Division of Gastroenterology, Department of Medicine, Northwestern University, Chicago, Illinois, USA
- Fateh Bazerbachi: Division of Gastroenterology, CentraCare, Interventional Endoscopy Program, St Cloud, Minnesota, USA
- Lukejohn W Day: Division of Gastroenterology, Department of Medicine, Zuckerberg San Francisco General Hospital and University of San Francisco, San Francisco, California, USA
54
Tavanapong W, Oh J, Riegler MA, Khaleel M, Mittal B, de Groen PC. Artificial Intelligence for Colonoscopy: Past, Present, and Future. IEEE J Biomed Health Inform 2022; 26:3950-3965. [PMID: 35316197] [PMCID: PMC9478992] [DOI: 10.1109/jbhi.2022.3160098]
Abstract
During the past decades, many automated image analysis methods have been developed for colonoscopy. Real-time implementation of the most promising methods during colonoscopy has been tested in clinical trials, including several recent multi-center studies. All trials have shown results that may contribute to prevention of colorectal cancer. We summarize the past and present development of colonoscopy video analysis methods, focusing on two categories of artificial intelligence (AI) technologies used in clinical trials. These are (1) analysis and feedback for improving colonoscopy quality and (2) detection of abnormalities. Our survey includes methods that use traditional machine learning algorithms on carefully designed hand-crafted features as well as recent deep-learning methods. Lastly, we present the gap between current state-of-the-art technology and desirable clinical features and conclude with future directions of endoscopic AI technology development that will bridge the current gap.
55
Zhu Y, Hu P, Li X, Tian Y, Bai X, Liang T, Li J. Multiscale unsupervised domain adaptation for automatic pancreas segmentation in CT volumes using adversarial learning. Med Phys 2022; 49:5799-5818. [PMID: 35833617] [DOI: 10.1002/mp.15827]
Abstract
PURPOSE Computer-aided automatic pancreas segmentation is essential for the early diagnosis and treatment of pancreatic diseases, but annotating pancreas images requires professional doctors and considerable expenditure. Because of imaging differences across institutions' patient populations, scanning devices, and imaging protocols, model performance tends to degrade significantly when models trained on domain-specific (usually institution-specific) datasets are applied directly to data from a new domain (other centers or institutions). In this paper, we propose a novel unsupervised domain adaptation method based on adversarial learning to address pancreas segmentation challenges arising from the lack of annotations and from domain-shift interference. METHODS A 3D semantic segmentation model with attention and residual modules is designed as the backbone pancreas segmentation model. In both the segmentation model and the domain adaptation discriminator network, a multiscale progressively weighted structure is introduced to acquire different fields of view. Features of labeled data and unlabeled data are fed in pairs into the proposed multiscale discriminator to learn domain-specific characteristics. The unlabeled data features, with pseudo-domain labels, are then fed to the discriminator to acquire domain-ambiguous information. With this adversarial learning strategy, the segmentation network's ability to segment unseen unlabeled data is enhanced. RESULTS Experiments were conducted with two public annotated datasets as source datasets and one private dataset as the target dataset, whose annotations were used only for evaluation, not for training. The 3D segmentation model achieves performance comparable to state-of-the-art pancreas segmentation methods on the source domain. With our domain adaptation architecture, the average Dice similarity coefficient (DSC) of the segmentation model trained on the NIH-TCIA source dataset increases from 58.79% to 72.73% on the local hospital dataset, while the performance of the target-domain segmentation model transferred from the MSD source dataset rises from 62.34% to 71.17%. CONCLUSIONS Correlations of features across data domains are utilized to train the pancreas segmentation model on an unlabeled data domain, improving the generalization of the model. Our results demonstrate that the proposed method enables the segmentation model to produce meaningful segmentations for data unseen during training. The proposed method has the potential to let segmentation models trained on public datasets be applied to unannotated clinical CT images from local hospitals, effectively assisting radiologists in clinical practice.
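The adversarial strategy described here follows a common feature-alignment pattern: a discriminator learns to tell source-domain features from target-domain features, while the segmenter's encoder is trained to fool it. A generic PyTorch sketch under assumed toy modules and shapes (not the authors' multiscale architecture) is:

```python
# Generic adversarial feature-alignment step; modules are placeholders.
import torch
import torch.nn as nn

feat_dim = 64
segmenter_encoder = nn.Conv3d(1, feat_dim, kernel_size=3, padding=1)
discriminator = nn.Conv3d(feat_dim, 1, kernel_size=3, padding=1)  # logit: 1 = "source"
bce = nn.BCEWithLogitsLoss()

source = torch.randn(2, 1, 16, 32, 32)  # labeled source CT patches
target = torch.randn(2, 1, 16, 32, 32)  # unlabeled target CT patches

# Discriminator step: learn to separate source features from target features.
d_loss = bce(discriminator(segmenter_encoder(source).detach()),
             torch.ones(2, 1, 16, 32, 32)) + \
         bce(discriminator(segmenter_encoder(target).detach()),
             torch.zeros(2, 1, 16, 32, 32))

# Segmenter step: make target features look like source features
# (the pseudo-domain-label trick), i.e., fool the discriminator.
adv_loss = bce(discriminator(segmenter_encoder(target)),
               torch.ones(2, 1, 16, 32, 32))
print(d_loss.item(), adv_loss.item())
```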
Affiliation(s)
- Yan Zhu: Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, China
- Peijun Hu: Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, China; Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, 311100, China
- Xiang Li: Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, Hangzhou, 310006, China
- Yu Tian: Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, China
- Xueli Bai: Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, Hangzhou, 310006, China
- Tingbo Liang: Department of Hepatobiliary and Pancreatic Surgery, the First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China; Zhejiang Provincial Key Laboratory of Pancreatic Disease, Hangzhou, 310006, China
- Jingsong Li: Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, 310027, China; Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, 311100, China
56
Adjei PE, Lonseko ZM, Du W, Zhang H, Rao N. Examining the effect of synthetic data augmentation in polyp detection and segmentation. Int J Comput Assist Radiol Surg 2022; 17:1289-1302. [PMID: 35678960] [DOI: 10.1007/s11548-022-02651-x]
Abstract
PURPOSE As with several medical image analysis tasks based on deep learning, gastrointestinal image analysis is plagued by data scarcity, privacy concerns, and an insufficient number of pathology samples. This study examines the generation and utility of synthetic colonoscopy images with polyps for data augmentation. METHODS We modify and train a pix2pix model to generate synthetic colonoscopy samples with polyps to augment the original dataset. We then create a variety of datasets by varying the quantity of synthetic samples and traditional augmentation samples, and use them to train a U-Net for polyp segmentation and a Faster R-CNN model for polyp detection. We compare the performance of the models trained on the resulting datasets in terms of F1 score, intersection over union, precision, and recall, and we further compare their performance on unseen polyp datasets to assess generalization. RESULTS The average F1 coefficient and intersection over union of U-Net improve with an increasing number of synthetic samples across all test datasets. The Faster R-CNN model's polyp detection also improves, while its false-negative rate decreases. Furthermore, the polyp detection results outperform similar studies in the literature on the ETIS-LaribPolypDB dataset. CONCLUSION By varying the quantities of synthetic and traditional augmentation, the sensitivity of deep learning models for polyp segmentation and detection can be controlled. GAN-based augmentation is thus a viable option for improving the performance of polyp segmentation and detection models.
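The study's experimental variable, the quantity of synthetic samples mixed into training, can be sketched as a simple dataset-construction helper. File names and the ratio values below are illustrative assumptions, not the paper's protocol:

```python
# Build training sets with a controlled fraction of synthetic samples.
import random

def build_training_set(real_samples, synthetic_samples, synth_ratio, seed=42):
    """Return the real samples plus a controlled quantity of synthetic ones."""
    n_synth = int(len(real_samples) * synth_ratio)
    rng = random.Random(seed)
    extra = rng.sample(synthetic_samples, min(n_synth, len(synthetic_samples)))
    return real_samples + extra

real = [f"real_{i}.png" for i in range(1000)]
synth = [f"pix2pix_{i}.png" for i in range(2000)]
for ratio in (0.0, 0.5, 1.0):  # 0%, 50%, 100% additional synthetic data
    print(ratio, len(build_training_set(real, synth, ratio)))
```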
Affiliation(s)
- Prince Ebenezer Adjei: Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China; Department of Computer Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Zenebe Markos Lonseko: Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Wenju Du: Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Han Zhang: Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Nini Rao: Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
57
Rao BH, Trieu JA, Nair P, Gressel G, Venu M, Venu RP. Artificial intelligence in endoscopy: More than what meets the eye in screening colonoscopy and endosonographic evaluation of pancreatic lesions. Artif Intell Gastrointest Endosc 2022; 3:16-30. [DOI: 10.37126/aige.v3.i3.16]
58
Carteri RB, Grellert M, Borba DL, Marroni CA, Fernandes SA. Machine learning approaches using blood biomarkers in non-alcoholic fatty liver diseases. Artif Intell Gastroenterol 2022; 3:80-87. [DOI: 10.35712/aig.v3.i3.80]
59
FECC-Net: A Novel Feature Enhancement and Context Capture Network Based on Brain MRI Images for Lesion Segmentation. Brain Sci 2022; 12:765. [PMID: 35741650] [PMCID: PMC9221241] [DOI: 10.3390/brainsci12060765]
Abstract
In recent years, the rising incidence of brain stroke has made fast and accurate segmentation of lesion areas from brain MRI images important. With the development of deep learning, computer-based segmentation methods have become a way to assist clinicians in early diagnosis and treatment planning. Nevertheless, the variety of lesion sizes in brain MRI images and the rough boundaries of lesions challenge the accuracy of segmentation algorithms. Current mainstream medical segmentation models cannot meet these challenges, because they make insufficient use of image features and context information. This paper proposes a novel feature enhancement and context capture network (FECC-Net), composed mainly of an atrous spatial pyramid pooling (ASPP) module and an enhanced encoder. In particular, the ASPP module uses parallel convolution operations with different sampling rates to enrich multi-scale features and fully capture image context information, in order to process lesions of different sizes. The enhanced encoder obtains deep semantic features and shallow boundary features during feature extraction, which helps restore lesion boundaries. We divide the pathological images into three levels according to the number of pixels in the ground-truth mask area and evaluate FECC-Net on the open Anatomical Tracings of Lesions After Stroke (ATLAS) dataset. The experimental results show that FECC-Net outperforms mainstream methods such as DoubleU-Net and TransUNet; in small-target tasks especially, FECC-Net leads DoubleU-Net by 4.09% on the main indicator, DSC. FECC-Net is therefore encouraging for brain MRI image applications.
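The ASPP idea, parallel dilated convolutions whose outputs are concatenated to mix receptive fields, can be sketched in PyTorch as follows. The rates and channel counts are illustrative, not FECC-Net's actual configuration:

```python
# Minimal ASPP block: parallel dilated convolutions, concatenated and fused.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees a different receptive field; concatenation mixes
        # multi-scale context before a 1x1 fusion convolution.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 32, 32)
print(ASPP(64, 32)(x).shape)  # torch.Size([1, 32, 32, 32])
```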
60
Krenzer A, Makowski K, Hekalo A, Fitting D, Troya J, Zoller WG, Hann A, Puppe F. Fast machine learning annotation in the medical domain: a semi-automated video annotation tool for gastroenterologists. Biomed Eng Online 2022; 21:33. [PMID: 35614504] [PMCID: PMC9134702] [DOI: 10.1186/s12938-022-01001-x]
Abstract
BACKGROUND Machine learning, especially deep learning, is becoming more and more relevant to research and development in the medical domain. For all supervised deep learning applications, data is the most critical factor in securing successful implementation and sustaining the progress of the machine learning model. Gastroenterological data in particular, which often involve endoscopic videos, are cumbersome to annotate, and domain experts are needed to interpret and annotate the videos. To support those domain experts, we developed a framework in which, instead of annotating every frame in a video sequence, experts perform only key annotations at the beginning and end of sequences with pathologies, e.g., visible polyps; non-expert annotators supported by machine learning then add the missing annotations for the frames in between. METHODS In our framework, an expert reviews the video and annotates a few video frames to verify the object's annotations for the non-expert. In a second step, the non-expert has visual confirmation of the given object and can annotate all following and preceding frames with AI assistance. After the expert has finished, relevant frames are selected and passed on to an AI model, which detects and marks the desired object on all following and preceding frames. The non-expert can then adjust and modify the AI predictions and export the results, which can be used to train the AI model. RESULTS Using this framework, we were able to reduce the workload of domain experts on our data by a factor of 20 on average, primarily because the framework is designed to minimize the domain expert's workload. Pairing the framework with a state-of-the-art semi-automated AI model enhances the annotation speed further. Through a prospective study with 10 participants, we show that semi-automated annotation using our tool doubles the annotation speed of non-expert annotators compared with a well-known state-of-the-art annotation tool. CONCLUSION In summary, we introduce a framework for fast expert annotation for gastroenterologists that considerably reduces the domain expert's workload while maintaining very high annotation quality. The framework incorporates a semi-automated annotation system utilizing trained object detection models. The software and framework are open source.
Affiliation(s)
- Adrian Krenzer: Department of Artificial Intelligence and Knowledge Systems, Sanderring 2, 97070, Würzburg, Germany
- Kevin Makowski: Department of Artificial Intelligence and Knowledge Systems, Sanderring 2, 97070, Würzburg, Germany
- Amar Hekalo: Department of Artificial Intelligence and Knowledge Systems, Sanderring 2, 97070, Würzburg, Germany
- Daniel Fitting: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Joel Troya: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Wolfram G Zoller: Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174, Stuttgart, Germany
- Alexander Hann: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Frank Puppe: Department of Artificial Intelligence and Knowledge Systems, Sanderring 2, 97070, Würzburg, Germany
61
Ahmad OF, González-Bueno Puyal J, Brandao P, Kader R, Abbasi F, Hussein M, Haidry RJ, Toth D, Mountney P, Seward E, Vega R, Stoyanov D, Lovat LB. Performance of artificial intelligence for detection of subtle and advanced colorectal neoplasia. Dig Endosc 2022; 34:862-869. [PMID: 34748665] [DOI: 10.1111/den.14187]
Abstract
OBJECTIVES There is uncertainty regarding the efficacy of artificial intelligence (AI) software to detect subtle advanced neoplasia, particularly flat lesions and sessile serrated lesions (SSLs), owing to their low prevalence in testing datasets and prospective trials. This has been highlighted as a top research priority for the field. METHODS An AI algorithm was evaluated on four video test datasets containing 173 polyps (35,114 polyp-positive frames and 634,988 polyp-negative frames) specifically enriched with flat lesions and SSLs, including a challenging dataset containing subtle advanced neoplasia. The challenging dataset was also evaluated by eight endoscopists (four independent and four trainees, according to the Joint Advisory Group on gastrointestinal endoscopy [JAG] standards in the UK). RESULTS In the first two video datasets, the algorithm achieved per-polyp sensitivities of 100% and 98.9%; per-frame sensitivities were 84.1% and 85.2%. On the subtle dataset, the algorithm detected a significantly higher number of polyps (P < 0.0001) than JAG-independent and trainee endoscopists, achieving per-polyp sensitivities of 79.5%, 37.2%, and 11.5%, respectively. Furthermore, for subtle polyps detected by both the algorithm and at least one endoscopist, the AI detected polyps significantly faster on average. CONCLUSIONS The AI-based algorithm achieved high per-polyp sensitivities for advanced colorectal neoplasia, including flat lesions and SSLs, outperforming both JAG-independent endoscopists and trainees on a very challenging dataset containing subtle lesions that could easily be overlooked and contribute to interval colorectal cancer. Further prospective trials should evaluate AI for detecting subtle advanced neoplasia in populations at higher risk of colorectal cancer.
Affiliation(s)
- Omer F Ahmad: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Division of Surgery and Interventional Sciences, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
- Juana González-Bueno Puyal: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Odin Vision Ltd, London, UK
- Patrick Brandao: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Odin Vision Ltd, London, UK
- Rawen Kader: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Division of Surgery and Interventional Sciences, University College London, London, UK
- Faisal Abbasi: Division of Surgery and Interventional Sciences, University College London, London, UK
- Mohamed Hussein: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Division of Surgery and Interventional Sciences, University College London, London, UK
- Rehan J Haidry: Division of Surgery and Interventional Sciences, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
- Ed Seward: Gastrointestinal Services, University College London Hospital, London, UK
- Roser Vega: Gastrointestinal Services, University College London Hospital, London, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Division of Surgery and Interventional Sciences, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
62
Sharma P, Balabantaray BK, Bora K, Mallik S, Kasugai K, Zhao Z. An Ensemble-Based Deep Convolutional Neural Network for Computer-Aided Polyps Identification From Colonoscopy. Front Genet 2022; 13:844391. [PMID: 35559018] [PMCID: PMC9086187] [DOI: 10.3389/fgene.2022.844391]
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death globally. Early detection and removal of precancerous polyps can significantly reduce the chance of CRC patient death. Currently, the polyp detection rate mainly depends on the skill and expertise of gastroenterologists, and over time, unidentified polyps can develop into cancer. Machine learning has recently emerged as a powerful method for assisting clinical diagnosis. Several classification models have been proposed to identify polyps, but their performance has not yet been comparable to that of an expert endoscopist. Here, we propose a multiple-classifier consultation strategy to create an effective and powerful classifier for polyp identification. This strategy benefits from recent findings that different classification models learn and extract different information from an image, so an ensemble classifier can reach a better-grounded decision than any individual classifier. The combined representation inherits ResNet's advantage of residual connections, while also capturing partially occluded objects through the depth-wise separable convolution layers of the Xception model. We applied this strategy to still frames extracted from a colonoscopy video; it outperformed other state-of-the-art techniques, with every performance measure exceeding 95%. Our method will help researchers and gastroenterologists develop clinically applicable, computationally guided tools for colonoscopy screening, and it may be extended to other clinical diagnoses that rely on imaging.
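A consultation of classifiers of this kind is often realized as soft voting: averaging the class probabilities of independently trained backbones. The sketch below shows only that fusion step; it uses two torchvision ResNets as stand-ins (torchvision ships no Xception) with untrained weights so it runs offline, and is not the authors' implementation:

```python
# Soft-voting ensemble of two CNN classifiers on one colonoscopy frame.
import torch
from torchvision.models import resnet18, resnet50

model_a = resnet18(weights=None, num_classes=2).eval()  # stand-in backbone
model_b = resnet50(weights=None, num_classes=2).eval()  # stand-in backbone

frame = torch.randn(1, 3, 224, 224)  # one still frame (toy input)
with torch.no_grad():
    p_a = torch.softmax(model_a(frame), dim=1)
    p_b = torch.softmax(model_b(frame), dim=1)
    ensemble = (p_a + p_b) / 2  # consultation: average class probabilities
print(ensemble, ensemble.argmax(dim=1))  # assumed classes: 0 = non-polyp, 1 = polyp
```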
Affiliation(s)
- Pallabi Sharma: Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Bunil Kumar Balabantaray: Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Kangkana Bora: Computer Science and Information Technology, Cotton University, Guwahati, India
- Saurav Mallik: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Kunio Kasugai: Department of Gastroenterology, Aichi Medical University, Nagakute, Japan
- Zhongming Zhao: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States; Human Genetics Center, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, United States; MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, United States
63
Nogueira-Rodríguez A, Reboiro-Jato M, Glez-Peña D, López-Fernández H. Performance of Convolutional Neural Networks for Polyp Localization on Public Colonoscopy Image Datasets. Diagnostics (Basel) 2022; 12:898. [PMID: 35453946] [PMCID: PMC9027927] [DOI: 10.3390/diagnostics12040898]
Abstract
Colorectal cancer is one of the most frequent malignancies. Colonoscopy is the de facto standard for detecting precancerous lesions (i.e., polyps) in the colon during screening studies or after facultative recommendation. In recent years, artificial intelligence, and especially deep learning techniques such as convolutional neural networks, have been applied to polyp detection and localization in order to develop real-time CADe systems. However, the performance of machine learning models is very sensitive to changes in the nature of the testing instances, especially when trying to reproduce results on datasets entirely different from those used for model development, i.e., inter-dataset testing. Here, we report the results of testing our previously published polyp detection model on ten public colonoscopy image datasets and analyze them in the context of the results of 20 other state-of-the-art publications using the same datasets. The F1-score of our recently published model was 0.88 when evaluated on a private test partition, i.e., intra-dataset testing, but it decayed, on average, by 13.65% when tested on the ten public datasets. In the published research, the average intra-dataset F1-score is 0.91, and we observed that it likewise decays in the inter-dataset setting, to an average F1-score of 0.83.
Affiliation(s)
- Alba Nogueira-Rodríguez: CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Miguel Reboiro-Jato: CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Daniel Glez-Peña: CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Hugo López-Fernández: CINBIO, Department of Computer Science, ESEI-Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
64
Nisha JS, Gopi VP, Palanisamy P. Classification of Informative Frames in Colonoscopy Video Based on Image Enhancement and PHOG Feature Extraction. Biomedical Engineering: Applications, Basis and Communications 2022; 34. [DOI: 10.4015/s1016237222500156]
Abstract
Colonoscopy allows doctors to check for abnormalities in the intestinal tract without any surgical operation. A major problem in the computer-aided diagnosis (CAD) of colonoscopy images is their low illumination. This study provides an image enhancement method together with feature extraction and classification techniques for detecting polyps in colonoscopy images. We propose a novel image enhancement method with a Pyramid Histogram of Oriented Gradients (PHOG) feature extractor to detect polyps, and evaluate the approach across different classifiers: Multi-Layer Perceptron (MLP), AdaBoost, Support Vector Machine (SVM), and Random Forest. The proposed method was trained on the publicly available CVC-ClinicDB database and tested on ETIS-Larib and CVC-ColonDB, where it outperformed the existing state-of-the-art methods on both databases. The reliability of the classifiers' performance was examined by comparing their F1 score, precision, F2 score, recall, and accuracy. PHOG with the Random Forest classifier surpassed the existing methods, with recall of 97.95%, precision of 98.46%, F1 score of 98.20%, F2 score of 98.00%, and accuracy of 98.21% on CVC-ColonDB; on the ETIS-Larib dataset it attained recall of 96.83%, precision of 98.65%, F1 score of 97.73%, F2 score of 98.59%, and accuracy of 97.75%. The proposed image enhancement with PHOG feature extraction and a Random Forest classifier should help doctors evaluate and analyze anomalies in colonoscopy data and make decisions quickly.
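A PHOG-style descriptor pools orientation histograms over successively finer grids and concatenates them. A minimal scikit-image sketch is shown below; the bin count and pyramid depth are illustrative assumptions, not the paper's settings, and the resulting vector could then feed a classifier such as scikit-learn's RandomForestClassifier:

```python
# Pyramid-of-HOG descriptor: orientation histograms over 1x1, 2x2, 4x4 grids.
import numpy as np
from skimage.feature import hog

def phog_descriptor(gray: np.ndarray, levels: int = 3, bins: int = 9) -> np.ndarray:
    h, w = gray.shape
    feats = []
    for level in range(levels):
        cells = 2 ** level  # grid resolution at this pyramid level
        feats.append(hog(
            gray,
            orientations=bins,
            pixels_per_cell=(h // cells, w // cells),
            cells_per_block=(1, 1),
            feature_vector=True,
        ))
    return np.concatenate(feats)

img = np.random.rand(128, 128)
print(phog_descriptor(img).shape)  # (9*1 + 9*4 + 9*16,) = (189,)
```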
Affiliation(s)
- J. S. Nisha: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
- Varun P. Gopi: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
- P. Palanisamy: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
65
Simple U-net based synthetic polyp image generation: Polyp to negative and negative to polyp. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103491]
66
Hasan MM, Islam N, Rahman MM. Gastrointestinal polyp detection through a fusion of contourlet transform and Neural features. Journal of King Saud University - Computer and Information Sciences 2022. [DOI: 10.1016/j.jksuci.2019.12.013]
67
Nisha JS, Gopi VP, Palanisamy P. Automated colorectal polyp detection based on image enhancement and dual-path CNN architecture. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103465]
68
Nisha JS, Gopi VP, Palanisamy P. Automated Polyp Detection in Colonoscopy Videos Using Image Enhancement and Saliency Detection Algorithm. Biomedical Engineering: Applications, Basis and Communications 2022; 34. [DOI: 10.4015/s1016237222500016]
Abstract
Colonoscopy has proven to be an effective diagnostic tool for examining anomalies of the lower digestive tract. This paper presents a computer-aided detection (CAD) method for polyps in colonoscopy images that helps diagnose colorectal cancer (CRC) at an early stage. The proposed method consists primarily of image enhancement, followed by creation of a saliency map, feature extraction using the Histogram of Oriented Gradients (HOG) feature extractor, and classification using a Support Vector Machine (SVM). We present an efficient image enhancement algorithm for highlighting clinically significant features in colonoscopy images; the proposed enhancement can improve overall contrast and brightness by minimizing the effects of inconsistent illumination. Detailed experiments were conducted using the publicly available colonoscopy databases CVC-ClinicDB, CVC-ColonDB, and ETIS-Larib. The performance measures are precision of 91.69%, recall of 81.53%, F1-score of 86.31%, and F2-score of 89.45% for the CVC-ColonDB database, and precision of 90.29%, recall of 61.73%, F1-score of 73.32%, and F2-score of 82.64% for the ETIS-Larib database. Comparison with state-of-the-art methods shows that the proposed approach surpasses the existing ones in precision, F1-score, and F2-score. The proposed enhancement with saliency-based selection significantly reduces the number of search windows, resulting in an efficient polyp detection algorithm.
Affiliation(s)
- J. S. Nisha: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tamil Nadu, India
- V. P. Gopi: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tamil Nadu, India
- P. Palanisamy: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tamil Nadu, India
69
Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S. Surgical data science - from concepts toward clinical translation. Med Image Anal 2022; 76:102306. [PMID: 34879287] [PMCID: PMC9135051] [DOI: 10.1016/j.media.2021.102306]
Abstract
Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.
Affiliation(s)
- Lena Maier-Hein: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Matthias Eisenmann: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Duygu Sarikaya: Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey; LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Keno März: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Anand Malpani: The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Hubertus Feussner: Department of Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Stamatia Giannarou: The Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
- Pietro Mascagni: ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Adrian Park: Department of Surgery, Anne Arundel Health System, Annapolis, Maryland, USA; Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Carla Pugh: Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Swaroop S Vedula: The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Kevin Cleary: The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, D.C., USA
- Germain Forestier: L'Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), University of Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Bernard Gibaud: LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Teodor Grantcharov: University of Toronto, Toronto, Ontario, Canada; The Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Ontario, Canada
- Makoto Hashizume: Kyushu University, Fukuoka, Japan; Kitakyushu Koga Hospital, Fukuoka, Japan
- Doreen Heckmann-Nötzel: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Hannes G Kenngott: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Ron Kikinis: Department of Radiology, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Nassir Navab: Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Sinan Onogur: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Roß: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Raphael Sznitman: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Russell H Taylor: Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Minu D Tizabi: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Wagner: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Gregory D Hager: The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Thomas Neumuth: Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
- Nicolas Padoy: ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Justin Collins: Division of Surgery and Interventional Science, University College London, London, United Kingdom
- Ines Gockel: Department of Visceral, Transplant, Thoracic and Vascular Surgery, Leipzig University Hospital, Leipzig, Germany
- Jan Goedeke: Pediatric Surgery, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
- Daniel A Hashimoto: University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA; Surgical AI and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Luc Joyeux: My FetUZ Fetal Research Center, Department of Development and Regeneration, Biomedical Sciences, KU Leuven, Leuven, Belgium; Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium; Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, University Hospitals Leuven, Leuven, Belgium; Michael E. DeBakey Department of Surgery, Texas Children's Hospital and Baylor College of Medicine, Houston, Texas, USA
- Kyle Lam: Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Daniel R Leff: Department of BioSurgery and Surgical Technology, Imperial College London, London, United Kingdom; Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Breast Unit, Imperial Healthcare NHS Trust, London, United Kingdom
- Amin Madani: Department of Surgery, University Health Network, Toronto, Ontario, Canada
- Hani J Marcus: National Hospital for Neurology and Neurosurgery, and UCL Queen Square Institute of Neurology, London, United Kingdom
- Ozanan Meireles: Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Alexander Seitel: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dogu Teber: Department of Urology, City Hospital Karlsruhe, Karlsruhe, Germany
- Frank Ückert: Institute for Applied Medical Informatics, Hamburg University Hospital, Hamburg, Germany
- Beat P Müller-Stich: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Pierre Jannin: LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Stefanie Speidel: Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
70
Xu J, Zhang Q, Yu Y, Zhao R, Bian X, Liu X, Wang J, Ge Z, Qian D. Deep reconstruction-recoding network for unsupervised domain adaptation and multi-center generalization in colonoscopy polyp detection. Comput Methods Programs Biomed 2022; 214:106576. [PMID: 34915425] [DOI: 10.1016/j.cmpb.2021.106576]
Abstract
BACKGROUND AND OBJECTIVE Currently, the best performing methods in colonoscopy polyp detection are primarily based on deep neural networks (DNNs), which are usually trained on large amounts of labeled data. However, different hospitals use different endoscope models and imaging parameters, so the collected endoscopic images and videos vary greatly in style: the color space, brightness, contrast, and resolution may differ, and there are also differences between white-light endoscopy (WLE) and narrow-band imaging endoscopy (NBIE). We call these variations the domain shift. DNN performance may decrease when the training data and the testing data come from different hospitals or different endoscope models, and it is usually quite difficult to collect enough new labeled data and retrain a new DNN model before deploying that DNN to a new hospital or endoscope model. METHODS To solve this problem, we propose a domain adaptation model called the Deep Reconstruction-Recoding Network (DRRN), which jointly learns a shared encoding representation for two tasks: (i) a supervised object detection network for labeled source data and (ii) an unsupervised reconstruction-recoding network for unlabeled target data. Through the DRRN, the object detection network's encoder not only learns the features of the labeled source domain but also encodes useful information from the unlabeled target domain, so the distribution difference between the two domains' feature spaces can be reduced. RESULTS We evaluate the performance of the DRRN on a series of cross-domain datasets. Compared with training the polyp detection network using only source data, the DRRN improves performance on the target domain. Feature statistics and visualization demonstrate that the DRRN can learn the common distribution and feature invariance of the two domains, reducing the distribution difference between their feature spaces. CONCLUSION The DRRN can improve cross-domain polyp detection. With the DRRN, the generalization performance of a DNN-based polyp detection model can be improved without additional labeled data, allowing the model to be easily transferred to datasets from different hospitals or endoscope models.
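The shared-encoder pattern the abstract describes, supervised detection on source data plus unsupervised reconstruction on target data flowing through one encoder, can be sketched generically in PyTorch. The modules and the placeholder detection loss below are toy assumptions, not the DRRN's actual layers:

```python
# Joint supervised-detection + unsupervised-reconstruction training step
# through a shared encoder (generic sketch).
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
det_head = nn.Conv2d(16, 5, 1)             # toy head: 4 box coords + 1 score
decoder = nn.Conv2d(16, 3, 3, padding=1)   # reconstruction ("recoding") path
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(det_head.parameters()) + list(decoder.parameters()))

labeled_src = torch.randn(2, 3, 64, 64)      # labeled source-domain frames
unlabeled_tgt = torch.randn(2, 3, 64, 64)    # unlabeled target-domain frames
fake_det_target = torch.randn(2, 5, 64, 64)  # placeholder supervision

# Supervised loss on source + reconstruction loss on unlabeled target;
# both gradients flow into the shared encoder, pulling the two domains'
# feature distributions together.
loss = nn.functional.mse_loss(det_head(encoder(labeled_src)), fake_det_target) \
     + nn.functional.mse_loss(decoder(encoder(unlabeled_tgt)), unlabeled_tgt)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```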
Affiliation(s)
- Jianwei Xu: Deepwise Healthcare Joint Research Laboratory, Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qingwei Zhang: Division of Gastroenterology and Hepatology, Key Laboratory of Gastroenterology and Hepatology, Ministry of Health, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai Institute of Digestive Disease, Shanghai, China
- Yizhou Yu: Deepwise Artificial Intelligence Laboratory, Beijing, China
- Ran Zhao: Division of Gastroenterology and Hepatology, Key Laboratory of Gastroenterology and Hepatology, Ministry of Health, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai Institute of Digestive Disease, Shanghai, China
- Xianzhang Bian: Deepwise Healthcare Joint Research Laboratory, Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaoqing Liu: Deepwise Artificial Intelligence Laboratory, Beijing, China
- Jun Wang: Deepwise Healthcare Joint Research Laboratory, Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhizheng Ge: Division of Gastroenterology and Hepatology, Key Laboratory of Gastroenterology and Hepatology, Ministry of Health, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai Institute of Digestive Disease, Shanghai, China
- Dahong Qian: Deepwise Healthcare Joint Research Laboratory, Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
|
71
|
Wang D, Chen S, Sun X, Chen Q, Cao Y, Liu B, Liu X. AFP-Mask: Anchor-free Polyp Instance Segmentation in Colonoscopy. IEEE J Biomed Health Inform 2022; 26:2995-3006. [PMID: 35104234 DOI: 10.1109/jbhi.2022.3147686] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Colorectal cancer (CRC) is a common and lethal disease. Globally, CRC is the third most commonly diagnosed cancer in males and the second in females. The most effective way to prevent CRC is to use colonoscopy to identify and remove precancerous growths at an early stage. During colonoscopy, a tiny camera at the tip of the endoscope captures a video of the intestinal mucosa of the colon, while a specialized physician examines the lining of the entire colon and checks for any precancerous growths (polyps) through the live feed. The detection and removal of colorectal polyps have been found to be associated with a reduction in mortality from colorectal cancer. However, the false negative rate of polyp detection during colonoscopy is often high, even for experienced physicians, due to the high variance in polyp shape, size, texture, color, and illumination, which makes polyps difficult to detect. With recent advances in deep learning-based object detection techniques, automated polyp detection shows great potential in helping physicians reduce the polyp miss rate during colonoscopy. In this paper, we propose a novel anchor-free instance segmentation framework that can localize polyps and produce the corresponding instance-level masks without using predefined anchor boxes. Our framework consists of two branches: (a) an object detection branch that performs classification and localization, and (b) a mask generation branch that produces instance-level masks. Instead of predicting a two-dimensional mask directly, we encode it into a compact representation vector, which allows us to incorporate instance segmentation into one-stage bounding-box detectors in a simple yet effective way. Moreover, our proposed encoding method can be trained jointly with the object detector. Our experimental results show that our framework achieves a precision of 99.36% and a recall of 96.44% on public datasets, outperforming existing anchor-free instance segmentation methods by at least 2.8% in mIoU on our private dataset.
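The key idea here, regressing a compact vector instead of a full 2D mask, is commonly realized with a linear (for example, PCA-based) mask codebook. The sketch below illustrates that general technique under assumed values (28x28 masks, 60-dimensional codes); the paper's actual encoding method may differ.

```python
# Sketch: encode binary instance masks into compact vectors via PCA, one
# common realization of the "compact mask representation" idea (assumption;
# not necessarily the paper's encoder).
import numpy as np
from sklearn.decomposition import PCA

MASK_SIZE = 28   # masks resized to a fixed resolution (assumed value)
CODE_DIM = 60    # length of the compact representation (assumed value)

# Toy training data: (N, 28*28) flattened binary masks.
rng = np.random.default_rng(0)
train_masks = (rng.random((500, MASK_SIZE * MASK_SIZE)) > 0.5).astype(np.float32)

pca = PCA(n_components=CODE_DIM)
pca.fit(train_masks)

def encode(mask_2d):
    """Binary (28, 28) mask -> length-60 vector the detector can regress."""
    return pca.transform(mask_2d.reshape(1, -1))[0]

def decode(code):
    """Length-60 vector -> (28, 28) binary mask via thresholded reconstruction."""
    recon = pca.inverse_transform(code.reshape(1, -1))[0]
    return (recon.reshape(MASK_SIZE, MASK_SIZE) > 0.5).astype(np.uint8)

code = encode(train_masks[0].reshape(MASK_SIZE, MASK_SIZE))
mask = decode(code)
```

The attraction of this design is that the mask branch becomes an ordinary vector-regression head, so it bolts onto a one-stage box detector without any anchor machinery.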
Collapse
|
72
|
Yoon D, Kong HJ, Kim BS, Cho WS, Lee JC, Cho M, Lim MH, Yang SY, Lim SH, Lee J, Song JH, Chung GE, Choi JM, Kang HY, Bae JH, Kim S. Colonoscopic image synthesis with generative adversarial network for enhanced detection of sessile serrated lesions using convolutional neural network. Sci Rep 2022; 12:261. [PMID: 34997124 PMCID: PMC8741803 DOI: 10.1038/s41598-021-04247-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Accepted: 12/20/2021] [Indexed: 12/28/2022] Open
Abstract
Computer-aided detection (CADe) systems have been actively researched for polyp detection in colonoscopy. To be effective, such a system must detect the additional polyps that endoscopists may easily miss. Sessile serrated lesions (SSLs) are a precursor to colorectal cancer with a relatively high miss rate, owing to their flat and subtle morphology. Colonoscopy CADe systems could help endoscopists here; however, current systems exhibit very low performance in detecting SSLs. We propose a polyp detection system that reflects the morphological characteristics of SSLs to detect unrecognized or easily missed polyps. To develop a well-trained system from imbalanced polyp data, a generative adversarial network (GAN) was used to synthesize high-resolution whole endoscopic images, including SSLs. Quantitative and qualitative evaluations of the GAN-synthesized images confirmed that they are realistic and include SSL endoscopic features. Moreover, traditional augmentation methods were used as a baseline against which to compare the efficacy of the GAN augmentation method. The CADe system augmented with GAN-synthesized images showed a 17.5% improvement in sensitivity on SSLs. Consequently, we verified the potential of the GAN to synthesize high-resolution images with endoscopic features, and the proposed system was found to be effective in detecting easily missed polyps during colonoscopy.
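As a rough illustration of GAN-based augmentation for an under-represented class such as SSLs, the following sketch shows one generic adversarial training step (non-saturating loss, fully connected toy networks). The paper's high-resolution whole-image GAN is far more elaborate; every architecture and hyperparameter below is an assumption for illustration.

```python
# Generic GAN training step of the kind used to synthesize extra images for
# an under-represented class (toy networks; not the paper's architecture).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 64 * 64 * 3), nn.Tanh())
D = nn.Sequential(nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def gan_step(real_batch):
    """real_batch: (B, 64*64*3) flattened frames of the rare class."""
    b = real_batch.size(0)
    fake = G(torch.randn(b, 100))
    # Discriminator: push real toward 1, synthetic toward 0.
    loss_d = (bce(D(real_batch), torch.ones(b, 1))
              + bce(D(fake.detach()), torch.zeros(b, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator (non-saturating loss).
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Once trained, the generator's samples are simply mixed into the detector's training set alongside conventionally augmented frames, which is how the sensitivity comparison above is set up.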
Collapse
Affiliation(s)
- Dan Yoon
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, South Korea
| | - Hyoun-Joong Kong
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, Seoul, 03080, South Korea
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Medical Big Data Research Center, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826, South Korea
| | - Byeong Soo Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, South Korea
| | - Woo Sang Cho
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, 08826, South Korea
| | - Jung Chan Lee
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, 03080, South Korea
- Institute of Bioengineering, Seoul National University, Seoul, 08826, South Korea
| | - Minwoo Cho
- Transdisciplinary Department of Medicine and Advanced Technology, Seoul National University Hospital, Seoul, 03080, South Korea
- Biomedical Research Institute, Seoul National University Hospital, Seoul, 03080, South Korea
| | - Min Hyuk Lim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, South Korea
| | - Sun Young Yang
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, South Korea
| | - Seon Hee Lim
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, South Korea
| | - Jooyoung Lee
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, South Korea
| | - Ji Hyun Song
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, South Korea
| | - Goh Eun Chung
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, South Korea
| | - Ji Min Choi
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, South Korea
| | - Hae Yeon Kang
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, South Korea
| | - Jung Ho Bae
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, 06236, South Korea.
| | - Sungwan Kim
- Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826, South Korea
- Institute of Bioengineering, Seoul National University, Seoul, 08826, South Korea
| |
Collapse
|
73
|
Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-58080-3_163-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
74
|
Sánchez-Peralta LF, Pagador JB, Sánchez-Margallo FM. Artificial Intelligence for Colorectal Polyps in Colonoscopy. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
75
|
AIM in Endoscopy Procedures. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_164] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
76
|
Strümke I, Hicks SA, Thambawita V, Jha D, Parasa S, Riegler MA, Halvorsen P. Artificial Intelligence in Gastroenterology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
77
|
Medical Image Classification Based on Information Interaction Perception Mechanism. Comput Intell Neurosci 2021; 2021:8429899. [PMID: 34912447 PMCID: PMC8668365 DOI: 10.1155/2021/8429899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 11/12/2021] [Indexed: 12/18/2022]
Abstract
Colorectal cancer originates from adenomatous polyps. These polyps start out as benign, but over time they can become malignant, spreading to adherent and surrounding organs such as the lymph nodes, liver, or lungs, and eventually leading to complications and death. Factors such as an operator's lack of experience and visual fatigue directly affect the diagnostic accuracy of colonoscopy. To relieve the pressure on medical imaging personnel, this paper proposes a network model for colonic polyp detection using colonoscopy images. Considering the unnoticeable surface texture of colonic polyps, this paper designs a channel information interaction perception (CIIP) module. Based on this module, an information interaction perception network (IIP-Net) is proposed. To improve classification accuracy and reduce computational cost, the network uses three classifier structures: fully connected (FC), global average pooling followed by a fully connected layer (GAP-FC), and convolution followed by global average pooling (C-GAP). We evaluated the performance of IIP-Net on colonoscopy images randomly selected from a gastroscopy database. The experimental results showed that the IIP-NET54-GAP-FC variant performed extremely well, with an overall accuracy of 99.59% and an accuracy of 99.40% on colonic polyps.
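The three classifier heads named above (FC, GAP-FC, and C-GAP) differ mainly in parameter count. A small PyTorch sketch makes the contrast concrete; the 512-channel, 7x7 feature map and the four output classes are placeholders, not values from the paper.

```python
# Sketch contrasting the three classifier heads described above
# (placeholder channel and class counts, not the paper's values).
import torch
import torch.nn as nn

feat = torch.randn(1, 512, 7, 7)   # stand-in backbone feature map

# FC head: flatten everything, then a dense layer (most parameters).
fc_head = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, 4))

# GAP-FC head: global average pooling first, then a small dense layer.
gap_fc_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                            nn.Linear(512, 4))

# C-GAP head: 1x1 convolution to class channels, then global average pooling.
c_gap_head = nn.Sequential(nn.Conv2d(512, 4, kernel_size=1),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten())

for name, head in [("FC", fc_head), ("GAP-FC", gap_fc_head), ("C-GAP", c_gap_head)]:
    n_params = sum(p.numel() for p in head.parameters())
    print(name, tuple(head(feat).shape), f"{n_params} parameters")
```

On this toy configuration the FC head carries roughly 100k parameters while the GAP-FC and C-GAP heads carry about 2k each, which is why pooling-based heads cut the computational cost of classification.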
Collapse
|
78
|
Detection of elusive polyps using a large-scale artificial intelligence system (with videos). Gastrointest Endosc 2021; 94:1099-1109.e10. [PMID: 34216598 DOI: 10.1016/j.gie.2021.06.021] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Accepted: 06/22/2021] [Indexed: 12/11/2022]
Abstract
BACKGROUND AND AIMS Colorectal cancer is a leading cause of death. Colonoscopy is the criterion standard for detection and removal of precancerous lesions and has been shown to reduce mortality. The polyp miss rate during colonoscopies is 22% to 28%. DEEP DEtection of Elusive Polyps (DEEP2) is a new polyp detection system based on deep learning that alerts the operator in real time to the presence and location of polyps. The primary outcome was the performance of DEEP2 on the detection of elusive polyps. METHODS The DEEP2 system was trained on 3611 hours of colonoscopy videos derived from 2 sources and was validated on a set comprising 1393 hours from a third, unrelated source. Ground truth labeling was provided by offline gastroenterologist annotators who were able to watch the video in slow motion and pause and rewind as required. To assess applicability, stability, and user experience and to obtain preliminary data on performance in a real-life scenario, a preliminary prospective clinical validation study was performed comprising 100 procedures. RESULTS DEEP2 achieved a sensitivity of 97.1% at 4.6 false alarms per video for all polyps, and sensitivities of 88.5% and 84.9% for polyps in the field of view for less than 5 and 2 seconds, respectively. DEEP2 detected polyps seen by neither the live endoscopists nor the offline annotators at an average of 0.22 polyps per sequence. In the clinical validation study, the system detected an average of 0.89 additional polyps per procedure. No adverse events occurred. CONCLUSIONS DEEP2 has a high sensitivity for polyp detection and was effective in increasing the detection of polyps both in colonoscopy videos and in real procedures, with a low number of false alarms. (Clinical trial registration number: NCT04693078.)
Collapse
|
79
|
Li JW, Chia T, Fock KM, Chong KDW, Wong YJ, Ang TL. Artificial intelligence and polyp detection in colonoscopy: Use of a single neural network to achieve rapid polyp localization for clinical use. J Gastroenterol Hepatol 2021; 36:3298-3307. [PMID: 34327729 DOI: 10.1111/jgh.15642] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 05/11/2021] [Accepted: 07/22/2021] [Indexed: 02/05/2023]
Abstract
BACKGROUND AND AIM Artificial intelligence has been extensively studied to assist clinicians in polyp detection, but such systems usually require extensive processing power, making them prohibitively expensive and hindering wide adoption. The current study used a fast object detection algorithm, known as YOLOv3, to achieve real-time polyp detection on a laptop. In addition, we evaluated and classified the causes of false detections to further improve accuracy. METHODS The YOLOv3 algorithm was trained and validated with 6038 and 2571 polyp images, respectively. Videos from live colonoscopies in a tertiary center and those obtained from public databases were used for the training and validation sets. The algorithm was tested on 10 unseen videos from the CVC-Video ClinicDB dataset. Only bounding boxes with an intersection over union of > 0.3 were considered positive predictions. RESULTS The polyp detection rate in our study was 100%, with the algorithm able to detect every polyp in each video. Sensitivity, specificity, and F1 score were 74.1%, 85.1%, and 83.3, respectively. The algorithm achieved a speed of 61.2 frames per second (fps) on a desktop RTX2070 GPU and 27.2 fps on a laptop GTX2060 GPU. Nearly a quarter of false negatives occurred when the polyps were at the corner of an image. Image blurriness accounted for approximately 3% of false positive and 9% of false negative detections. CONCLUSION The YOLOv3 algorithm can achieve real-time polyp detection with high accuracy and speed on a desktop GPU, making it low cost and accessible to most endoscopy centers worldwide.
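The acceptance rule used in this study, counting a predicted bounding box as positive only when its intersection over union (IoU) with the ground truth exceeds 0.3, is straightforward to compute. A minimal sketch, assuming corner-format boxes:

```python
# IoU between a predicted and a ground-truth box, with the > 0.3 acceptance
# rule described above (boxes as [x1, y1, x2, y2]).
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, gt = [40, 40, 120, 120], [60, 50, 140, 130]   # illustrative boxes
print(iou(pred, gt) > 0.3)  # True -> counted as a positive prediction
```

A relatively loose threshold like 0.3 credits detections that localize a polyp only roughly, which suits an alerting system where drawing the operator's eye matters more than pixel-tight boxes.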
Collapse
Affiliation(s)
- James Weiquan Li
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
| | | | - Kwong Ming Fock
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
| | | | - Yu Jun Wong
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore
- Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
| | - Tiing Leong Ang
- Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore Health Services, Singapore
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Medicine Academic Clinical Programme, SingHealth Duke-NUS, Singapore
| |
Collapse
|
80
|
Du W, Rao N, Yong J, Wang Y, Hu D, Gan T, Zhu L, Zeng B. Improving the Classification Performance of Esophageal Disease on Small Dataset by Semi-supervised Efficient Contrastive Learning. J Med Syst 2021; 46:4. [PMID: 34807297 DOI: 10.1007/s10916-021-01782-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Accepted: 10/11/2021] [Indexed: 02/05/2023]
Abstract
The classification of esophageal disease based on gastroscopic images is important for clinical treatment and is also helpful in providing patients with follow-up treatment plans and preventing lesion deterioration. In recent years, deep learning has achieved many satisfactory results in gastroscopic image classification tasks. However, most approaches need a training set consisting of large numbers of images labeled by experienced experts. To reduce the image annotation burden and improve classification on small labeled gastroscopic image datasets, this study proposed a novel semi-supervised efficient contrastive learning (SSECL) classification method for esophageal disease. First, an efficient contrastive pair generation (ECPG) module was proposed to generate efficient contrastive pairs (ECPs), taking advantage of the high similarity among images of the same lesion. Then, an unsupervised visual feature representation capturing the general features of esophageal gastroscopic images is learned by unsupervised efficient contrastive learning (UECL). Finally, the feature representation is transferred to the downstream esophageal disease classification task. The experimental results demonstrate that the classification accuracy of SSECL is 92.57%, which is better than that of other state-of-the-art semi-supervised methods and 2.28% higher than that of a classification method based on transfer learning (TL). Thus, SSECL addresses the challenging problem of improving classification results on small gastroscopic image datasets by fully utilizing unlabeled gastroscopic images and the high-similarity information among images of the same lesion. It also brings new insights into medical image classification tasks.
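Contrastive pre-training of the kind described here typically optimizes a normalized temperature-scaled cross-entropy (NT-Xent) loss over embedding pairs. The sketch below shows that generic loss; SSECL's efficient contrastive pair generation and its hyperparameters are not reproduced, and the temperature value is an assumption.

```python
# Generic NT-Xent contrastive loss over paired embeddings (illustrative;
# SSECL's ECP pairing strategy is not reproduced here).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """z1, z2: (B, D) embeddings of two views (e.g., images of one lesion)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D), unit norm
    sim = z @ z.t() / tau                                # scaled cosine sims
    n = z.size(0)
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    # The positive of sample i is its paired view at index (i + B) mod 2B.
    targets = torch.arange(n, device=z.device).roll(n // 2)
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)        # toy embeddings
loss = nt_xent(z1, z2)
```

The pairing strategy is where the paper's contribution sits: instead of two random augmentations of one image, pairs drawn from the same lesion already share anatomy, so the positives carry stronger supervisory signal.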
Collapse
Affiliation(s)
- Wenju Du
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Nini Rao
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China.
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China.
| | - Jiahao Yong
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Yingchun Wang
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Dingcan Hu
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
| | - Tao Gan
- Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu, 610017, China.
| | - Linlin Zhu
- Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu, 610017, China
| | - Bing Zeng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China
| |
Collapse
|
81
|
Pacal I, Karaman A, Karaboga D, Akay B, Basturk A, Nalbantoglu U, Coskun S. An efficient real-time colonic polyp detection with YOLO algorithms trained by using negative samples and large datasets. Comput Biol Med 2021; 141:105031. [PMID: 34802713 DOI: 10.1016/j.compbiomed.2021.105031] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Revised: 11/10/2021] [Accepted: 11/10/2021] [Indexed: 12/29/2022]
Abstract
Colorectal cancer (CRC) is one of the most common types of cancer, with a high mortality rate. Colonoscopy is the gold standard for CRC screening and significantly reduces CRC mortality. However, due to many factors, the rate of missed polyps, which are the precursors of colorectal cancer, is high in practice. Therefore, many artificial intelligence-based computer-aided diagnostic systems have been presented to increase the detection rate of missed polyps. In this article, we present deep learning-based methods for reliable computer-assisted polyp detection. The proposed methods differ from state-of-the-art methods as follows. First, we improved the performance of the YOLOv3 and YOLOv4 object detection algorithms by integrating the Cross Stage Partial Network (CSPNet) for real-time, high-performance automatic polyp detection. Then, we utilized advanced data augmentation techniques and transfer learning to improve polyp detection performance. Next, to further improve performance when training with negative samples, we substituted the Sigmoid-weighted Linear Unit (SiLU) activation function for the Leaky ReLU and Mish activation functions and adopted Complete Intersection over Union (CIoU) as the loss function. In addition, we present a comparative analysis of these activation functions for polyp detection. We applied the proposed methods to the recently published SUN polyp database and PICCOLO database. Additionally, we investigated the proposed models on the MICCAI Sub-Challenge on Automatic Polyp Detection in Colonoscopy dataset. The proposed methods outperformed the other studies in both real-time performance and polyp detection accuracy.
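The two substitutions named in the abstract, the SiLU activation and the CIoU loss, have standard textbook forms, sketched below for corner-format boxes. These are the generic definitions, not the authors' training code.

```python
# Generic SiLU activation and CIoU loss (textbook forms; not the paper's code).
import math
import torch

def silu(x):
    """Sigmoid-weighted Linear Unit: x * sigmoid(x) (also nn.SiLU in PyTorch)."""
    return x * torch.sigmoid(x)

def ciou_loss(pred, target, eps=1e-7):
    """pred, target: (..., 4) boxes as [x1, y1, x2, y2]. Returns 1 - CIoU."""
    px1, py1, px2, py2 = pred.unbind(-1)
    tx1, ty1, tx2, ty2 = target.unbind(-1)
    inter = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(0) * \
            (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(0)
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / (union + eps)
    # Squared center distance, normalized by the enclosing box diagonal.
    rho2 = ((px1 + px2 - tx1 - tx2) ** 2 + (py1 + py2 - ty1 - ty2) ** 2) / 4
    cw = torch.max(px2, tx2) - torch.min(px1, tx1)
    ch = torch.max(py2, ty2) - torch.min(py1, ty1)
    c2 = cw ** 2 + ch ** 2 + eps
    # Aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (torch.atan((tx2 - tx1) / (ty2 - ty1 + eps))
                              - torch.atan((px2 - px1) / (py2 - py1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - (iou - rho2 / c2 - alpha * v)

pred = torch.tensor([[40., 40., 120., 120.]])
gt = torch.tensor([[60., 50., 140., 130.]])
loss = ciou_loss(pred, gt)
```

Unlike a plain IoU loss, CIoU keeps a useful gradient even when boxes do not overlap, via the center-distance and aspect-ratio terms.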
Collapse
Affiliation(s)
- Ishak Pacal
- Computer Engineering Department, Engineering Faculty, Igdir University, Igdir, Turkey.
| | - Ahmet Karaman
- Gastroenterology Department, Acibadem Hospital, Kayseri, Turkey
| | - Dervis Karaboga
- Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey
| | - Bahriye Akay
- Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey
| | - Alper Basturk
- Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey
| | - Ufuk Nalbantoglu
- Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey
| | - Seymanur Coskun
- Gastroenterology Department, Acibadem Hospital, Kayseri, Turkey
| |
Collapse
|
82
|
Wang J, Jin Y, Cai S, Xu H, Heng PA, Qin J, Wang L. Real-time landmark detection for precise endoscopic submucosal dissection via shape-aware relation network. Med Image Anal 2021; 75:102291. [PMID: 34753019 DOI: 10.1016/j.media.2021.102291] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 10/22/2021] [Accepted: 10/25/2021] [Indexed: 10/19/2022]
Abstract
We propose a novel shape-aware relation network for accurate and real-time landmark detection in endoscopic submucosal dissection (ESD) surgery. This task is of great clinical significance but extremely challenging due to bleeding, lighting reflection, and motion blur in the complicated surgical environment. Compared with existing solutions, which either neglect geometric relationships among target objects or capture those relationships with complicated aggregation schemes, the proposed network achieves satisfactory accuracy while maintaining real-time performance by taking full advantage of the spatial relations among landmarks. We first devise an algorithm to automatically generate relation keypoint heatmaps, which intuitively represent prior knowledge of the spatial relations among landmarks without any extra manual annotation effort. We then develop two complementary regularization schemes to progressively incorporate this prior knowledge into the training process. One scheme introduces pixel-level regularization through multi-task learning; the other integrates global-level regularization by harnessing a newly designed grouped consistency evaluator, which adds relation constraints to the proposed network in an adversarial manner. Both schemes benefit the model in training and can be readily removed at inference to preserve real-time detection. We establish a large in-house dataset of ESD surgery for esophageal cancer to validate the effectiveness of the proposed method. Extensive experimental results demonstrate that our approach outperforms state-of-the-art methods in terms of accuracy and efficiency, achieving better detection results faster. Promising results on two downstream applications further corroborate the great potential of our method in ESD clinical practice.
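Relation keypoint heatmaps of the kind described here are usually rendered as 2D Gaussians placed at landmark positions or at points derived from pairs of landmarks. A minimal sketch, with the grid size and sigma chosen arbitrarily:

```python
# Sketch: Gaussian keypoint heatmaps, the usual rendering for landmark and
# relation targets (grid size and sigma are placeholders, not paper values).
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=4.0):
    """Unit-peak 2D Gaussian centred on the point (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# A "relation" heatmap can be placed at a derived point, e.g. the midpoint
# of two landmarks, encoding their spatial relation without extra labels.
p1, p2 = (30, 40), (90, 60)                       # two labeled landmarks
mid = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)  # derived relation point
hm_landmark = gaussian_heatmap(128, 128, *p1)
hm_relation = gaussian_heatmap(128, 128, *mid)
```

Because the relation targets are derived deterministically from existing landmark labels, the extra supervision comes free of additional annotation, which matches the claim in the abstract.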
Collapse
Affiliation(s)
- Jiacheng Wang
- Department of Computer Science at School of Informatics, Xiamen University, Xiamen 361005, China
| | - Yueming Jin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
| | - Shuntian Cai
- Department of Gastroenterology, Zhongshan Hospital affiliated to Xiamen University, Xiamen, China
| | - Hongzhi Xu
- Department of Gastroenterology, Zhongshan Hospital affiliated to Xiamen University, Xiamen, China
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
| | - Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
| | - Liansheng Wang
- Department of Computer Science at School of Informatics, Xiamen University, Xiamen 361005, China.
| |
Collapse
|
83
|
Kröner PT, Engels MML, Glicksberg BS, Johnson KW, Mzaik O, van Hooft JE, Wallace MB, El-Serag HB, Krittanawong C. Artificial intelligence in gastroenterology: A state-of-the-art review. World J Gastroenterol 2021; 27:6794-6824. [PMID: 34790008 PMCID: PMC8567482 DOI: 10.3748/wjg.v27.i40.6794] [Citation(s) in RCA: 73] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 06/15/2021] [Accepted: 09/16/2021] [Indexed: 02/06/2023] Open
Abstract
The development of artificial intelligence (AI) has increased dramatically in the last 20 years, with clinical applications progressively being explored for most of the medical specialties. The field of gastroenterology and hepatology, substantially reliant on vast amounts of imaging studies, is not an exception. The clinical applications of AI systems in this field include the identification of premalignant or malignant lesions (e.g., identification of dysplasia or esophageal adenocarcinoma in Barrett’s esophagus, pancreatic malignancies), detection of lesions (e.g., polyp identification and classification, small-bowel bleeding lesion on capsule endoscopy, pancreatic cystic lesions), development of objective scoring systems for risk stratification, prediction of disease prognosis or treatment response (e.g., determining survival in patients post-resection of hepatocellular carcinoma, or determining which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy), and evaluation of metrics such as bowel preparation score or quality of endoscopic examination. The objective of this comprehensive review is to analyze the available AI-related studies pertaining to the entirety of the gastrointestinal tract, including the upper, middle and lower tracts; IBD; the hepatobiliary system; and the pancreas, discussing the findings and clinical applications, as well as outlining the current limitations and future directions in this field.
Collapse
Affiliation(s)
- Paul T Kröner
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
| | - Megan ML Engels
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Cancer Center Amsterdam, Department of Gastroenterology and Hepatology, Amsterdam UMC, Location AMC, Amsterdam 1105, The Netherlands
| | - Benjamin S Glicksberg
- The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| | - Kipp W Johnson
- The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| | - Obaie Mzaik
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
| | - Jeanin E van Hooft
- Department of Gastroenterology and Hepatology, Leiden University Medical Center, Leiden 2300, The Netherlands
| | - Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Division of Gastroenterology and Hepatology, Sheikh Shakhbout Medical City, Abu Dhabi 11001, United Arab Emirates
| | - Hashem B El-Serag
- Section of Gastroenterology and Hepatology, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
| | - Chayakrit Krittanawong
- Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Section of Cardiology, Michael E. DeBakey VA Medical Center, Houston, TX 77030, United States
| |
Collapse
|
84
|
Viscaino M, Torres Bustos J, Muñoz P, Auat Cheein C, Cheein FA. Artificial intelligence for the early detection of colorectal cancer: A comprehensive review of its advantages and misconceptions. World J Gastroenterol 2021; 27:6399-6414. [PMID: 34720530 PMCID: PMC8517786 DOI: 10.3748/wjg.v27.i38.6399] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 04/26/2021] [Accepted: 09/14/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer (CRC) ranked second worldwide among cancer types in 2020, with a crude mortality rate of 12.0 per 100000 inhabitants. It can be prevented if glandular lesions (adenomatous polyps) are detected early. Colonoscopy has been strongly recommended as a screening test for both early cancer and adenomatous polyps. However, it has limitations, including a high miss rate for smaller (< 10 mm) or flat polyps, which are easily overlooked during visual inspection. Due to the rapid advancement of technology, artificial intelligence (AI) has become a thriving area in many fields, including medicine. In gastroenterology in particular, AI software has been incorporated into computer-aided diagnosis systems to improve the accuracy of automatic polyp detection and classification as a preventive measure against CRC. This article provides an overview of recent research focusing on AI tools and their applications in the early detection of CRC and adenomatous polyps, as well as an analysis of the main advantages and misconceptions in the field.
Collapse
Affiliation(s)
- Michelle Viscaino
- Department of Electronic Engineering, Universidad Tecnica Federico Santa Maria, Valparaiso 2340000, Chile
| | - Javier Torres Bustos
- Department of Electronic Engineering, Universidad Tecnica Federico Santa Maria, Valparaiso 2340000, Chile
| | - Pablo Muñoz
- Hospital Clinico, University of Chile, Santiago 8380456, Chile
| | - Cecilia Auat Cheein
- Facultad de Medicina, Universidad Nacional de Santiago del Estero, Santiago del Estero 4200, Argentina
| | - Fernando Auat Cheein
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaiso 2340000, Chile
| |
Collapse
|
85
|
Automated Bowel Polyp Detection Based on Actively Controlled Capsule Endoscopy: Feasibility Study. Diagnostics (Basel) 2021; 11:diagnostics11101878. [PMID: 34679575 PMCID: PMC8535114 DOI: 10.3390/diagnostics11101878] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 10/06/2021] [Accepted: 10/09/2021] [Indexed: 01/10/2023] Open
Abstract
This paper presents an active locomotion capsule endoscope system with 5D position sensing and real-time automated polyp detection for small-bowel and colon applications. An electromagnetic actuation (EMA) system consisting of stationary electromagnets is utilized to remotely control a magnetic capsule endoscope with multi-degree-of-freedom locomotion. For position sensing, an electronic system using a magnetic sensor array is built to track the position and orientation of the magnetic capsule during movement. The system is integrated with a deep learning model, YOLOv3, which can automatically identify colorectal polyps in real time with an average precision of 85%. The feasibility of the proposed method with respect to active locomotion and localization is validated and demonstrated through in vitro experiments in a phantom duodenum. This study provides a promising solution for automatic diagnostics of the bowel and colon using an actively locomoted capsule endoscope, which could be applied in clinical settings in the future.
Collapse
|
86
|
Hann A, Troya J, Fitting D. Current status and limitations of artificial intelligence in colonoscopy. United European Gastroenterol J 2021; 9:527-533. [PMID: 34617420 PMCID: PMC8259277 DOI: 10.1002/ueg2.12108] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/28/2020] [Accepted: 02/28/2021] [Indexed: 12/11/2022] Open
Abstract
BACKGROUND Artificial intelligence (AI) using deep learning methods for polyp detection (CADe) and characterization (CADx) is on the verge of clinical application. CADe has already demonstrated its potential in randomized controlled trials. Further efforts are needed to take CADx to the next level of development. AIM This work aims to give an overview of the current status of AI in colonoscopy without going into too much technical detail. METHODS A literature search was performed to identify important studies exploring the use of AI in colonoscopy. RESULTS This review focuses on AI performance in screening colonoscopy, summarizing the first prospective trials of CADe and the state of research in CADx, as well as the current limitations of those systems and legal issues.
Collapse
Affiliation(s)
- Alexander Hann
- Department of Internal Medicine II, Interventional and Experimental Endoscopy (InExEn), University Hospital Wuerzburg, Würzburg, Germany
| | - Joel Troya
- Department of Internal Medicine II, Interventional and Experimental Endoscopy (InExEn), University Hospital Wuerzburg, Würzburg, Germany
| | - Daniel Fitting
- Department of Internal Medicine II, Interventional and Experimental Endoscopy (InExEn), University Hospital Wuerzburg, Würzburg, Germany
| |
Collapse
|
87
|
Hsu CM, Hsu CC, Hsu ZM, Shih FY, Chang ML, Chen TH. Colorectal Polyp Image Detection and Classification through Grayscale Images and Deep Learning. Sensors (Basel) 2021; 21:s21185995. [PMID: 34577209 PMCID: PMC8470682 DOI: 10.3390/s21185995] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 08/29/2021] [Accepted: 08/30/2021] [Indexed: 01/10/2023]
Abstract
Colonoscopy screening and colonoscopic polypectomy can decrease the incidence and mortality rate of colorectal cancer (CRC). The adenoma detection rate and the diagnostic accuracy for colorectal polyps vary with endoscopist experience, which affects the protective effect of colonoscopy against CRC. This work proposes a colorectal polyp image detection and classification system based on grayscale images and deep learning. The system was built on the CVC-Clinic data and 1000 colorectal polyp images from Linkou Chang Gung Memorial Hospital. The red-green-blue (RGB) images were transformed into 0-255 grayscale images. Polyp detection and classification were performed by a convolutional neural network (CNN) model. The polyp detection data were divided into five groups and tested by 5-fold cross-validation. The accuracy of polyp detection was 95.1% for grayscale images, which is higher than the 94.1% obtained for RGB and narrow-band images. The diagnostic accuracy, precision, and recall rates were 82.8%, 82.5%, and 95.2% for narrow-band images, respectively. The experimental results show that grayscale images achieve equivalent or even higher polyp detection accuracy than RGB images at a lighter computational cost. It was also found that the accuracy of polyp detection and classification decreases dramatically when polyp images are smaller than 1600 pixels. It is therefore recommended that clinicians adjust the distance between the lens and the polyp appropriately to enhance system performance when conducting computer-assisted colorectal polyp analysis.
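The RGB-to-grayscale transformation at the heart of this system can be done with a standard luminance weighting; the paper does not state its exact conversion, so the weights below are the conventional ITU-R BT.601 ones, assumed for illustration.

```python
# RGB -> 0-255 grayscale conversion with standard luminance weights
# (conventional weighting assumed; the paper's exact conversion is unstated).
import numpy as np

def to_grayscale(rgb):
    """rgb: (H, W, 3) uint8 array -> (H, W) uint8 grayscale image."""
    weights = np.array([0.299, 0.587, 0.114])   # BT.601 luma coefficients
    gray = rgb.astype(np.float64) @ weights
    return np.clip(gray, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
gray = to_grayscale(frame)   # single-channel input for the CNN
```

Collapsing three channels to one cuts the first-layer input by two thirds, which is the "lightweight computation" advantage the abstract refers to.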
Collapse
Affiliation(s)
- Chen-Ming Hsu
- Department of Gastroenterology and Hepatology, Linkou Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No. 5, Fuxing St., Guishan Dist., Taoyuan City 333, Taiwan; (C.-M.H.); (T.-H.C.)
| | - Chien-Chang Hsu
- Department of Computer Science and Information Engineering, Fu-Jen Catholic University, 510 Chung Cheng Rd., Hsinchuang Dist., New Taipei City 242, Taiwan; (Z.-M.H.); (F.-Y.S.)
- Graduate Institute of Applied Science and Engineering, Fu-Jen Catholic University, 510 Chung Cheng Rd., Hsinchuang Dist., New Taipei City 242, Taiwan;
- Correspondence:
| | - Zhe-Ming Hsu
- Department of Computer Science and Information Engineering, Fu-Jen Catholic University, 510 Chung Cheng Rd., Hsinchuang Dist., New Taipei City 242, Taiwan; (Z.-M.H.); (F.-Y.S.)
| | - Feng-Yu Shih
- Department of Computer Science and Information Engineering, Fu-Jen Catholic University, 510 Chung Cheng Rd., Hsinchuang Dist., New Taipei City 242, Taiwan; (Z.-M.H.); (F.-Y.S.)
| | - Meng-Lin Chang
- Graduate Institute of Applied Science and Engineering, Fu-Jen Catholic University, 510 Chung Cheng Rd., Hsinchuang Dist., New Taipei City 242, Taiwan;
| | - Tsung-Hsing Chen
- Department of Gastroenterology and Hepatology, Linkou Chang Gung Memorial Hospital and Chang Gung University College of Medicine, No. 5, Fuxing St., Guishan Dist., Taoyuan City 333, Taiwan; (C.-M.H.); (T.-H.C.)
| |
Collapse
|
88
|
Ahmad OF, Mori Y, Misawa M, Kudo SE, Anderson JT, Bernal J, Berzin TM, Bisschops R, Byrne MF, Chen PJ, East JE, Eelbode T, Elson DS, Gurudu SR, Histace A, Karnes WE, Repici A, Singh R, Valdastri P, Wallace MB, Wang P, Stoyanov D, Lovat LB. Establishing key research questions for the implementation of artificial intelligence in colonoscopy: a modified Delphi method. Endoscopy 2021; 53:893-901. [PMID: 33167043 PMCID: PMC8390295 DOI: 10.1055/a-1306-7590] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 11/09/2020] [Indexed: 02/08/2023]
Abstract
BACKGROUND: Artificial intelligence (AI) research in colonoscopy is progressing rapidly, but widespread clinical implementation is not yet a reality. We aimed to identify the top implementation research priorities. METHODS: An established modified Delphi approach for research priority setting was used. Fifteen international experts, including endoscopists and translational computer scientists/engineers, from nine countries participated in an online survey over 9 months. Questions related to AI implementation in colonoscopy were generated as a long-list in the first round, and then scored in two subsequent rounds to identify the top 10 research questions. RESULTS: The top 10 ranked questions were categorized into five themes. Theme 1: clinical trial design/end points (4 questions), related to optimum trial designs for polyp detection and characterization, determining the optimal end points for evaluation of AI, and demonstrating impact on interval cancer rates. Theme 2: technological developments (3 questions), including improving detection of more challenging and advanced lesions, reduction of false-positive rates, and minimizing latency. Theme 3: clinical adoption/integration (1 question), concerning the effective combination of detection and characterization into one workflow. Theme 4: data access/annotation (1 question), concerning more efficient or automated data annotation methods to reduce the burden on human experts. Theme 5: regulatory approval (1 question), related to making regulatory approval processes more efficient. CONCLUSIONS: This is the first reported international research priority setting exercise for AI in colonoscopy. The study findings should be used as a framework to guide future research with key stakeholders to accelerate the clinical implementation of AI in endoscopy.
Collapse
Affiliation(s)
- Omer F. Ahmad
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
| | - Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Clinical Effectiveness Research Group, Institute of Health and Society, University of Oslo, Oslo, Norway
| | - Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
| | - Shin-ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
| | - John T. Anderson
- Department of Gastroenterology, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, UK
| | - Jorge Bernal
- Computer Science Department, Universitat Autonoma de Barcelona and Computer Vision Center, Barcelona, Spain
| | - Tyler M. Berzin
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
| | - Raf Bisschops
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, TARGID KU Leuven, Leuven, Belgium
| | - Michael F. Byrne
- Division of Gastroenterology, Vancouver General Hospital, University of British Columbia, Vancouver, British Columbia, Canada
| | - Peng-Jen Chen
- Division of Gastroenterology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
| | - James E. East
- Translational Gastroenterology Unit, John Radcliffe Hospital, Oxford, UK
- Oxford NIHR Biomedical Research Centre, University of Oxford, Oxford, UK
| | - Tom Eelbode
- Medical Imaging Research Center, ESAT/PSI, KU Leuven, Leuven, Belgium
| | - Daniel S. Elson
- Hamlyn Centre for Robotic Surgery, Institute of Global Health Innovation, Imperial College London, London, UK
- Department of Surgery and Cancer, Imperial College London, London, UK
| | - Suryakanth R. Gurudu
- Division of Gastroenterology and Hepatology, Mayo Clinic, Scottsdale, Arizona, USA
| | - Aymeric Histace
- ETIS, Université de Cergy-Pontoise, ENSEA, CNRS, Cergy-Pontoise Cedex, France
| | - William E. Karnes
- H. H. Chao Comprehensive Digestive Disease Center, Division of Gastroenterology & Hepatology, Department of Medicine, University of California, Irvine, California, USA
| | - Alessandro Repici
- Department of Gastroenterology, Humanitas Clinical and Research Center, IRCCS, Rozzano, Milan, Italy
- Humanitas University, Department of Biomedical Sciences, Pieve Emanuele, Milan, Italy
| | - Rajvinder Singh
- Department of Gastroenterology and Hepatology, Lyell McEwan Hospital, Adelaide, South Australia, Australia
| | - Pietro Valdastri
- School of Electronics and Electrical Engineering, University of Leeds, Leeds, UK
| | - Michael B. Wallace
- Division of Gastroenterology & Hepatology, Mayo Clinic, Jacksonville, Florida, USA
| | - Pu Wang
- Department of Gastroenterology, Sichuan Academy of Medical Sciences and Sichuan Provincial People’s Hospital, Chengdu, China
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
| | - Laurence B. Lovat
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
| |
Collapse
|
89
|
Chen BL, Wan JJ, Chen TY, Yu YT, Ji M. A self-attention based faster R-CNN for polyp detection from colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103019] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
|
90
|
Zhao SB, Yang W, Wang SL, Pan P, Wang RD, Chang X, Sun ZQ, Fu XH, Shang H, Wu JR, Chen LZ, Chang J, Song P, Miao YL, He SX, Miao L, Jiang HQ, Wang W, Yang X, Dong YH, Lin H, Chen Y, Gao J, Meng QQ, Jin ZD, Li ZS, Bai Y. Establishment and validation of a computer-assisted colonic polyp localization system based on deep learning. World J Gastroenterol 2021; 27:5232-5246. [PMID: 34497447 PMCID: PMC8384745 DOI: 10.3748/wjg.v27.i31.5232] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/19/2021] [Revised: 04/10/2021] [Accepted: 07/20/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Artificial intelligence in colonoscopy is an emerging field, and its application may help colonoscopists improve inspection quality and reduce the rate of missed polyps and adenomas. Several deep learning-based computer-assisted detection (CADe) techniques were established from small single-center datasets, and unrepresentative learning materials might confine their application and generalization in wide practice. Although CADes have been reported to identify polyps in colonoscopic images and videos in real time, their diagnostic performance deserves further validation in clinical practice. AIM To train and test a CADe based on multicenter high-quality images of polyps and preliminarily validate it in clinical colonoscopies. METHODS With high-quality screening and labeling by 55 qualified colonoscopists, a dataset consisting of over 71000 images from 20 centers was used to train and test a deep learning-based CADe. In addition, the real-time diagnostic performance of the CADe was tested frame by frame in 47 unaltered full-length videos that contained 86 histologically confirmed polyps. Finally, we conducted a self-controlled observational study at Changhai Hospital to validate the diagnostic performance of the CADe in real-world colonoscopy, with polyps per colonoscopy as the main outcome measure. RESULTS The CADe was able to identify polyps in the test dataset with 95.0% sensitivity and 99.1% specificity. For the colonoscopy videos, all 86 polyps were detected, with 92.2% sensitivity and 93.6% specificity in the frame-by-frame analysis. In the prospective validation, the sensitivity of the CADe in identifying polyps was 98.4% (185/188). Folds, reflections of light, and fecal fluid were the main causes of false positives in both the test dataset and the clinical colonoscopies. Colonoscopists detected more polyps (0.90 vs 0.82, P < 0.001) and adenomas (0.32 vs 0.30, P = 0.045) with the aid of the CADe, particularly polyps < 5 mm and flat polyps (0.65 vs 0.57, P < 0.001; 0.74 vs 0.67, P = 0.001, respectively). However, this efficacy was not realized in colonoscopies with inadequate bowel preparation or withdrawal time (P = 0.32; P = 0.16, respectively). CONCLUSION The CADe is feasible in the clinical setting and might help endoscopists detect more polyps and adenomas; further confirmation is warranted.
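The frame-by-frame figures quoted above follow from standard confusion-matrix definitions. A short sketch, with hypothetical counts chosen only to reproduce the reported 92.2%/93.6% values:

```python
# Sensitivity and specificity from frame-level confusion counts, as used in
# the frame-by-frame video analysis above (counts below are illustrative).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Hypothetical frame counts (not from the paper), for demonstration.
tp, fn, tn, fp = 922, 78, 936, 64
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 92.2%
print(f"specificity = {specificity(tn, fp):.1%}")  # 93.6%
```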
Collapse
Affiliation(s)
- Sheng-Bing Zhao
- Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Wei Yang
- Tencent AI Lab, National Open Innovation Platform for Next Generation Artificial Intelligence on Medical Imaging, Shenzhen 518063, Guangdong Province, China
| | - Shu-Ling Wang
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Peng Pan
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Run-Dong Wang
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Xin Chang
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Zhong-Qian Sun
- Tencent AI Lab, National Open Innovation Platform for Next Generation Artificial Intelligence on Medical Imaging, Shenzhen 518063, Guangdong Province, China
| | - Xing-Hui Fu
- Tencent AI Lab, National Open Innovation Platform for Next Generation Artificial Intelligence on Medical Imaging, Shenzhen 518063, Guangdong Province, China
| | - Hong Shang
- Tencent AI Lab, National Open Innovation Platform for Next Generation Artificial Intelligence on Medical Imaging, Shenzhen 518063, Guangdong Province, China
| | - Jian-Rong Wu
- Tencent Healthcare (Shenzhen) Co. LTD., Shenzhen 518063, Guangdong Province, China
| | - Li-Zhu Chen
- Tencent Healthcare (Shenzhen) Co. LTD., Shenzhen 518063, Guangdong Province, China
| | - Jia Chang
- Tencent Healthcare (Shenzhen) Co. LTD., Shenzhen 518063, Guangdong Province, China
| | - Pu Song
- Tencent Healthcare (Shenzhen) Co. LTD., Shenzhen 518063, Guangdong Province, China
| | - Ying-Lei Miao
- Department of Gastroenterology, The First Affiliated Hospital of Kunming Medical University, Kunming 650000, Yunnan Province, China
| | - Shui-Xiang He
- Department of Gastroenterology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, Shaanxi Province, China
| | - Lin Miao
- Institute of Digestive Endoscopy and Medical Center for Digestive Disease, The Second Affiliated Hospital of Nanjing Medical University, Nanjing 210011, Jiangsu Province, China
| | - Hui-Qing Jiang
- Department of Gastroenterology, The Second Hospital of Hebei Medical University, Hebei Key Laboratory of Gastroenterology, Hebei Institute of Gastroenterology, Shijiazhuang 050000, Hebei Province, China
| | - Wen Wang
- Department of Gastroenterology, 900th Hospital of Joint Logistics Support Force, Fuzhou 350025, Fujian Province, China
| | - Xia Yang
- Department of Gastroenterology, No. 905 Hospital of The Chinese People's Liberation Army, Shanghai 200050, China
| | - Yuan-Hang Dong
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Han Lin
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Yan Chen
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Jie Gao
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Qian-Qian Meng
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Zhen-Dong Jin
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Zhao-Shen Li
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Yu Bai
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| |
Collapse
|
91
|
Li K, Fathan MI, Patel K, Zhang T, Zhong C, Bansal A, Rastogi A, Wang JS, Wang G. Colonoscopy polyp detection and classification: Dataset creation and comparative evaluations. PLoS One 2021; 16:e0255809. [PMID: 34403452 PMCID: PMC8370621 DOI: 10.1371/journal.pone.0255809] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 07/25/2021] [Indexed: 12/12/2022] Open
Abstract
Colorectal cancer (CRC) is one of the most common types of cancer with a high mortality rate. Colonoscopy is the preferred procedure for CRC screening and has proven to be effective in reducing CRC mortality. Thus, a reliable computer-aided polyp detection and classification system can significantly increase the effectiveness of colonoscopy. In this paper, we create an endoscopic dataset collected from various sources and annotate the ground truth of polyp location and classification results with the help of experienced gastroenterologists. The dataset can serve as a benchmark platform to train and evaluate the machine learning models for polyp classification. We have also compared the performance of eight state-of-the-art deep learning-based object detection models. The results demonstrate that deep CNN models are promising in CRC screening. This work can serve as a baseline for future research in polyp detection and classification.
Affiliation(s)
- Kaidong Li, Mohammad I. Fathan, Krushi Patel, Tianxiao Zhang, Cuncong Zhong: Department of Electrical Engineering and Computer Science, The University of Kansas, Lawrence, KS, United States of America
- Ajay Bansal, Amit Rastogi: Gastroenterology, Hepatology and Motility, The University of Kansas Medical Center, Kansas City, KS, United States of America
- Jean S. Wang: Department of Medicine, Washington University School of Medicine, Saint Louis, MO, United States of America
- Guanghui Wang: Department of Computer Science, Ryerson University, Toronto, ON, Canada
92
Li H, Vong CM, Wong PK, Ip WF, Yan T, Choi IC, Yu HH. A multi-feature fusion method for image recognition of gastrointestinal metaplasia (GIM). Biomed Signal Process Control 2021; 69:102909. [DOI: 10.1016/j.bspc.2021.102909] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
93
Sziová B, Nagy S, Fazekas Z. Application of Structural Entropy and Spatial Filling Factor in Colonoscopy Image Classification. Entropy (Basel) 2021; 23:e23080936. [PMID: 34441076 PMCID: PMC8392869 DOI: 10.3390/e23080936] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 07/15/2021] [Accepted: 07/18/2021] [Indexed: 12/15/2022]
Abstract
The standard method for finding colorectal polyps relies on the techniques and devices of colonoscopy and on the medical expertise of the gastroenterologist. For images acquired through colonoscopes, automatic segmentation of the polyps from their environment (i.e., from the bowel wall) is an essential task in the development of computer-aided diagnosis systems. As the number of publicly available polyp images in the various databases is still rather limited, it is important to develop metaheuristic methods, such as fuzzy inference methods, alongside deep learning algorithms, to improve and validate detection and classification techniques. In the present manuscript, a fuzzy rule set is first generated and validated; the generation process is based on a statistical approach and makes use of histograms of the antecedents. Secondly, a method for selecting relevant antecedent variables is presented, based on comparing the histograms computed from the values measured for the training set. Then, the inclusion of the Rényi-entropy-based structural entropy and the spatial filling factor in the set of input variables is proposed and assessed. Including the structural entropies of the hue and saturation (H and S) colour channels proved beneficial, yielding a 65% true positive rate and a 60% true negative rate for an advantageously selected set of antecedents when working with HSV images.
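For readers unfamiliar with the two antecedents named above, the following minimal sketch computes a structural entropy and spatial filling factor in the Pipek–Varga sense: the difference of the order-1 (Shannon) and order-2 Rényi entropies, and the participation ratio divided by the pixel count. Treating the normalized intensities of a single colour channel as the underlying distribution is an assumption, not necessarily the paper's exact preprocessing.

```python
import numpy as np

def structural_entropy_and_filling(channel):
    """Structural entropy and spatial filling factor of an image channel,
    treating its normalized intensities as a 2-D probability distribution."""
    q = channel.astype(np.float64).ravel()
    q = q / q.sum()                   # normalize (assumes a non-black channel)
    nz = q[q > 0]
    s1 = -np.sum(nz * np.log(nz))     # Shannon (Renyi order-1) entropy
    s2 = -np.log(np.sum(nz ** 2))     # Renyi order-2 (collision) entropy
    s_str = s1 - s2                   # structural entropy
    filling = np.exp(s2) / q.size     # participation number / pixel count
    return s_str, filling
```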
Affiliation(s)
- Brigita Sziová: Department of Computer Science, Széchenyi István University, Egyetem tér 1, H-9026 Győr, Hungary (correspondence; Tel.: +36-96-613-617)
- Szilvia Nagy: Department of Telecommunications, Széchenyi István University, Egyetem tér 1, H-9026 Győr, Hungary
- Zoltán Fazekas: Institute for Computer Science and Control (SZTAKI), Eötvös Loránd Research Network (ELKH), 13-17 Kende utca, H-1111 Budapest, Hungary
94
Rahim T, Hassan SA, Shin SY. A deep convolutional neural network for the detection of polyps in colonoscopy images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102654] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
95
Liew WS, Tang TB, Lin CH, Lu CK. Automatic colonic polyp detection using integration of modified deep residual convolutional neural network and ensemble learning approaches. Comput Methods Programs Biomed 2021; 206:106114. [PMID: 33984661 DOI: 10.1016/j.cmpb.2021.106114] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 04/07/2021] [Indexed: 05/10/2023]
Abstract
BACKGROUND AND OBJECTIVE The increased incidence of colorectal cancer (CRC) and its mortality rate have attracted interest in the use of artificial intelligence (AI)-based computer-aided diagnosis (CAD) tools to detect polyps at an early stage. Although these CAD tools have achieved good accuracy, there is still room for improvement (e.g., in sensitivity). Therefore, this study develops a new CAD tool to detect colonic polyps accurately. METHODS We propose a novel approach to distinguishing colonic polyps by integrating several techniques: a modified deep residual network, principal component analysis, and AdaBoost ensemble learning. The architecture of a powerful deep residual network, ResNet-50, was altered to reduce the computational time. To keep interference to a minimum, median filtering, image thresholding, contrast enhancement, and normalisation were applied to the endoscopic images used to train the classification model. Three publicly available datasets, i.e., Kvasir, ETIS-LaribPolypDB, and CVC-ClinicDB, were merged to train the model, which included images with and without polyps. RESULTS The proposed approach, trained on the combination of the three datasets, achieved a Matthews Correlation Coefficient (MCC) of 0.9819, with accuracy, sensitivity, precision, and specificity of 99.10%, 98.82%, 99.37%, and 99.38%, respectively. CONCLUSIONS These results show that our method can reliably and automatically classify endoscopic images and could be used to develop effective computer-aided diagnostic tools for early CRC detection.
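The pipeline shape described here (deep residual features, PCA reduction, AdaBoost classification) can be sketched as below. This is a hedged approximation: it uses the stock torchvision ResNet-50 rather than the authors' modified variant, and the component count and estimator number are placeholders, not values from the paper.

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

# Stand-in backbone: a standard pretrained ResNet-50 with the classifier
# head removed, exposing the 2048-d pooled feature vector per image.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    """batch: (N, 3, 224, 224) float tensor -> (N, 2048) numpy array."""
    return backbone(batch).numpy()

# Assuming X_train (stacked feature vectors) and y_train (polyp / no-polyp
# labels) have been built with extract_features over the merged datasets:
# pca = PCA(n_components=256).fit(X_train)
# clf = AdaBoostClassifier(n_estimators=100).fit(pca.transform(X_train), y_train)
```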
Affiliation(s)
- Win Sheng Liew, Tong Boon Tang, Cheng-Kai Lu: Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610 Seri Iskandar, Perak, Malaysia
- Cheng-Hung Lin: Department of Electrical Engineering and Biomedical Engineering Research Center, Yuan Ze University, Jungli 32003, Taiwan
96
Lazăr DC, Avram MF, Faur AC, Romoşan I, Goldiş A. The role of computer-assisted systems for upper-endoscopy quality monitoring and assessment of gastric lesions. Gastroenterol Rep (Oxf) 2021; 9:185-204. [PMID: 34316369 PMCID: PMC8309682 DOI: 10.1093/gastro/goab008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/26/2020] [Revised: 12/05/2020] [Accepted: 12/20/2020] [Indexed: 12/24/2022] Open
Abstract
This article analyses the literature on the value of computer-assisted systems in esophagogastroduodenoscopy quality monitoring and the assessment of gastric lesions. Current data show promising results in upper-endoscopy quality control and a satisfactory detection accuracy for gastric premalignant and malignant lesions, similar to or even exceeding that of experienced endoscopists. Moreover, artificial systems support the choice of the best treatment strategy in gastric-cancer patient care, namely endoscopic versus surgical resection according to tumor depth. In so doing, unnecessary surgical interventions would be avoided while providing a better quality of life and prognosis for these patients. These performance data have been revealed by numerous studies using different artificial intelligence (AI) algorithms in addition to white-light endoscopy or the novel endoscopic techniques available in expert endoscopy centers. It is expected that ongoing clinical trials involving AI and the embedding of computer-aided diagnosis systems into endoscopic devices will enable real-life implementation of AI endoscopic systems in the near future and, at the same time, will help overcome the current limits of computer-assisted systems, improving their performance. These benefits should lead to better diagnostic and treatment strategies for gastric-cancer patients. Furthermore, the incorporation of AI algorithms in endoscopic tools, along with the development of large electronic databases of endoscopic images, might assist upper endoscopy and could be used for telemedicine and second opinions in difficult cases.
Affiliation(s)
- Daniela Cornelia Lazăr, Ioan Romoşan: Department V of Internal Medicine I, Discipline of Internal Medicine IV, “Victor Babeș” University of Medicine and Pharmacy, Timișoara, Romania
- Mihaela Flavia Avram: Department of Surgery X, 1st Surgery Discipline, “Victor Babeș” University of Medicine and Pharmacy, Timișoara, Romania
- Alexandra Corina Faur: Department I, Discipline of Anatomy and Embryology, “Victor Babeș” University of Medicine and Pharmacy, Timișoara, Romania
- Adrian Goldiş: Department VII of Internal Medicine II, Discipline of Gastroenterology and Hepatology, “Victor Babeș” University of Medicine and Pharmacy, Timișoara, Romania
97
Jha D, Smedsrud PH, Johansen D, de Lange T, Johansen HD, Halvorsen P, Riegler MA. A Comprehensive Study on Colorectal Polyp Segmentation With ResUNet++, Conditional Random Field and Test-Time Augmentation. IEEE J Biomed Health Inform 2021; 25:2029-2040. [PMID: 33400658 DOI: 10.1109/jbhi.2021.3049304] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Colonoscopy is considered the gold standard for detection of colorectal cancer and its precursors. Existing examination methods are, however, hampered by a high overall miss-rate, and many abnormalities are left undetected. Computer-aided diagnosis systems based on advanced machine learning algorithms are touted as a game-changer that can identify regions of the colon overlooked by physicians during endoscopic examinations and help detect and characterize lesions. In previous work, we proposed the ResUNet++ architecture and demonstrated that it produces better results than its counterparts U-Net and ResUNet. In this paper, we demonstrate that the overall prediction performance of the ResUNet++ architecture can be further improved by using Conditional Random Fields (CRF) and Test-Time Augmentation (TTA). We performed extensive evaluations and validated the improvements using six publicly available datasets: Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, ETIS-Larib Polyp DB, the ASU-Mayo Clinic Colonoscopy Video Database, and CVC-VideoClinicDB. Moreover, we compared our proposed architecture and resulting model with other state-of-the-art methods. To explore the generalization capability of ResUNet++ across publicly available polyp datasets, so that it could be used in a real-world setting, we performed an extensive cross-dataset evaluation. The experimental results show that applying CRF and TTA improves performance on various polyp segmentation datasets, both within and across datasets. To check the model's performance on difficult-to-detect polyps, we selected, with the help of an expert gastroenterologist, 196 sessile or flat polyps smaller than ten millimeters. This additional data has been made available as a subset of Kvasir-SEG. Our approaches showed good results for flat or sessile and smaller polyps, which are known to be one of the major reasons for high polyp miss-rates. This is one of the significant strengths of our work and indicates that our methods should be investigated further for use in clinical practice.
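Of the two add-ons, test-time augmentation is the easier to picture; a minimal sketch follows, averaging predictions over flipped copies of the input. The choice of flips is an assumption and `model` stands for any segmentation network returning per-pixel probabilities; the CRF post-processing step is omitted here.

```python
import torch

@torch.no_grad()
def tta_segment(model, image):
    """image: (N, 3, H, W) tensor; returns the flip-averaged probability map."""
    flips = [
        (lambda x: x, lambda y: y),                                    # identity
        (lambda x: torch.flip(x, [3]), lambda y: torch.flip(y, [3])),  # horizontal
        (lambda x: torch.flip(x, [2]), lambda y: torch.flip(y, [2])),  # vertical
    ]
    # Segment each augmented copy, undo the flip on the prediction, average.
    preds = [undo(model(apply(image))) for apply, undo in flips]
    return torch.stack(preds).mean(dim=0)
```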
98
Du W, Rao N, Dong C, Wang Y, Hu D, Zhu L, Zeng B, Gan T. Automatic classification of esophageal disease in gastroscopic images using an efficient channel attention deep dense convolutional neural network. Biomed Opt Express 2021; 12:3066-3081. [PMID: 34221645 PMCID: PMC8221966 DOI: 10.1364/boe.420935] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 04/07/2021] [Accepted: 04/25/2021] [Indexed: 02/05/2023]
Abstract
The accurate diagnosis of various esophageal diseases at different stages is crucial for precise therapy planning and for improving the 5-year survival rate of esophageal cancer patients. Automatic classification of esophageal diseases in gastroscopic images can help doctors improve diagnostic efficiency and accuracy. Existing deep learning-based classification methods can only classify very few categories of esophageal disease at a time. Hence, we propose a novel efficient channel attention deep dense convolutional neural network (ECA-DDCNN) that classifies esophageal gastroscopic images into four main categories: normal esophagus (NE), precancerous esophageal diseases (PEDs), early esophageal cancer (EEC), and advanced esophageal cancer (AEC), covering six common sub-categories of esophageal disease plus the normal esophagus (seven sub-categories in total). In total, 20,965 gastroscopic images were collected from 4,077 patients and used to train and test the proposed method. Extensive experimental results demonstrate convincingly that the proposed ECA-DDCNN outperforms other state-of-the-art methods, with a classification accuracy (Acc) of 90.63% and an averaged area under the curve (AUC) of 0.9877. In particular, for esophageal diseases with similar mucosal features, our method also achieves higher true positive (TP) rates. In conclusion, the proposed classification method has confirmed its potential for diagnosing a wide variety of esophageal diseases.
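The efficient channel attention idea in the network's name can be written compactly; the sketch below follows the generic ECA-Net formulation (global average pooling followed by a 1-D convolution across channels), not the authors' full ECA-DDCNN, and the kernel size is a placeholder.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: channel weights from a 1-D convolution
    over the globally pooled channel descriptor."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                               # x: (N, C, H, W)
        y = self.pool(x)                                # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))  # conv across channels
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y.expand_as(x)                       # reweight channels

# Example: attn = ECA()(torch.randn(2, 64, 32, 32))  -> same shape, reweighted
```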
Affiliation(s)
- Wenju Du, Nini Rao, Changlong Dong, Yingchun Wang, Dingcan Hu: Center for Informational Biology and School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Linlin Zhu, Tao Gan: Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu 610017, China
- Bing Zeng: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
99
Pacal I, Karaboga D. A robust real-time deep learning based automatic polyp detection system. Comput Biol Med 2021; 134:104519. [PMID: 34090014 DOI: 10.1016/j.compbiomed.2021.104519] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Revised: 05/19/2021] [Accepted: 05/20/2021] [Indexed: 01/16/2023]
Abstract
Colorectal cancer (CRC) is globally the third most common type of cancer. Colonoscopy is considered the gold standard in colorectal cancer screening and allows for the removal of polyps before they become cancerous. Computer-aided detection systems (CADs) have been developed to detect polyps but unfortunately have limited sensitivity and specificity. In contrast, deep learning architectures provide better detection by extracting the different properties of polyps. However, the desired success has not yet been achieved in real-time polyp detection. Here, we propose a new structure for real-time polyp detection by scaling the YOLOv4 algorithm to overcome these obstacles. For this, we first replace the whole structure with Cross Stage Partial Networks (CSPNet), then substitute the Mish activation function for the Leaky ReLU activation function and the Distance Intersection over Union (DIoU) loss for the Complete Intersection over Union (CIoU) loss. We also improved the performance of the YOLOv3 and YOLOv4 architectures using structures such as ResNet, VGG, DarkNet53, and Transformers. To increase the success of the proposed method, we utilized a variety of data augmentation approaches for preprocessing, an ensemble learning model, and NVIDIA TensorRT for post-processing. To compare our study with others more objectively, we employed only public data sets and followed the MICCAI Sub-Challenge on Automatic Polyp Detection in Colonoscopy. The proposed method differs from other methods in its real-time performance and state-of-the-art detection accuracy. The proposed method (without ensemble learning) achieved better results than those reported in the literature: precision 91.62%, recall 82.55%, and F1-score 86.85% on the public ETIS-Larib data set, and precision 96.04%, recall 96.68%, and F1-score 96.36% on the public CVC-ColonDB data set.
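Two of the substitutions named above are easy to state directly in code. The sketch below gives the Mish activation and a Distance-IoU loss for corner-format boxes; the (x1, y1, x2, y2) layout and the epsilon terms are assumptions, and this is not the authors' implementation.

```python
import torch

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    return x * torch.tanh(torch.nn.functional.softplus(x))

def diou_loss(pred, target):
    """DIoU loss for (N, 4) boxes as (x1, y1, x2, y2):
    1 - IoU + (center distance)^2 / (enclosing-box diagonal)^2."""
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + 1e-9)
    # squared distance between box centers
    cp = (pred[:, :2] + pred[:, 2:]) / 2
    ct = (target[:, :2] + target[:, 2:]) / 2
    center_dist = ((cp - ct) ** 2).sum(-1)
    # squared diagonal of the smallest box enclosing both
    ex1 = torch.min(pred[:, 0], target[:, 0])
    ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2])
    ey2 = torch.max(pred[:, 3], target[:, 3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return 1 - iou + center_dist / (diag + 1e-9)
```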
Affiliation(s)
- Ishak Pacal: Computer Engineering Department, Engineering Faculty, Igdir University, Igdir, Turkey
- Dervis Karaboga: Computer Engineering Department, Engineering Faculty, Erciyes University, Kayseri, Turkey; Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
100
Kim GH, Sung ES, Nam KW. Automated laryngeal mass detection algorithm for home-based self-screening test based on convolutional neural network. Biomed Eng Online 2021; 20:51. [PMID: 34034766 PMCID: PMC8144695 DOI: 10.1186/s12938-021-00886-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 05/11/2021] [Indexed: 01/10/2023] Open
Abstract
BACKGROUND Early detection of laryngeal masses without periodic hospital visits is essential for improving the chances of full recovery and the long-term survival rate after prompt treatment, as well as for reducing the risk of clinical infection. RESULTS We first propose a convolutional neural network model for automated laryngeal mass detection based on diagnostic images captured at hospitals. We then propose a pilot system, composed of an embedded controller, a camera module, and an LCD display, that can be utilized for a home-based self-screening test. In evaluating the model's performance, the experiments yielded a final validation loss of 0.9152 and an F1-score of 0.8371 before post-processing. Additionally, on 100 randomly selected color-printed test images, the F1-score of the original computer algorithm was 0.8534 after post-processing, while that of the embedded pilot system was 0.7672. CONCLUSIONS The proposed technique is expected to increase the rate of early detection of laryngeal masses without the risk of spreading clinical infection, which could improve convenience and ensure the safety of individuals, patients, and medical staff.
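A home-based self-screening loop of the kind described could look roughly like the following sketch: grab frames from the camera module, run a frame-level classifier, and overlay the result on the display. The model file name, input size, colour handling, and 0.5 decision threshold are all hypothetical placeholders, not details from the paper.

```python
import cv2
import numpy as np
import torch

# Hypothetical exported detector; "laryngeal_detector.pt" is a placeholder.
model = torch.jit.load("laryngeal_detector.pt")
model.eval()

cap = cv2.VideoCapture(0)                       # embedded camera module
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Assumed preprocessing: resize, scale to [0, 1], HWC -> NCHW.
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    x = torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        prob = torch.sigmoid(model(x)).item()   # frame-level mass probability
    label = f"suspicious ({prob:.2f})" if prob > 0.5 else f"normal ({prob:.2f})"
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1, (0, 0, 255), 2)
    cv2.imshow("self-screening", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```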
Affiliation(s)
- Gun Ho Kim: Interdisciplinary Program in Biomedical Engineering, School of Medicine, Pusan National University, Busan, South Korea
- Eui-Suk Sung: Department of Otolaryngology-Head and Neck Surgery and Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan, South Korea
- Kyoung Won Nam: Research Institute for Convergence of Biomedical Science and Technology and Department of Biomedical Engineering, Pusan National University Yangsan Hospital, Yangsan, South Korea; Department of Biomedical Engineering, School of Medicine, Pusan National University, 49 Busandaehak-ro, Mulgeum-eup, Yangsan, Gyeongsangnam-do, 50629, South Korea