151
Hsieh YH, Leung FW. An overview of deep learning algorithms and water exchange in colonoscopy in improving adenoma detection. Expert Rev Gastroenterol Hepatol 2019; 13:1153-1160. [PMID: 31755802] [DOI: 10.1080/17474124.2019.1694903]
Abstract
Introduction: Among the Gastrointestinal (GI) Endoscopy Editorial Board's top 10 advances in endoscopy in 2018, water exchange colonoscopy and artificial intelligence were both considered important advances. Artificial intelligence holds the potential to increase adenoma detection, and water exchange significantly increases it. Areas covered: The authors searched MEDLINE (1998-2019) using the following medical subject terms: water-aided, water-assisted, and water exchange colonoscopy; adenoma; artificial intelligence; deep learning; computer-assisted detection; and neural networks. Additional related studies were manually searched from the reference lists of publications. Only fully published journal articles in English were reviewed. The latest date of the search was August 10, 2019. Artificial intelligence, machine learning, and deep learning contribute to the promise of real-time computer-aided detection and diagnosis. By emphasizing near-complete suction of infused water during insertion, water exchange provides salvage cleaning and decreases cleaning-related multi-tasking distractions during withdrawal, increasing adenoma detection. The review addresses how artificial intelligence and water exchange can complement each other in improving adenoma detection during colonoscopy. Expert opinion: Within 5 years, research on artificial intelligence will likely achieve real-time application and evaluation of factors contributing to quality colonoscopy. Better understanding and more widespread use of water exchange will be possible.
Affiliation(s)
- Yu-Hsi Hsieh
- Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi, Taiwan
- School of Medicine, Tzu Chi University, Hualien City, Taiwan
- Felix W Leung
- Sepulveda Ambulatory Care Center, Veterans Affairs Greater Los Angeles Healthcare System, North Hills, CA, USA
- David Geffen School of Medicine at University of California, Los Angeles, Los Angeles, CA, USA
152
Itoh H, Roth H, Oda M, Misawa M, Mori Y, Kudo SE, Mori K. Stable polyp-scene classification via subsampling and residual learning from an imbalanced large dataset. Healthc Technol Lett 2019; 6:237-242. [PMID: 32038864] [PMCID: PMC6952261] [DOI: 10.1049/htl.2019.0079]
Abstract
This Letter presents a stable polyp-scene classification method with low false positive (FP) detection. Precise automated polyp detection during colonoscopies is essential for preventing colon-cancer deaths, so there is demand for a computer-assisted diagnosis (CAD) system for colonoscopies to assist colonoscopists. A high-performance CAD system with spatiotemporal feature extraction via a three-dimensional convolutional neural network (3D CNN), trained on a limited dataset, achieved about 80% detection accuracy on actual colonoscopic videos, suggesting that further improvement of a 3D CNN with larger training data is feasible. However, the ratio between polyp and non-polyp scenes is quite imbalanced in a large colonoscopic video dataset, and this imbalance leads to unstable polyp detection. To circumvent this, the authors propose an efficient and balanced learning technique for deep residual learning: at the beginning of each epoch, their method randomly selects a subset of non-polyp scenes whose size equals the number of polyp-scene still images. Furthermore, they introduce post-processing for stable polyp-scene classification, which reduces the FPs that occur in practical applications of polyp-scene classification. They evaluate several residual networks with a large polyp-detection dataset consisting of 1027 colonoscopic videos. In the scene-level evaluation, the proposed method achieves stable polyp-scene classification with 0.86 sensitivity and 0.97 specificity.
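The epoch-level rebalancing described above is simple to reproduce. A minimal sketch follows, assuming a PyTorch-style dataset and pre-computed index lists; the names and loader settings are illustrative, not the authors' code:

```python
import random
from torch.utils.data import Subset, DataLoader

def balanced_epoch_loader(dataset, polyp_indices, nonpolyp_indices, batch_size=32):
    """Build a loader for one epoch: all polyp scenes plus an equally sized
    random subset of non-polyp scenes, as described in the Letter."""
    sampled = random.sample(nonpolyp_indices, k=len(polyp_indices))
    epoch_indices = polyp_indices + sampled
    random.shuffle(epoch_indices)
    return DataLoader(Subset(dataset, epoch_indices), batch_size=batch_size)

# for epoch in range(num_epochs):
#     loader = balanced_epoch_loader(ds, polyp_idx, nonpolyp_idx)  # redrawn each epoch
#     train_one_epoch(model, loader)
```

Because the non-polyp subset is redrawn every epoch, the network eventually sees most of the majority class while each individual epoch stays balanced.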
Affiliation(s)
- Hayato Itoh
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Holger Roth
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Masahiro Oda
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Information Technology Center, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Research Center for Medical Bigdata, National Institute of Informatics, Hitotsubashi 2-1-2, Chiyoda-ku, Tokyo, 101-8430, Japan
153
Wickstrøm K, Kampffmeyer M, Jenssen R. Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps. Med Image Anal 2019; 60:101619. [PMID: 31810005] [DOI: 10.1016/j.media.2019.101619]
Abstract
Colorectal polyps are known to be potential precursors of colorectal cancer, which is one of the leading causes of cancer-related deaths globally. Early detection and prevention of colorectal cancer are primarily enabled through manual screenings, in which the intestines of a patient are visually examined. Such a procedure can be challenging and exhausting for the person performing the screening, which has motivated numerous studies on designing automatic systems to support physicians during the examination. Recently, such automatic systems have improved significantly as a result of an increasing amount of publicly available colorectal imagery and advances in deep learning research for object recognition. Specifically, decision support systems (DSSs) based on Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance on both detection and segmentation of colorectal polyps. However, to be helpful in a medical context, CNN-based models must not only be precise; their interpretability and the uncertainty in their predictions must also be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. Furthermore, we propose a novel method for estimating the uncertainty associated with important features in the input and demonstrate how interpretability and uncertainty can be modeled in DSSs for semantic segmentation of colorectal polyps. Results indicate that deep models use the shape and edge information of polyps to make their predictions, and that inaccurate predictions show a higher degree of uncertainty than precise ones.
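The abstract does not name the uncertainty estimator, so the following is purely illustrative: Monte Carlo dropout is one widely used way to obtain per-pixel uncertainty for segmentation networks, and a sketch of it (assuming a binary-segmentation model with dropout layers) looks like this:

```python
import torch

@torch.no_grad()
def mc_dropout_segmentation(model, image, n_samples=20):
    """Estimate per-pixel mean prediction and uncertainty by keeping dropout
    active at test time and averaging stochastic forward passes."""
    model.train()  # enables dropout; assumes batch-norm stats are frozen or absent
    probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)  # prediction map, uncertainty map
```

High-variance pixels would then flag regions where the segmentation should be treated with caution.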
Affiliation(s)
- Kristoffer Wickstrøm
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway
- Michael Kampffmeyer
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway
- Robert Jenssen
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway
154
Yamada M, Saito Y, Imaoka H, Saiko M, Yamada S, Kondo H, Takamaru H, Sakamoto T, Sese J, Kuchiba A, Shibata T, Hamamoto R. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci Rep 2019; 9:14465. [PMID: 31594962] [PMCID: PMC6783454] [DOI: 10.1038/s41598-019-50567-5]
Abstract
Gaps in colonoscopy skill among endoscopists, primarily due to differences in experience, have been identified, and solutions are critically needed. The development of a real-time, robust detection system for colorectal neoplasms could therefore significantly reduce the risk of lesions being missed during colonoscopy. Here, we develop an artificial intelligence (AI) system that automatically detects early signs of colorectal cancer during colonoscopy. In the validation set, the AI system achieves a sensitivity of 97.3% (95% confidence interval [CI] = 95.9%-98.4%) and a specificity of 99.0% (95% CI = 98.6%-99.2%), with an area under the curve of 0.975 (95% CI = 0.964-0.986). The sensitivities are 98.0% (95% CI = 96.6%-98.8%) in the polypoid subgroup and 93.7% (95% CI = 87.6%-96.9%) in the non-polypoid subgroup. To accelerate detection, the tensors in the trained model were decomposed, and the system can predict cancerous regions in 21.9 ms per image on average. These findings suggest that the system is sufficient to support endoscopists in detecting non-polypoid lesions, which are frequently missed by optical colonoscopy. This AI system can alert endoscopists in real time to avoid missing abnormalities such as non-polypoid lesions during colonoscopy, improving the early detection of this disease.
Affiliation(s)
- Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan
- Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Hitoshi Imaoka
- Biometrics Research Laboratories, NEC Corporation, Kanagawa, Japan
- Masahiro Saiko
- Biometrics Research Laboratories, NEC Corporation, Kanagawa, Japan
- Shigemi Yamada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan
- Advanced Intelligence Project Center, RIKEN, Tokyo, Japan
- Hiroko Kondo
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan
- Advanced Intelligence Project Center, RIKEN, Tokyo, Japan
- Taku Sakamoto
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Jun Sese
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
- Aya Kuchiba
- Biostatistics Division, National Cancer Center, Tokyo, Japan
- Taro Shibata
- Biostatistics Division, National Cancer Center, Tokyo, Japan
- Ryuji Hamamoto
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan
- Advanced Intelligence Project Center, RIKEN, Tokyo, Japan
155
Wang P, Berzin TM, Glissen Brown JR, Bharadwaj S, Becq A, Xiao X, Liu P, Li L, Song Y, Zhang D, Li Y, Xu G, Tu M, Liu X. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut 2019; 68:1813-1819. [PMID: 30814121] [PMCID: PMC6839720] [DOI: 10.1136/gutjnl-2018-317500]
Abstract
OBJECTIVE The effect of colonoscopy on colorectal cancer mortality is limited by several factors, among them a certain miss rate, leading to limited adenoma detection rates (ADRs). We investigated the effect of an automatic polyp detection system based on deep learning on the polyp detection rate and ADR. DESIGN In an open, non-blinded trial, consecutive patients were prospectively randomised to undergo diagnostic colonoscopy with or without the assistance of a real-time automatic polyp detection system providing a simultaneous visual notice and sound alarm on polyp detection. The primary outcome was ADR. RESULTS Of 1058 patients included, 536 were randomised to standard colonoscopy and 522 to colonoscopy with computer-aided diagnosis. The artificial intelligence (AI) system significantly increased the ADR (29.1% vs 20.3%, p<0.001) and the mean number of adenomas per patient (0.53 vs 0.31, p<0.001). This was due to a higher number of diminutive adenomas found (185 vs 102; p<0.001), while there was no statistical difference for larger adenomas (77 vs 58, p=0.075). The number of hyperplastic polyps was also significantly increased (114 vs 52, p<0.001). CONCLUSIONS In a population with a low ADR, an automatic polyp detection system during colonoscopy resulted in a significant increase in the number of diminutive adenomas detected, as well as an increase in the rate of hyperplastic polyps. The cost-benefit ratio of such effects has to be determined further. TRIAL REGISTRATION NUMBER ChiCTR-DDD-17012221.
Affiliation(s)
- Pu Wang
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Tyler M Berzin
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Jeremy Romek Glissen Brown
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Shishira Bharadwaj
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Aymeric Becq
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Xun Xiao
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Peixi Liu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Liangping Li
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Yan Song
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Di Zhang
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Yi Li
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Guangre Xu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Mengtian Tu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
- Xiaogang Liu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Chengdu, China
156
Region-Based Automated Localization of Colonoscopy and Wireless Capsule Endoscopy Polyps. Appl Sci (Basel) 2019; 9:2404. [DOI: 10.3390/app9122404]
Abstract
The early detection of polyps could help prevent colorectal cancer. Automated detection of polyps on the colon walls could reduce the number of false negatives that occur due to manual examination errors or polyps being hidden behind folds, and could also help doctors locate polyps from screening tests such as colonoscopy and wireless capsule endoscopy. Missed polyps may progress into serious lesions. In this paper, we propose a modified region-based convolutional neural network (R-CNN) that generates masks around polyps detected in still frames. The locations of the polyps in the image are marked, which assists doctors in examining them. Features are extracted from the polyp images using pre-trained ResNet-50 and ResNet-101 models through feature extraction and fine-tuning techniques. Various publicly available polyp datasets are analyzed with various pretrained weights. Interestingly, fine-tuning with balloon data (polyp-like natural images) improved the polyp detection rate. The optimum CNN models on the colonoscopy datasets CVC-ColonDB, CVC-PolypHD, and ETIS-Larib produced (F1 score, F2 score) values of (90.73, 91.27), (80.65, 79.11), and (76.43, 78.70), respectively. The best model on the wireless capsule endoscopy dataset gave a performance of (96.67, 96.10). The experimental results indicate better localization of polyps compared to recent traditional and deep learning methods.
157
Liu M, Jiang J, Wang Z. Colonic Polyp Detection in Endoscopic Videos With Single Shot Detection Based Deep Convolutional Neural Network. IEEE Access 2019; 7:75058-75066. [PMID: 33604228] [PMCID: PMC7889061] [DOI: 10.1109/access.2019.2921027]
Abstract
A major rise in the prevalence of colorectal cancer (CRC) leads to substantially increasing healthcare costs and even death. It is widely accepted that early detection and removal of colonic polyps can prevent CRC. Detecting colonic polyps in colonoscopy videos is difficult because of the complex environment of the colon and the varied shapes of polyps. Researchers have shown the feasibility of Convolutional Neural Network (CNN)-based detection of polyps, but better feature extractors are needed to improve detection performance. In this paper, we investigated the potential of the single shot detector (SSD) framework for detecting polyps in colonoscopy videos. SSD is a one-stage method that uses a feed-forward CNN to produce a collection of fixed-size bounding boxes for each object from different feature maps. Three feature extractors, ResNet50, VGG16, and InceptionV3, were assessed, with multi-scale feature maps integrated into SSD designed for ResNet50 and InceptionV3, respectively. We validated this method on the 2015 MICCAI polyp detection challenge datasets and compared it with the teams that participated in the challenge, with YOLOv3, and with a two-stage method, Faster R-CNN. Our results demonstrated that the proposed method surpassed all the challenge teams as well as YOLOv3 and was comparable with the two-stage method. In terms of detection speed in particular, the proposed method outperformed all the other methods and met real-time application requirements. Among the feature extractors, InceptionV3 obtained the best precision and recall. In conclusion, the SSD-based method achieved excellent detection performance in polyp detection and can potentially improve diagnostic accuracy and efficiency.
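For readers unfamiliar with the SSD framework the paper builds on: each selected feature map gets a small convolutional head that densely predicts class scores and box offsets for k anchors per cell. A minimal sketch, with illustrative channel counts rather than the paper's configuration:

```python
import torch.nn as nn

class SSDHead(nn.Module):
    """One detection head per feature map: num_anchors boxes per cell, each
    with num_classes scores and 4 box offsets, as in the SSD framework."""
    def __init__(self, channels=(512, 1024, 512), num_anchors=4, num_classes=2):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Conv2d(c, num_anchors * (num_classes + 4), kernel_size=3, padding=1)
            for c in channels)

    def forward(self, feature_maps):
        # each output: (batch, anchors*(classes+4), H_i, W_i) at scale i;
        # coarser maps see larger objects, finer maps see smaller ones
        return [head(f) for head, f in zip(self.heads, feature_maps)]
```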
Affiliation(s)
- Ming Liu
- Hunan Key Laboratory of Nonferrous Resources and Geological Hazard Exploration, Changsha 410083, China
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Zenan Wang
- Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing 100020, China
158
Rau A, Edwards PJE, Ahmad OF, Riordan P, Janatka M, Lovat LB, Stoyanov D. Implicit domain adaptation with conditional generative adversarial networks for depth prediction in endoscopy. Int J Comput Assist Radiol Surg 2019; 14:1167-1176. [PMID: 30989505] [PMCID: PMC6570710] [DOI: 10.1007/s11548-019-01962-w]
Abstract
PURPOSE Colorectal cancer is the third most common cancer worldwide, and early therapeutic treatment of precancerous tissue during colonoscopy is crucial for better prognosis and can be curative. Navigation within the colon and comprehensive inspection of the endoluminal tissue are key to successful colonoscopy but can vary with the skill and experience of the endoscopist. Computer-assisted interventions in colonoscopy can provide better support tools for mapping the colon to ensure complete examination and for automatically detecting abnormal tissue regions. METHODS We train the conditional generative adversarial network pix2pix to transform monocular endoscopic images to depth, which can be a building block in a navigational pipeline or be used to measure the size of polyps during colonoscopy. To overcome the lack of labelled training data in endoscopy, we propose to use simulation environments and to additionally train the generator and discriminator of the model on unlabelled real video frames in order to adapt to real colonoscopy environments. RESULTS We report promising results on synthetic, phantom and real datasets and show that generative models outperform discriminative models when predicting depth from colonoscopy images, in terms of both accuracy and robustness to changes in domain. CONCLUSIONS By training the discriminator and generator of the model on real images, we show that our model performs implicit domain adaptation, which is a key step towards bridging the gap between synthetic and real data. Importantly, we demonstrate the feasibility of training a single model to predict depth from both synthetic and real images without the need for explicit, unsupervised transformer networks mapping between the domains of synthetic and real data.
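For reference, the pix2pix objective is conventionally a conditional adversarial term plus an L1 reconstruction term against the paired ground truth (here, depth from the simulation environment). A sketch of the generator's loss under those standard assumptions (names and the weighting factor are illustrative):

```python
import torch
import torch.nn.functional as F

def generator_loss(discriminator, rgb, fake_depth, real_depth, lam=100.0):
    """pix2pix-style objective: fool the conditional discriminator while
    staying close to the ground-truth depth in L1."""
    pred_fake = discriminator(torch.cat([rgb, fake_depth], dim=1))  # condition on input
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    return adv + lam * F.l1_loss(fake_depth, real_depth)
```

The L1 term keeps predictions metrically close to the supervision, while the adversarial term sharpens structure that plain regression tends to blur.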
Affiliation(s)
- Anita Rau
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- P J Eddie Edwards
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Omer F Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Mirek Janatka
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
159
Qadir HA, Balasingham I, Solhusvik J, Bergsland J, Aabakken L, Shin Y. Improving Automatic Polyp Detection Using CNN by Exploiting Temporal Dependency in Colonoscopy Video. IEEE J Biomed Health Inform 2019; 24:180-193. [PMID: 30946683] [DOI: 10.1109/jbhi.2019.2907434]
Abstract
Automatic polyp detection has been shown to be difficult due to various polyp-like structures in the colon and high interclass variation in polyp size, color, shape, and texture. An efficient method should not only have a high correct detection rate (high sensitivity) but also a low false detection rate (high precision and specificity). State-of-the-art detection methods include convolutional neural networks (CNNs). However, CNNs have been shown to be vulnerable to small perturbations and noise; they sometimes miss the same polyp appearing in neighboring frames and produce a high number of false positives. We aim to tackle this problem and improve the overall performance of CNN-based object detectors for polyp detection in colonoscopy videos. Our method consists of two stages: a region of interest (RoI) proposal by CNN-based object detector networks, and a false positive (FP) reduction unit. The FP reduction unit exploits the temporal dependencies among image frames in a video by integrating the bidirectional temporal information obtained from RoIs in a set of consecutive frames, and uses this information to make the final decision. The experimental results show that the bidirectional temporal information is helpful in estimating polyp positions and accurately predicting FPs. This provides an overall performance improvement in terms of sensitivity, precision, and specificity compared to the conventional false-positive learning method, and thus achieves state-of-the-art results on the CVC-ClinicVideoDB video dataset.
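The abstract leaves the exact integration scheme unspecified; one simple reading of "bidirectional temporal information" is to keep a detection only when overlapping detections exist in both the preceding and the following frames, as in this hypothetical sketch:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def temporally_consistent(detections, t, box, window=3, thr=0.3):
    """Accept a box at frame t only if similar boxes appear in neighbouring
    frames on both sides (bidirectional support); otherwise flag it as an FP.
    `detections` maps frame index -> list of boxes from the RoI proposal stage."""
    support = lambda frames: any(iou(box, b) > thr
                                 for f in frames for b in detections.get(f, []))
    return support(range(t - window, t)) and support(range(t + 1, t + window + 1))
```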
160
Zhang X, Chen F, Yu T, An J, Huang Z, Liu J, Hu W, Wang L, Duan H, Si J. Real-time gastric polyp detection using convolutional neural networks. PLoS One 2019; 14:e0214133. [PMID: 30908513] [PMCID: PMC6433439] [DOI: 10.1371/journal.pone.0214133]
Abstract
Computer-aided polyp detection in gastroscopy has been the subject of research over the past few decades. However, despite significant advances, automatic polyp detection in real time is still an unsolved problem. In this paper, we report a convolutional neural network (CNN) for polyp detection that is constructed on the Single Shot MultiBox Detector (SSD) architecture, which we call SSD for Gastric Polyps (SSD-GPNet). To take full advantage of the information in the feature pyramid's feature maps and to achieve higher accuracy, we reuse information that is discarded by max-pooling layers, concatenating that lost data as extra feature maps to contribute to classification and detection. Meanwhile, in the feature pyramid, we concatenate feature maps of the lower layers with feature maps deconvolved from upper layers to make the relationships between layers explicit and to effectively increase the number of channels. The results show that our enhanced SSD for gastric polyp detection can perform real-time polyp detection at 50 frames per second (FPS) and improve the mean average precision (mAP) from 88.5% to 90.4%, with only a small loss in time performance. A further experiment shows that SSD-GPNet improves polyp detection recall by over 10% (p = 0.00053), especially for small polyps. This can help endoscopic physicians more easily find missed polyps and decrease the gastric polyp miss rate, and may be applicable in daily clinical practice to reduce the burden on physicians.
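The second of the two ideas above, concatenating a lower-layer map with an upper-layer map deconvolved back to the same resolution, can be sketched as follows (PyTorch, illustrative channel counts; the max-pooling reuse branch is omitted because the abstract does not detail its exact form):

```python
import torch
import torch.nn as nn

class ConcatUp(nn.Module):
    """Fuse a lower-layer map with an upsampled (deconvolved) upper-layer map
    by channel concatenation, increasing the channels fed to the SSD heads."""
    def __init__(self, upper_ch, lower_ch):
        super().__init__()
        # stride-2 transposed conv exactly doubles the spatial resolution
        self.deconv = nn.ConvTranspose2d(upper_ch, lower_ch, kernel_size=2, stride=2)

    def forward(self, lower, upper):
        return torch.cat([lower, self.deconv(upper)], dim=1)

# e.g. if lower is (B, 512, H, W) and upper is (B, 1024, H/2, W/2):
# fused = ConcatUp(1024, 512)(lower, upper)  # -> (B, 1024, H, W)
```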
Affiliation(s)
- Xu Zhang
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Fei Chen
- Institute of Gastroenterology, Zhejiang University, Hangzhou, China
- Department of Gastroenterology, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Tao Yu
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Jiye An
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Zhengxing Huang
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Jiquan Liu
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Weiling Hu
- Institute of Gastroenterology, Zhejiang University, Hangzhou, China
- Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou, China
- Liangjing Wang
- Institute of Gastroenterology, Zhejiang University, Hangzhou, China
- Department of Gastroenterology, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Huilong Duan
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Jianmin Si
- Institute of Gastroenterology, Zhejiang University, Hangzhou, China
- Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou, China
161
On Structural Entropy and Spatial Filling Factor Analysis of Colonoscopy Pictures. Entropy (Basel) 2019; 21:256. [PMID: 33266971] [PMCID: PMC7514738] [DOI: 10.3390/e21030256]
Abstract
Colonoscopy is the standard procedure for diagnosing colorectal cancer, which develops from small lesions on the bowel wall called polyps. The Rényi entropies-based structural entropy and spatial filling factor are two scale- and resolution-independent quantities that characterize the shape of a probability distribution with the help of characteristic curves of the structural entropy–spatial filling factor map. This alternative definition of structural entropy is easy to calculate, is independent of the image resolution, and does not require the calculation of neighbor statistics, unlike other graph-based structural entropies. The long-term goal of this study was to help computer-aided diagnosis find colorectal polyps by making the Rényi entropy-based structural entropy better understood. The direct goal was to determine characteristic curves that can differentiate between polyps and other structures in the picture. After analyzing the distribution of the colonoscopy picture color channels, the typical structures were modeled with simple geometrical functions, and the structural entropy–spatial filling factor characteristic curves were determined for these model structures for various parameter sets. A colonoscopy image analyzing method, i.e., line- or column-wise scanning of the picture, was also tested, with satisfactory matching between the characteristic curves and the images.
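The quantities involved are compact enough to state explicitly. For a normalized distribution p over N pixels, the Rényi entropy of order q is S_q = ln(sum_i p_i^q)/(1-q). In the Pipek-Varga formulation that this line of work builds on (our reading; the abstract does not spell out the definitions), the structural entropy is S_str = S_1 - S_2 and the spatial filling factor is q = exp(S_2)/N. A short sketch under those assumed definitions:

```python
import numpy as np

def structural_entropy_and_filling(channel):
    """Compute (S_str, q) for one image channel, assuming the Pipek-Varga
    definitions: S_str = S1 - S2 and filling factor q = exp(S2) / N."""
    p = channel.astype(float).ravel()
    p = p / p.sum()                      # normalize intensities to a distribution
    p = p[p > 0]                         # zero bins contribute nothing to the sums
    s1 = -(p * np.log(p)).sum()          # Shannon entropy (Rényi order q -> 1)
    s2 = -np.log((p ** 2).sum())         # second-order Rényi (extension) entropy
    return s1 - s2, np.exp(s2) / channel.size
```

Both outputs are dimensionless, which is what makes the (S_str, q) characteristic curves scale- and resolution-independent.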
162
Maier-Hein L, Eisenmann M, Reinke A, Onogur S, Stankovic M, Scholz P, Arbel T, Bogunovic H, Bradley AP, Carass A, Feldmann C, Frangi AF, Full PM, van Ginneken B, Hanbury A, Honauer K, Kozubek M, Landman BA, März K, Maier O, Maier-Hein K, Menze BH, Müller H, Neher PF, Niessen W, Rajpoot N, Sharp GC, Sirinukunwattana K, Speidel S, Stock C, Stoyanov D, Taha AA, van der Sommen F, Wang CW, Weber MA, Zheng G, Jannin P, Kopp-Schneider A. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat Commun 2018; 9:5217. [PMID: 30523263] [PMCID: PMC6284017] [DOI: 10.1038/s41467-018-07619-7]
Abstract
International challenges have become the standard for validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted up to now. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered, as only a fraction of relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables, such as the test data used for validation, the ranking scheme applied, and the observers that make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Annika Reinke
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Marko Stankovic
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Patrick Scholz
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Tal Arbel
- Centre for Intelligent Machines, McGill University, Montreal, QC, H3A0G4, Canada
- Hrvoje Bogunovic
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University Vienna, 1090, Vienna, Austria
- Andrew P Bradley
- Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD, 4001, Australia
- Aaron Carass
- Department of Electrical and Computer Engineering, Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218, USA
- Carolin Feldmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Alejandro F Frangi
- CISTIB - Center for Computational Imaging & Simulation Technologies in Biomedicine, The University of Leeds, Leeds, Yorkshire, LS2 9JT, UK
- Peter M Full
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Bram van Ginneken
- Department of Radiology and Nuclear Medicine, Medical Image Analysis, Radboud University Center, 6525 GA, Nijmegen, The Netherlands
- Allan Hanbury
- Institute of Information Systems Engineering, TU Wien, 1040, Vienna, Austria
- Complexity Science Hub Vienna, 1080, Vienna, Austria
- Katrin Honauer
- Heidelberg Collaboratory for Image Processing (HCI), Heidelberg University, 69120, Heidelberg, Germany
- Michal Kozubek
- Centre for Biomedical Image Analysis, Masaryk University, 60200, Brno, Czech Republic
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, 37235-1679, USA
- Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Oskar Maier
- Institute of Medical Informatics, Universität zu Lübeck, 23562, Lübeck, Germany
- Klaus Maier-Hein
- Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Bjoern H Menze
- Institute for Advanced Studies, Department of Informatics, Technical University of Munich, 80333, Munich, Germany
- Henning Müller
- Information System Institute, HES-SO, Sierre, 3960, Switzerland
- Peter F Neher
- Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Wiro Niessen
- Departments of Radiology, Nuclear Medicine and Medical Informatics, Erasmus MC, 3015 GD, Rotterdam, The Netherlands
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Gregory C Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, 02114, USA
- Stefanie Speidel
- Division of Translational Surgical Oncology (TCO), National Center for Tumor Diseases Dresden, 01307, Dresden, Germany
- Christian Stock
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
- Danail Stoyanov
- Centre for Medical Image Computing (CMIC) & Department of Computer Science, University College London, London, W1W 7TS, UK
- Abdel Aziz Taha
- Data Science Studio, Research Studios Austria FG, 1090, Vienna, Austria
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven, The Netherlands
- Ching-Wei Wang
- AIExplore, NTUST Center of Computer Vision and Medical Imaging, Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
- Marc-André Weber
- Institute of Diagnostic and Interventional Radiology, University Medical Center Rostock, 18051, Rostock, Germany
- Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, 3014, Switzerland
- Pierre Jannin
- Univ Rennes, Inserm, LTSI (Laboratoire Traitement du Signal et de l'Image) - UMR_S 1099, Rennes, 35043, Cedex, France
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
163
Ahmad OF, Soares AS, Mazomenos E, Brandao P, Vega R, Seward E, Stoyanov D, Chand M, Lovat LB. Artificial intelligence and computer-aided diagnosis in colonoscopy: current evidence and future directions. Lancet Gastroenterol Hepatol 2018; 4:71-80. [PMID: 30527583] [DOI: 10.1016/s2468-1253(18)30282-6]
Abstract
Computer-aided diagnosis offers a promising solution to reduce variation in colonoscopy performance. Pooled miss rates for polyps are as high as 22%, and associated interval colorectal cancers after colonoscopy are of concern. Optical biopsy, whereby in-vivo classification of polyps based on enhanced imaging replaces histopathology, has not been incorporated into routine practice because it is limited by interobserver variability and generally only meets accepted standards in expert settings. Real-time decision-support software has been developed to detect and characterise polyps, and also to offer feedback on the technical quality of inspection. Some of the current algorithms, particularly with recent advances in artificial intelligence techniques, match human expert performance for optical biopsy. In this Review, we summarise the evidence for clinical applications of computer-aided diagnosis and artificial intelligence in colonoscopy.
Affiliation(s)
- Omer F Ahmad
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
- Antonio S Soares
- Division of Surgery & Interventional Science, University College London, London, UK
- Evangelos Mazomenos
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Patrick Brandao
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Roser Vega
- Gastrointestinal Services, University College London Hospital, London, UK
- Edward Seward
- Gastrointestinal Services, University College London Hospital, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Manish Chand
- Division of Surgery & Interventional Science, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK; Division of Surgery & Interventional Science, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
164
Mahmood F, Chen R, Durr NJ. Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training. IEEE Trans Med Imaging 2018; 37:2572-2581. [PMID: 29993538] [DOI: 10.1109/tmi.2018.2842767]
Abstract
To realize the full potential of deep learning for medical imaging, large annotated datasets are required for training. Such datasets are difficult to acquire due to privacy issues, lack of experts available for annotation, underrepresentation of rare conditions, and poor standardization. The lack of annotated data has been addressed in conventional vision applications using synthetic images refined via unsupervised adversarial training to look like real images. However, this approach is difficult to extend to general medical imaging because of the complex and diverse set of features found in real human tissues. We propose a novel framework that uses a reverse flow, where adversarial training is used to make real medical images more like synthetic images, and clinically relevant features are preserved via self-regularization. These domain-adapted synthetic-like images can then be accurately interpreted by networks trained on large datasets of synthetic medical images. We implement this approach on the notoriously difficult task of depth estimation from monocular endoscopy, which has a variety of applications in colonoscopy, robotic surgery, and invasive endoscopic procedures. We train a depth estimator on a large dataset of synthetic images generated using an accurate forward model of an endoscope and an anatomically realistic colon. Our analysis demonstrates that the structural similarity of endoscopy depth estimation in a real pig colon, predicted from a network trained solely on synthetic data, improved by 78.7% by using reverse domain adaptation.
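The core objective is easy to state: a transformer network maps real frames toward the synthetic domain under an adversarial loss, while a self-regularization term keeps the output close to the input so clinically relevant content survives. A sketch of that composite loss under our reading of the abstract (names and weighting are assumptions):

```python
import torch
import torch.nn.functional as F

def transformer_loss(discriminator, real_img, transformed, lam=0.5):
    """Adversarial term pushes transformed real images to look synthetic;
    the self-regularization term penalizes drifting from the input content."""
    pred = discriminator(transformed)
    adv = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))  # "synthetic" label
    self_reg = F.l1_loss(transformed, real_img)  # preserve clinically relevant structure
    return adv + lam * self_reg
```

At inference, the frozen synthetic-trained depth estimator is simply applied to the transformed real frames.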
165
Al Hajj H, Lamard M, Conze PH, Roychowdhury S, Hu X, Maršalkaitė G, Zisimopoulos O, Dedmari MA, Zhao F, Prellberg J, Sahu M, Galdran A, Araújo T, Vo DM, Panda C, Dahiya N, Kondo S, Bian Z, Vahdat A, Bialopetravičius J, Flouty E, Qiu C, Dill S, Mukhopadhyay A, Costa P, Aresta G, Ramamurthy S, Lee SW, Campilho A, Zachow S, Xia S, Conjeti S, Stoyanov D, Armaitis J, Heng PA, Macready WG, Cochener B, Quellec G. CATARACTS: Challenge on automatic tool annotation for cataRACT surgery. Med Image Anal 2018; 52:24-41. [PMID: 30468970] [DOI: 10.1016/j.media.2018.11.008]
Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
Affiliation(s)
- Mathieu Lamard
- Inserm, UMR 1101, Brest, F-29200, France; Univ Bretagne Occidentale, Brest, F-29200, France
- Pierre-Henri Conze
- Inserm, UMR 1101, Brest, F-29200, France; IMT Atlantique, LaTIM UMR 1101, UBL, Brest, F-29200, France
- Xiaowei Hu
- Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Muneer Ahmad Dedmari
- Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany
- Fenqiang Zhao
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Jonas Prellberg
- Dept. of Informatics, Carl von Ossietzky University, Oldenburg, 26129, Germany
- Manish Sahu
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Adrian Galdran
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Teresa Araújo
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Duc My Vo
- Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea
- Navdeep Dahiya
- Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA
- Arash Vahdat
- D-Wave Systems Inc., Burnaby, BC, V5G 4M9, Canada
- Chenhui Qiu
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Sabrina Dill
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Anirban Mukhopadhyay
- Department of Computer Science, Technische Universität Darmstadt, Darmstadt, 64283, Germany
- Pedro Costa
- INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Guilherme Aresta
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Senthil Ramamurthy
- Laboratory of Computational Computer Vision, Georgia Tech, Atlanta, GA, 30332, USA
- Sang-Woong Lee
- Gachon University, 1342 Seongnamdaero, Sujeonggu, Seongnam, 13120, Korea
- Aurélio Campilho
- Faculdade de Engenharia, Universidade do Porto, Porto, 4200-465, Portugal; INESC TEC - Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência, Porto, 4200-465, Portugal
- Stefan Zachow
- Department of Visual Data Analysis, Zuse Institute Berlin, Berlin, 14195, Germany
- Shunren Xia
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, HangZhou, 310000, China
- Sailesh Conjeti
- Chair for Computer Aided Medical Procedures, Faculty of Informatics, Technical University of Munich, Garching b. Munich, 85748, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, 53127, Germany
- Danail Stoyanov
- Digital Surgery Ltd, EC1V 2QY, London, UK; University College London, Gower Street, WC1E 6BT, London, UK
- Pheng-Ann Heng
- Dept. of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Béatrice Cochener
- Inserm, UMR 1101, Brest, F-29200, France; Univ Bretagne Occidentale, Brest, F-29200, France; Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
166
Zhang R, Zheng Y, Poon CC, Shen D, Lau JY. Polyp detection during colonoscopy using a regression-based convolutional neural network with a tracker. Pattern Recognit 2018; 83:209-219. [PMID: 31105338] [PMCID: PMC6519928] [DOI: 10.1016/j.patcog.2018.05.026]
Abstract
A computer-aided detection (CAD) tool for locating and detecting polyps can help reduce the chance of missing polyps during colonoscopy. Nevertheless, state-of-the-art algorithms have been either computationally complex or limited by low sensitivity, and are therefore unsuitable for use in a real clinical setting. In this paper, a novel regression-based Convolutional Neural Network (CNN) pipeline is presented for polyp detection during colonoscopy. The proposed pipeline has two parts: 1) to learn the spatial features of colorectal polyps, a fast object detection algorithm named ResYOLO was pre-trained on a large non-medical image database and further fine-tuned with colonoscopic images extracted from videos; and 2) temporal information was incorporated via a tracker named Efficient Convolution Operators (ECO) to refine the detection results given by ResYOLO. Evaluated on 17,574 frames extracted from 18 endoscopic videos of the AsuMayoDB, the proposed method detected frames with polyps with a precision of 88.6% and a recall of 71.6% at a processing speed of 6.5 frames per second, i.e., the method can accurately locate polyps in more frames and at a faster speed than existing methods. In conclusion, the proposed method has great potential to be used to assist endoscopists in tracking polyps during colonoscopy.
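One plausible reading of the two-part pipeline, though the paper's actual fusion logic may differ, is a per-frame rule: (re)initialize the tracker whenever the detector fires confidently, and otherwise let the tracker carry the polyp across frames. The tracker API below is hypothetical:

```python
def refine_with_tracker(detection, tracker):
    """Per-frame fusion: reseed the tracker on a confident detection,
    otherwise propagate the last target with the tracker (e.g., ECO)."""
    if detection is not None and detection.score > 0.5:   # threshold is illustrative
        tracker.init(detection.box)                       # assumed tracker interface
        return detection.box
    if tracker.is_active():
        return tracker.predict()                          # temporal continuity fills detector misses
    return None
```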
Affiliation(s)
- Ruikai Zhang
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Yali Zheng
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Carmen C.Y. Poon
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- James Y.W. Lau
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
167
Development and validation of a deep-learning algorithm for the detection of polyps during colonoscopy. Nat Biomed Eng 2018; 2:741-748. [PMID: 31015647] [DOI: 10.1038/s41551-018-0301-3]
Abstract
The detection and removal of precancerous polyps via colonoscopy is the gold standard for the prevention of colon cancer. However, the detection rate of adenomatous polyps can vary significantly among endoscopists. Here, we show that a machine-learning algorithm can detect polyps in clinical colonoscopies, in real time and with high sensitivity and specificity. We developed the deep-learning algorithm by using data from 1,290 patients, and validated it on 27,113 newly collected colonoscopy images from 1,138 patients with at least one detected polyp (per-image sensitivity, 94.38%; per-image specificity, 95.92%; area under the receiver operating characteristic curve, 0.984), on a public database of 612 polyp-containing images (per-image sensitivity, 88.24%), on 138 colonoscopy videos with histologically confirmed polyps (per-image sensitivity, 91.64%; per-polyp sensitivity, 100%), and on 54 unaltered full-range colonoscopy videos without polyps (per-image specificity, 95.40%). By using a multi-threaded processing system, the algorithm can process at least 25 frames per second with a latency of 76.80 ± 5.60 ms in real-time video analysis. The software may aid endoscopists while performing colonoscopies, and help assess differences in polyp and adenoma detection performance among endoscopists.
168
Iakovidis DK, Georgakopoulos SV, Vasilakakis M, Koulaouzidis A, Plagianakos VP. Detecting and Locating Gastrointestinal Anomalies Using Deep Learning and Iterative Cluster Unification. IEEE Trans Med Imaging 2018; 37:2196-2210. [PMID: 29994763] [DOI: 10.1109/tmi.2018.2837002]
Abstract
This paper proposes a novel methodology for automatic detection and localization of gastrointestinal (GI) anomalies in endoscopic video frame sequences. Training is performed with weakly annotated images, using only image-level semantic labels instead of detailed pixel-level annotations. This makes it a cost-effective approach for the analysis of large videoendoscopy repositories. Other advantages of the proposed methodology include its capability to suggest possible locations of GI anomalies within the video frames, and its generality, in the sense that abnormal frame detection is based on automatically derived image features. It is implemented in three phases: 1) it classifies the video frames as abnormal or normal using a weakly supervised convolutional neural network (WCNN) architecture; 2) it detects salient points from deeper WCNN layers, using a deep saliency detection algorithm; and 3) it localizes GI anomalies using an iterative cluster unification (ICU) algorithm. ICU is based on a pointwise cross-feature-map (PCFM) descriptor extracted locally from the detected salient points using information derived from the WCNN. Results from extensive experimentation using publicly available collections of gastrointestinal endoscopy video frames are presented. The datasets used include a variety of GI anomalies. Both the anomaly detection and localization performance, in terms of the area under the receiver operating characteristic curve (AUC), exceeded 80%. The highest AUC for anomaly detection was obtained on conventional gastroscopy images, reaching 96%, and the highest AUC for anomaly localization was obtained on wireless capsule endoscopy images, reaching 88%.
169
Urban G, Tripathi P, Alkayali T, Mittal M, Jalali F, Karnes W, Baldi P. Deep Learning Localizes and Identifies Polyps in Real Time With 96% Accuracy in Screening Colonoscopy. Gastroenterology 2018; 155:1069-1078.e8. [PMID: 29928897] [PMCID: PMC6174102] [DOI: 10.1053/j.gastro.2018.06.037]
Abstract
BACKGROUND & AIMS The benefit of colonoscopy for colorectal cancer prevention depends on the adenoma detection rate (ADR). The ADR should reflect the adenoma prevalence rate, which is estimated to be higher than 50% in the screening-age population. However, the ADR by colonoscopists varies from 7% to 53%. It is estimated that every 1% increase in ADR lowers the risk of interval colorectal cancers by 3%-6%. New strategies are needed to increase the ADR during colonoscopy. We tested the ability of computer-assisted image analysis using convolutional neural networks (CNNs; a deep learning model for image analysis) to improve polyp detection, a surrogate of ADR. METHODS We designed and trained deep CNNs to detect polyps using a diverse and representative set of 8,641 hand-labeled images from screening colonoscopies collected from more than 2000 patients. We tested the models on 20 colonoscopy videos with a total duration of 5 hours. Expert colonoscopists were asked to identify all polyps in 9 de-identified colonoscopy videos, which were selected from archived video studies, with or without benefit of the CNN overlay. Their findings were compared with those of the CNN using CNN-assisted expert review as the reference. RESULTS When tested on manually labeled images, the CNN identified polyps with an area under the receiver operating characteristic curve of 0.991 and an accuracy of 96.4%. In the analysis of colonoscopy videos in which 28 polyps were removed, 4 expert reviewers identified 8 additional polyps without CNN assistance that had not been removed and identified an additional 17 polyps with CNN assistance (45 in total). All polyps removed and identified by expert review were detected by the CNN. The CNN had a false-positive rate of 7%. CONCLUSION In a set of 8,641 colonoscopy images containing 4,088 unique polyps, the CNN identified polyps with a cross-validation accuracy of 96.4% and an area under the receiver operating characteristic curve of 0.991. The CNN system detected and localized polyps well within real-time constraints using an ordinary desktop machine with a contemporary graphics processing unit. This system could increase the ADR and decrease interval colorectal cancers but requires validation in large multicenter trials.
Affiliation(s)
- Gregor Urban
- Department of Computer Science, University of California, Irvine, CA, 92697 USA
- Institute for Genomics and Bioinformatics
| | - Priyam Tripathi
- Department of Medicine, University of California, Irvine, CA 92697, USA
| | - Talal Alkayali
- Department of Medicine, University of California, Irvine, CA 92697, USA
- H.H. Chao Comprehensive Digestive Disease Center, University of California, Irvine, Orange, CA 92868, USA
| | - Mohit Mittal
- Department of Medicine, University of California, Irvine, CA 92697, USA
| | - Farid Jalali
- Department of Medicine, University of California, Irvine, CA 92697, USA
- H.H. Chao Comprehensive Digestive Disease Center, University of California, Irvine, Orange, CA 92868, USA
| | - William Karnes
- Department of Medicine, University of California, Irvine, CA 92697, USA
- H.H. Chao Comprehensive Digestive Disease Center, University of California, Irvine, Orange, CA 92868, USA
| | - Pierre Baldi
- Department of Computer Science, University of California, Irvine, CA, 92697 USA
- Institute for Genomics and Bioinformatics
- Center for Machine Learning and Intelligent Systems
| |
|
170
|
|
171
|
GTCreator: a flexible annotation tool for image-based datasets. Int J Comput Assist Radiol Surg 2018; 14:191-201. [PMID: 30255462 DOI: 10.1007/s11548-018-1864-x] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2018] [Accepted: 09/12/2018] [Indexed: 12/14/2022]
Abstract
PURPOSE Methodology evaluation for decision support systems for health is a time-consuming task. To assess the performance of polyp detection methods in colonoscopy videos, clinicians have to deal with the annotation of thousands of images. Existing tools could be improved in terms of flexibility and ease of use. METHODS We introduce GTCreator, a flexible tool for adding image and text annotations to image-based datasets. It keeps the main basic functionalities of similar tools while extending other capabilities, such as allowing multiple annotators to work simultaneously on the same task, enhanced dataset browsing, and easy annotation transfer, aiming to speed up annotation in large datasets. RESULTS Comparison with other similar tools shows that GTCreator allows fast and precise annotation of image datasets, and it is the only one offering full annotation editing and browsing capabilities. CONCLUSION The proposed annotation tool has proven efficient for large image dataset annotation, and it shows potential for use in other stages of method evaluation, such as experimental setup or results analysis.
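For orientation, a minimal sketch of the kind of annotation record and frame-to-frame transfer such a tool manages is given below; the schema is hypothetical and is not GTCreator's actual file format.

```python
# Minimal sketch of an image-annotation record with a transfer helper,
# assuming plain Python; all field names are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    frame: int
    label: str        # e.g. "polyp"
    bbox: tuple       # (x, y, w, h) in pixels
    annotator: str

def transfer(ann: Annotation, target_frame: int) -> Annotation:
    """Copy an annotation to a nearby frame for the annotator to refine."""
    return Annotation(target_frame, ann.label, ann.bbox, ann.annotator)

a = Annotation(frame=120, label="polyp", bbox=(84, 60, 40, 32), annotator="rater1")
b = transfer(a, 121)
print(json.dumps([asdict(a), asdict(b)], indent=2))
```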
|
172
|
Shin Y, Balasingham I. Automatic polyp frame screening using patch based combined feature and dictionary learning. Comput Med Imaging Graph 2018; 69:33-42. [PMID: 30172091 DOI: 10.1016/j.compmedimag.2018.08.001] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Revised: 07/29/2018] [Accepted: 08/13/2018] [Indexed: 12/15/2022]
Abstract
Polyps in the colon can develop into malignant cancer tissue; early detection and removal lead to a high survival rate. Certain types of polyps can be difficult to detect even for highly trained physicians. Motivated by this problem, our study aims to improve human detection performance by developing an automatic polyp screening framework as a decision support tool. We use a combined feature method based on small image patches. Features include shape and color information and are extracted using histogram of oriented gradients (HOG) and hue histogram methods. Dictionary-learning-based training is used to learn features, and the final feature vector is formed using sparse coding. For classification, we use patch-level classification with a linear support vector machine and whole-image thresholding. The proposed framework is evaluated using three public polyp databases. Our experimental results show that the proposed scheme successfully classifies polyps and normal images with over 95% classification accuracy, sensitivity, specificity, and precision. In addition, we compare the performance of the proposed scheme with conventional feature-based methods and with the convolutional neural network (CNN) based deep learning approach, the state-of-the-art technique in many image classification applications.
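A minimal sketch of the pipeline described above (patch-level HOG and hue-histogram features, sparse coding over a learned dictionary, then a linear SVM), assuming scikit-image and scikit-learn; patch size, dictionary size, and sparsity level are illustrative guesses, not the paper's settings.

```python
# Patch-based combined-feature polyp screening sketch.
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import hog
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

def patch_feature(patch_rgb: np.ndarray) -> np.ndarray:
    shape = hog(patch_rgb.mean(axis=2), pixels_per_cell=(8, 8))   # shape cue
    hue = np.histogram(rgb2hsv(patch_rgb)[..., 0], bins=16, range=(0, 1))[0]
    return np.concatenate([shape, hue / max(hue.sum(), 1)])       # color cue

# Placeholder training patches (n, 32, 32, 3) and labels: 0 = normal, 1 = polyp.
X_patches = np.random.rand(200, 32, 32, 3)
y = np.random.randint(0, 2, 200)

F = np.array([patch_feature(p) for p in X_patches])
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5).fit(F)
codes = dico.transform(F)              # sparse codes form the final feature vector
clf = LinearSVC().fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```

At test time, patch-level decisions would be aggregated by whole-image thresholding, as the abstract describes.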
Affiliation(s)
- Younghak Shin
- Department of Electronic Systems, Norwegian University of Science and Technology (NTNU), Trondheim, Norway.
| | - Ilangko Balasingham
- Intervention Centre, Oslo University Hospital, Oslo NO-0027, Norway; Institute of Clinical Medicine, University of Oslo, and the Norwegian University of Science and Technology (NTNU), Norway.
| |
|
173
|
Zheng Y, Zhang R, Yu R, Jiang Y, Mak TWC, Wong SH, Lau JYW, Poon CCY. Localisation of Colorectal Polyps by Convolutional Neural Network Features Learnt from White Light and Narrow Band Endoscopic Images of Multiple Databases. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:4142-4145. [PMID: 30441267 DOI: 10.1109/embc.2018.8513337] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
|
174
|
Towards computer-assisted TTTS: Laser ablation detection for workflow segmentation from fetoscopic video. Int J Comput Assist Radiol Surg 2018; 13:1661-1670. [PMID: 29951938 PMCID: PMC6153674 DOI: 10.1007/s11548-018-1813-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 06/12/2018] [Indexed: 10/28/2022]
Abstract
PURPOSE Intrauterine foetal surgery is the treatment option for several congenital malformations. For twin-to-twin transfusion syndrome (TTTS), interventions involve the use of a laser fibre to ablate vessels in a shared placenta. The procedure presents a number of challenges for the surgeon, and computer-assisted technologies can potentially be a significant support. Vision is the primary source of information from the intrauterine environment, and hence vision-based methods are appealing for extracting higher-level information from the surgical site. METHODS In this paper, we propose a framework to detect one of the key steps during TTTS interventions: ablation. We adopt a deep learning approach, specifically the ResNet101 architecture, for classification of the different surgical actions performed during laser ablation therapy. RESULTS We perform a two-fold cross-validation using almost 50,000 frames from five different TTTS ablation procedures. Our results show that deep learning methods are a promising approach for ablation detection. CONCLUSION To our knowledge, this is the first attempt at automating photocoagulation detection from video, and our technique can be an important component of a larger assistive framework for enhanced foetal therapies. The current implementation does not include semantic segmentation or localisation of the ablation site; this would be a natural extension in future work.
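A minimal sketch of fine-tuning ResNet101 for frame-level surgical-action classification, as the framework above does for ablation detection; the label set, input size, and optimizer settings are placeholders, assuming a recent PyTorch/torchvision.

```python
# Fine-tuning sketch for ablation-vs-other frame classification.
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIONS = 2  # e.g. "ablation" vs. "other"; the paper's exact label set may differ

# Load an ImageNet-pretrained ResNet101 and replace its classification head.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_ACTIONS)

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (B, 3, 224, 224) fetoscopic video frames; labels: (B,) action ids."""
    opt.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_ACTIONS, (4,))))
```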
|
175
|
Zhao R, Zhang R, Tang T, Feng X, Li J, Liu Y, Zhu R, Wang G, Li K, Zhou W, Yang Y, Wang Y, Ba Y, Zhang J, Liu Y, Zhou F. TriZ-a rotation-tolerant image feature and its application in endoscope-based disease diagnosis. Comput Biol Med 2018; 99:182-190. [PMID: 29936284 DOI: 10.1016/j.compbiomed.2018.06.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Revised: 05/30/2018] [Accepted: 06/08/2018] [Indexed: 12/11/2022]
Abstract
Endoscopy is one of the most widely used technologies for screening gastric diseases, and it relies heavily on the experience of the clinical endoscopist. Location, shape, and size are the typical patterns endoscopists use to make diagnostic decisions, and contrasting texture patterns also suggest potential lesions. This study designed a novel rotation-tolerant image feature, TriZ, and demonstrated its effectiveness both for rotation invariance and for detection of three gastric lesion types: gastric polyp, gastric ulcer, and gastritis. TriZ achieved 87.0% accuracy on the four-class classification problem of the three gastric lesion types plus healthy controls, averaged over twenty random runs of 10-fold cross-validation. Because biomedical imaging technologies may capture lesion sites from different angles, the rotation-symmetric feature extraction algorithm TriZ may facilitate biomedical-image-based disease diagnosis modeling. Compared with the 378,434 features of the HOG algorithm, TriZ achieved better accuracy using only 126 image features.
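The TriZ descriptor itself is not specified in the abstract, so the sketch below only illustrates how rotation tolerance of a compact image feature can be checked, using a global intensity histogram as an admittedly crude stand-in.

```python
# Rotation-tolerance check for a compact image feature; the histogram is a
# toy substitute for TriZ, whose definition is in the paper.
import numpy as np
from scipy.ndimage import rotate

def intensity_histogram(img: np.ndarray, bins: int = 126) -> np.ndarray:
    """Toy orientation-free feature: a global intensity histogram. The bin
    count 126 mirrors TriZ's feature dimensionality but nothing else."""
    return np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)[0]

img = np.random.rand(128, 128)
f0 = intensity_histogram(img)
f90 = intensity_histogram(rotate(img, 90, reshape=False, mode="nearest"))
print("max deviation under 90-degree rotation:", np.abs(f0 - f90).max())
```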
Affiliation(s)
- Ruixue Zhao
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Ruochi Zhang
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Tongyu Tang
- First Hospital, Jilin University, Changchun, Jilin, 130012, China
| | - Xin Feng
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Jialiang Li
- College of Software, Jilin University, Changchun, Jilin, 130012, China
| | - Yue Liu
- College of Communication Engineering, Jilin University, Changchun, Jilin, 130012, China
| | - Renxiang Zhu
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Guangze Wang
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Kangning Li
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Wenyang Zhou
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Yunfei Yang
- College of Software, Jilin University, Changchun, Jilin, 130012, China
| | - Yuzhao Wang
- College of Software, Jilin University, Changchun, Jilin, 130012, China
| | - Yuanjie Ba
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Jiaojiao Zhang
- College of Software, Jilin University, Changchun, Jilin, 130012, China
| | - Yang Liu
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
| | - Fengfeng Zhou
- College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China.
| |
|
176
|
Visentini-Scarzanella M, Kawasaki H, Furukawa R, Bonino MA, Arolfo S, Lo Secco G, Arezzo A, Menciassi A, Dario P, Ciuti G. A structured light laser probe for gastrointestinal polyp size measurement: a preliminary comparative study. Endosc Int Open 2018; 6:E602-E609. [PMID: 29756018 PMCID: PMC5943691 DOI: 10.1055/a-0577-2798] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Accepted: 01/25/2018] [Indexed: 12/18/2022] Open
Abstract
BACKGROUND AND STUDY AIMS Polyp size measurement is an important diagnostic step during gastrointestinal endoscopy and is mainly performed by visual inspection. However, lack of depth perception and of objective reference points are acknowledged factors contributing to errors in polyp size measurement. In this paper, we describe a proof-of-concept polyp measurement device based on structured light technology for future endoscopes. PATIENTS AND METHODS Measurement accuracy, time, user confidence, and satisfaction were evaluated for polyp size assessment by (a) visual inspection, (b) open biopsy forceps of known size, (c) ruled snare, and (d) the structured light probe, for a total of 392 independent polyp measurements in ex vivo porcine stomachs. RESULTS Visual assessment resulted in a median estimation error of 2.2 mm (IQR = 2.6 mm). The proposed probe reduced the error to 1.5 mm (IQR = 1.67 mm; P = 0.002, 95% CI), and its performance was statistically similar to using reference forceps (P = 0.81, 95% CI) or a ruled snare (P = 0.99, 95% CI), while not occluding the tool channel. Timing with the probe averaged 54.75 seconds per polyp, significantly slower than visual assessment (20.7 seconds per polyp, P = 0.005, 95% CI) but not significantly different from using a snare (68.5 seconds per polyp, P = 0.73, 95% CI); the probe's timing was partly due to lens cleaning problems in our preliminary design. Reported average satisfaction on a 0-10 scale was highest for the proposed probe (7.92), visual assessment (7.01), and reference forceps (7.82), and significantly lower for snare users at 4.42 (P = 0.035, 95% CI). CONCLUSIONS The common practice of visual assessment of polyp size was significantly less accurate than tool-based assessment, but easy to carry out. The proposed technology offers accuracy on par with a reference tool or ruled snare, with the same satisfaction levels as visual assessment and without occluding the tool channel. Further study will improve the design to reduce operating time by integrating the probe within the scope tip.
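A minimal sketch of the summary statistics reported above (median error, IQR, and a nonparametric two-sample comparison), assuming NumPy/SciPy; the error arrays are synthetic and the paper's exact statistical test is not stated in the abstract.

```python
# Median/IQR summaries and a nonparametric comparison of two measurement
# methods; Mann-Whitney U is an assumption, not necessarily the paper's test.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
visual_err = rng.gamma(2.0, 1.1, size=98)   # placeholder errors (mm)
probe_err = rng.gamma(2.0, 0.75, size=98)

for name, e in [("visual", visual_err), ("probe", probe_err)]:
    q1, med, q3 = np.percentile(e, [25, 50, 75])
    print(f"{name}: median={med:.2f} mm, IQR={q3 - q1:.2f} mm")

u, p = mannwhitneyu(visual_err, probe_err)
print(f"Mann-Whitney U p={p:.3f}")
```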
Affiliation(s)
| | - Hiroshi Kawasaki
- Department of Advanced Information Technology, Kyushu University, Japan
| | - Ryo Furukawa
- Department of Intelligent Systems, Hiroshima City University, Japan
| | | | - Simone Arolfo
- Department of Surgical Sciences, University of Torino, Turin, Italy
| | - Giacomo Lo Secco
- Department of Surgical Sciences, University of Torino, Turin, Italy
| | - Alberto Arezzo
- Department of Surgical Sciences, University of Torino, Turin, Italy
| | | | - Paolo Dario
- The BioRobotics Institute, Scuola Superiore SantʼAnna, Pisa, Italy
| | - Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore SantʼAnna, Pisa, Italy. Corresponding author: Gastone Ciuti, The BioRobotics Institute, Scuola Superiore SantʼAnna, viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy; +39-050-883497
| |
|
177
|
Brandao P, Zisimopoulos O, Mazomenos E, Ciuti G, Bernal J, Visentini-Scarzanella M, Menciassi A, Dario P, Koulaouzidis A, Arezzo A, Hawkes DJ, Stoyanov D. Towards a Computed-Aided Diagnosis System in Colonoscopy: Automatic Polyp Segmentation Using Convolution Neural Networks. J Med Robot Res 2018. [DOI: 10.1142/s2424905x18400020] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Early diagnosis is essential for the successful treatment of bowel cancers including colorectal cancer (CRC), and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep-learning-based detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolutional architectures, such as VGG and ResNets, by converting them into fully convolutional networks (FCNs), fine-tune them, and study their capabilities for polyp segmentation and detection. We additionally use shape-from-shading (SfS) to recover depth and provide a richer representation of the tissue's structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information, and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets, and the most accurate segmentation model achieved a mean segmentation intersection over union (IoU) of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, our top-performing models surpass the current state of the art with detection recalls above 90% for all datasets tested. To our knowledge, this is the first work to use FCNs for polyp segmentation, in addition to proposing a novel combination of SfS and RGB that boosts performance.
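A minimal sketch of the two technical ingredients named above: the IoU segmentation metric, and depth concatenated to RGB as a fourth input channel; the one-layer head is illustrative, not the paper's VGG/ResNet-derived FCNs.

```python
# IoU metric plus RGB-D (4-channel) input, assuming PyTorch.
import torch
import torch.nn as nn

def iou(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """Intersection over union of two boolean masks."""
    inter = (pred & gt).sum().item()
    union = (pred | gt).sum().item()
    return inter / union if union else 1.0

rgb = torch.randn(1, 3, 256, 256)
depth = torch.randn(1, 1, 256, 256)        # e.g. recovered by shape-from-shading
rgbd = torch.cat([rgb, depth], dim=1)      # depth as a fourth input channel

head = nn.Conv2d(4, 1, kernel_size=1)      # stand-in for a full FCN over RGB-D
pred = head(rgbd).sigmoid() > 0.5
gt = torch.zeros_like(pred)                # placeholder ground-truth mask
print("IoU:", iou(pred, gt))
```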
Affiliation(s)
- Patrick Brandao
- Centre for Medical Image Computing, University College London, London, UK
| | | | | | - Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
| | - Jorge Bernal
- Department of Computer Science, Universitat Autònoma de Barcelona, Barcelona, Spain
| | | | | | - Paolo Dario
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
| | | | - Alberto Arezzo
- Department of Surgical Sciences, University of Turin, Turin, Italy
| | - David J Hawkes
- Centre for Medical Image Computing, University College London, London, UK
| | - Danail Stoyanov
- Centre for Medical Image Computing, University College London, London, UK
| |
|
178
|
Gastrointestinal polyp detection in endoscopic images using an improved feature extraction method. Biomed Eng Lett 2017; 8:69-75. [PMID: 30603191 DOI: 10.1007/s13534-017-0048-x] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2017] [Revised: 07/22/2017] [Accepted: 08/28/2017] [Indexed: 12/18/2022] Open
Abstract
Gastrointestinal polyps are treated as precursors of cancer development, so the possibility of cancer can be reduced to a great extent by early detection and removal of polyps. The most used diagnostic modality for gastrointestinal polyps is video endoscopy, but as an operator-dependent procedure, several human factors can lead to missed polyps. In this paper, an improved computer-aided polyp detection method is proposed. The proposed method can reduce the polyp miss rate and assist doctors in finding the regions that most deserve attention. Color wavelet features and convolutional neural network features are extracted from endoscopic images and used to train a support vector machine. A target endoscopic image is then given to the classifier as input to determine whether it contains a polyp; if a polyp is found, it is marked automatically. Experiments show that color wavelet features and convolutional neural network features together construct a highly representative description of endoscopic polyp images. Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, achieving accuracy of 98.34%, sensitivity of 98.67%, and specificity of 98.23%. In this paper, the strength of color wavelet features and the power of convolutional neural network features are combined; the fusion of these two methodologies with a support vector machine results in an improved method for gastrointestinal polyp detection. An analysis of the ROC curve reveals that the proposed method can be used for polyp detection with greater accuracy than state-of-the-art methods.
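A minimal sketch of the described fusion, assuming PyWavelets, PyTorch, and scikit-learn: color-wavelet statistics and pooled CNN activations are concatenated into one vector that trains an SVM; the wavelet choice and backbone are illustrative, not the paper's exact configuration.

```python
# Color-wavelet + CNN feature fusion feeding an SVM.
import numpy as np
import pywt
import torch
from torchvision import models
from sklearn.svm import SVC

# Untrained backbone keeps the sketch self-contained; a pretrained or
# fine-tuned network would be used in practice.
cnn = models.resnet18(weights=None).eval()
backbone = torch.nn.Sequential(*list(cnn.children())[:-1])  # drop the fc layer

def fused_feature(img_rgb: np.ndarray) -> np.ndarray:
    # Color-wavelet part: mean detail-coefficient magnitudes per channel.
    cw = []
    for c in range(3):
        _, (h, v, d) = pywt.dwt2(img_rgb[..., c], "db1")
        cw += [np.abs(h).mean(), np.abs(v).mean(), np.abs(d).mean()]
    # CNN part: 512-dim globally pooled activations.
    x = torch.from_numpy(img_rgb).permute(2, 0, 1).float().unsqueeze(0)
    with torch.no_grad():
        deep = backbone(x).flatten().numpy()
    return np.concatenate([np.array(cw), deep])

X = np.stack([fused_feature(np.random.rand(224, 224, 3)) for _ in range(20)])
y = np.random.randint(0, 2, 20)
clf = SVC(kernel="linear").fit(X, y)  # the SVM makes the final polyp call
```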
|
179
|
An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features. Int J Biomed Imaging 2017; 2017:9545920. [PMID: 28894460 PMCID: PMC5574296 DOI: 10.1155/2017/9545920] [Citation(s) in RCA: 71] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2017] [Accepted: 07/12/2017] [Indexed: 02/08/2023] Open
Abstract
Gastrointestinal polyps are considered precursors of cancer development in most cases; therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most used diagnostic modality for gastrointestinal polyps, but because it is an operator-dependent procedure, several human factors can lead to missed polyps. Computer-aided polyp detection can reduce the polyp miss rate and assist doctors in finding the regions that most deserve attention. In this paper, an automatic system is proposed to support gastrointestinal polyp detection. The system captures frames from the endoscopic video stream and, in the output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, achieving accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.
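Since the same accuracy/sensitivity/specificity triple is reported here, a short sketch of how those metrics fall out of a binary confusion matrix, with synthetic predictions standing in for the classifier's output:

```python
# Accuracy, sensitivity, and specificity from a binary confusion matrix,
# assuming scikit-learn; labels and predictions are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_pred = y_true ^ (rng.random(1000) < 0.015).astype(int)  # ~1.5% label flips

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy   : {(tp + tn) / (tp + tn + fp + fn):.4f}")
print(f"sensitivity: {tp / (tp + fn):.4f}")
print(f"specificity: {tn / (tn + fp):.4f}")
```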
|
180
|
Vázquez D, Bernal J, Sánchez FJ, Fernández-Esparrach G, López AM, Romero A, Drozdzal M, Courville A. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images. J Healthc Eng 2017; 2017:4037190. [PMID: 29065595 PMCID: PMC5549472 DOI: 10.1155/2017/4037190] [Citation(s) in RCA: 165] [Impact Index Per Article: 20.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2017] [Accepted: 05/22/2017] [Indexed: 01/08/2023]
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reducing CRC-related mortality is regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. In this paper, we therefore introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes for inspecting the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study showing that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
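A minimal sketch of per-class IoU evaluation for a 4-class endoluminal scene labeling, with synthetic label maps standing in for FCN predictions; the benchmark's actual class definitions are in the paper.

```python
# Per-class intersection over union for a multi-class segmentation map.
import numpy as np

NUM_CLASSES = 4  # the benchmark defines 4 endoluminal scene classes
gt = np.random.randint(0, NUM_CLASSES, (256, 256))      # placeholder labels
pred = np.random.randint(0, NUM_CLASSES, (256, 256))    # placeholder predictions

for c in range(NUM_CLASSES):
    inter = np.logical_and(pred == c, gt == c).sum()
    union = np.logical_or(pred == c, gt == c).sum()
    score = inter / union if union else float("nan")
    print(f"class {c}: IoU = {score:.3f}")
```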
Affiliation(s)
- David Vázquez
- Computer Vision Center, Computer Science Department, Universitat Autonoma de Barcelona, Barcelona, Spain
- Montreal Institute for Learning Algorithms, Université de Montréal, Montreal, QC, Canada
| | - Jorge Bernal
- Computer Vision Center, Computer Science Department, Universitat Autonoma de Barcelona, Barcelona, Spain
| | - F. Javier Sánchez
- Computer Vision Center, Computer Science Department, Universitat Autonoma de Barcelona, Barcelona, Spain
| | - Gloria Fernández-Esparrach
- Endoscopy Unit, Gastroenterology Service, CIBERHED, IDIBAPS, Hospital Clinic, Universidad de Barcelona, Barcelona, Spain
| | - Antonio M. López
- Computer Vision Center, Computer Science Department, Universitat Autonoma de Barcelona, Barcelona, Spain
- Montreal Institute for Learning Algorithms, Université de Montréal, Montreal, QC, Canada
| | - Adriana Romero
- Montreal Institute for Learning Algorithms, Université de Montréal, Montreal, QC, Canada
| | - Michal Drozdzal
- École Polytechnique de Montréal, Montréal, QC, Canada
- Imagia Inc., Montréal, QC, Canada
| | - Aaron Courville
- Montreal Institute for Learning Algorithms, Université de Montréal, Montreal, QC, Canada
| |
|
181
|
Towards Real-Time Polyp Detection in Colonoscopy Videos: Adapting Still Frame-Based Methodologies for Video Sequences Analysis. Lect Notes Comput Sci 2017. [DOI: 10.1007/978-3-319-67543-5_3] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|