201
Chuquimia O, Pinna A, Dray X, Granado B. A Low Power and Real-Time Architecture for Hough Transform Processing Integration in a Full HD-Wireless Capsule Endoscopy. IEEE Transactions on Biomedical Circuits and Systems 2020; 14:646-657. PMID: 32746352. DOI: 10.1109/tbcas.2020.3008458.
Abstract
We propose a new paradigm of a smart wireless endoscopic capsule (WCE) that can select suspicious images containing a polyp before sending them outside the body. To do so, we have designed an image processing system that selects images with Regions Of Interest (ROI) containing a polyp. The selection criterion is based on the polyp's shape: we use the Hough Transform (HT), a widely used shape-based algorithm for object detection and localization. In this paper, we present a new algorithm to compute the Hough Transform of high-definition images (1920 x 1080 pixels) in real time. The algorithm is designed for integration inside a WCE, which imposes specific constraints: a limited area and a limited amount of energy. To validate the algorithm, we performed tests on a dataset containing synthetic images, real images, and endoscopic images with polyps. Results show that the algorithm can detect circular shapes in synthetic and real images, and can also detect circles with an irregular contour, like that of a polyp. We implemented the architecture and validated it on a Xilinx Spartan 7 FPGA device, with an area of [Formula: see text], which is compatible with integration inside a WCE. This architecture runs at 132 MHz with an estimated power consumption of 76 mW and can operate for close to 10 hours. To improve on this, we also made an ASIC estimation, which lets the architecture run at 125 MHz with a power consumption of only 17.2 mW and an operating time of approximately 50 hours.
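The circle-detection step this paper accelerates in hardware can be sketched in a few lines of software. Below is a minimal, illustrative single-radius Hough transform in NumPy (not the authors' architecture, which processes HD frames under area and power constraints): each edge pixel votes for every candidate centre lying a fixed radius away, and the accumulator peak is taken as the detected centre.

```python
import numpy as np

def hough_circle(edges, radius):
    """Vote for circle centres of a fixed radius from a binary edge map.

    Each edge pixel votes for every candidate centre `radius` pixels
    away from it; the accumulator cell with the most votes is returned
    as the detected centre (row, col).
    """
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # accumulate repeated votes
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic test image: a circle of radius 8 centred at (20, 25).
img = np.zeros((40, 50), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 200)
img[np.round(20 + 8 * np.sin(t)).astype(int),
    np.round(25 + 8 * np.cos(t)).astype(int)] = 1
print(hough_circle(img, radius=8))  # peak lands at or next to (20, 25)
```

Because polyp contours are only approximately circular, real detectors accumulate votes over a range of radii and accept broad accumulator peaks rather than a single sharp maximum.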
202
Wang W, Tian J, Zhang C, Luo Y, Wang X, Li J. An improved deep learning approach and its applications on colonic polyp images detection. BMC Med Imaging 2020; 20:83. PMID: 32698839. PMCID: PMC7374886. DOI: 10.1186/s12880-020-00482-3.
Abstract
Background: Colonic polyps carry a risk of becoming cancerous, especially those of large diameter, in large numbers, or with atypical hyperplasia. If colonic polyps are not treated at an early stage, they are likely to develop into colon cancer. Colonoscopy is limited by the operator's experience; factors such as inexperience and visual fatigue directly affect diagnostic accuracy. In cooperation with Hunan Children's Hospital, we proposed an improved deep learning approach with global average pooling (GAP) for assisted diagnosis in colonoscopy. The approach can prompt endoscopists in real time to attend to polyps that might otherwise be missed, improving the detection rate, reducing missed diagnoses, and improving the efficiency of medical diagnosis. Methods: We selected colonoscopy images from the gastrointestinal endoscopy room of Hunan Children's Hospital to form the colonic polyp datasets, and applied deep-learning-based image classification to them. The classic networks we used are VGGNets and ResNets; by adding global average pooling, we derived the improved variants VGGNets-GAP and ResNets-GAP. Results: The accuracies of all models on the datasets exceed 98%, and the TPR and TNR are above 96% and 98%, respectively. In addition, the VGGNets-GAP networks not only achieve high classification accuracy but also have far fewer parameters than the original VGGNets. Conclusions: The experimental results show that the proposed approach is effective for the automatic detection of colonic polyps. Its innovations are twofold: (1) the detection accuracy for colonic polyps is improved, and (2) memory consumption is reduced, making the model lightweight. Compared with the original VGG networks, the parameters of our VGG19-GAP network are greatly reduced.
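The parameter saving that global average pooling brings is easy to see with back-of-the-envelope arithmetic. The numbers below assume a VGG16-style head with a 7x7x512 final feature map and a 2-class output; they are illustrative, not the paper's exact configuration (bias terms omitted for brevity).

```python
# VGG16-style head: flatten the 7x7x512 feature map into two 4096-wide
# fully connected layers, then a 2-way classifier.
fc_head = 7 * 7 * 512 * 4096 + 4096 * 4096 + 4096 * 2

# GAP head: average each of the 512 channels down to one scalar, then
# feed the resulting 512-vector straight into the 2-way classifier.
gap_head = 512 * 2

print(f"FC head:  {fc_head:,} weights")   # ~120 million
print(f"GAP head: {gap_head:,} weights")  # ~1 thousand
```

Almost all of a VGG network's weights sit in the fully connected head, which is why swapping it for GAP makes the model dramatically lighter without touching the convolutional feature extractor.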
Affiliation(s)
- Wei Wang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Jinge Tian
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Chengwen Zhang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Yanhong Luo
- Hunan Children's Hospital, Changsha, 410000, China
- Xin Wang
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
- Ji Li
- School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
203
Thambawita V, Jha D, Hammer HL, Johansen HD, Johansen D, Halvorsen P, Riegler MA. An Extensive Study on Cross-Dataset Bias and Evaluation Metrics Interpretation for Machine Learning Applied to Gastrointestinal Tract Abnormality Classification. ACM Transactions on Computing for Healthcare 2020. DOI: 10.1145/3386295.
Abstract
Precise and efficient automated identification of gastrointestinal (GI) tract diseases can help doctors treat more patients and improve the rate of disease detection and identification. Currently, automatic analysis of diseases in the GI tract is a hot topic in both computer science and medical journals. Nevertheless, the evaluation of such automatic analysis is often incomplete or simply wrong: algorithms are often tested only on small and biased datasets, and cross-dataset evaluations are rarely performed. A clear understanding of evaluation metrics and of how machine learning models behave across datasets is crucial to bring research in the field to a new quality level. Toward this goal, we present comprehensive evaluations of five distinct machine learning models, using global features and deep neural networks, that can classify 16 key types of GI tract conditions, including pathological findings, anatomical landmarks, polyp removal conditions, and normal findings from images captured by common GI tract examination instruments. In our evaluation, we introduce performance hexagons built from six performance metrics (recall, precision, specificity, accuracy, F1-score, and the Matthews correlation coefficient) to demonstrate how to determine the real capabilities of models rather than evaluating them shallowly. Furthermore, we perform cross-dataset evaluations using different datasets for training and testing. With these cross-dataset evaluations, we demonstrate the challenge of actually building a generalizable model that could be used across different hospitals. Our experiments clearly show that more sophisticated performance metrics and evaluation methods are needed to obtain reliable models, rather than depending on evaluations of splits of the same dataset; that is, the performance metrics should always be interpreted together rather than relying on a single metric.
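All six metrics of such a performance hexagon derive from the same four confusion-matrix counts, which is why reading them together matters: a skewed dataset can inflate one metric while another collapses. A minimal sketch, with illustrative numbers not taken from the paper:

```python
import math

def hexagon_metrics(tp, fp, tn, fn):
    """The six 'performance hexagon' metrics from raw confusion counts."""
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "recall":      tp / (tp + fn),
        "precision":   tp / (tp + fp),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "f1":          2 * tp / (2 * tp + fp + fn),
        "mcc":         (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0,
    }

# A classifier can look strong on accuracy alone yet have a weak MCC
# under class imbalance -- the point of interpreting metrics together.
m = hexagon_metrics(tp=5, fp=15, tn=940, fn=40)
print(f"accuracy = {m['accuracy']:.3f}")  # high: dominated by the majority class
print(f"mcc      = {m['mcc']:.3f}")       # low: the positives are mostly missed
```

Here accuracy is 0.945 while recall is about 0.11 and the MCC is about 0.14, a pattern a single-metric evaluation would hide.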
Affiliation(s)
- Debesh Jha
- SimulaMet and UiT—The Arctic University of Norway, Tromsø, Norway
- Dag Johansen
- UiT—The Arctic University of Norway, Tromsø, Norway
- Pål Halvorsen
- SimulaMet and Oslo Metropolitan University, Oslo, Norway
204
Affiliation(s)
- Mohammad Bilal
- Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Jeremy R Glissen Brown
- Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Tyler M Berzin
- Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
205
Yang Q, Guo Y, Ou X, Wang J, Hu C. Automatic T Staging Using Weakly Supervised Deep Learning for Nasopharyngeal Carcinoma on MR Images. J Magn Reson Imaging 2020; 52:1074-1082. PMID: 32583578. DOI: 10.1002/jmri.27202.
Abstract
BACKGROUND: Recent studies have shown that deep learning can automate tumor staging. However, automatic nasopharyngeal carcinoma (NPC) staging is difficult due to the lack of large, slice-level annotated datasets. PURPOSE: To develop a weakly supervised deep-learning method to predict the T stage of NPC patients without additional annotations. STUDY TYPE: Retrospective. POPULATION/SUBJECTS: In all, 1138 NPC cases from 2010 to 2012 were enrolled, comprising a training set (n = 712) and a validation set (n = 426). FIELD STRENGTH/SEQUENCE: 1.5T; T1-weighted images (T1WI), T2-weighted images (T2WI), and contrast-enhanced T1-weighted images (CE-T1WI). ASSESSMENT: We used a weakly supervised deep-learning network to achieve automated T staging of NPC; T refers to the size and extent of the main tumor. The training set was used to construct the deep-learning model, and the performance of the automated T staging model was evaluated on the validation set. Accuracy was assessed with receiver operating characteristic (ROC) curves. To further assess the deep-learning-based T score, analyses of progression-free survival (PFS) and overall survival (OS) were performed. STATISTICAL TESTS: The Sklearn package in Python was used to calculate the area under the ROC curve (AUC); the survcomp package was used to compute and compare C-indexes; SPSS was used for survival analysis and chi-square tests. RESULTS: The accuracy of the deep-learning model was 75.59% in the validation set, and the average AUC across stages was 0.943. There were no significant differences between the C-indexes of PFS and OS from the deep-learning model and those from TNM staging (P = 0.301 and 0.425, respectively). DATA CONCLUSION: This weakly supervised deep-learning approach can perform fully automated T staging of NPC and achieve good prognostic performance.
LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.
Affiliation(s)
- Qing Yang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Ying Guo
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Xiaomin Ou
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Chaosu Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
206
Guo X, Yuan Y. Semi-supervised WCE image classification with adaptive aggregated attention. Med Image Anal 2020; 64:101733. PMID: 32574987. DOI: 10.1016/j.media.2020.101733.
Abstract
Accurate abnormality classification in Wireless Capsule Endoscopy (WCE) images is crucial for early gastrointestinal (GI) tract cancer diagnosis and treatment, while it remains challenging due to the limited annotated dataset, the huge intra-class variances and the high degree of inter-class similarities. To tackle these dilemmas, we propose a novel semi-supervised learning method with Adaptive Aggregated Attention (AAA) module for automatic WCE image classification. Firstly, a novel deformation field based image preprocessing strategy is proposed to remove the black background and circular boundaries in WCE images. Then we propose a synergic network to learn discriminative image features, consisting of two branches: an abnormal regions estimator (the first branch) and an abnormal information distiller (the second branch). The first branch utilizes the proposed AAA module to capture global dependencies and incorporate context information to highlight the most meaningful regions, while the second branch mainly focuses on these calculated attention regions for accurate and robust abnormality classification. Finally, these two branches are jointly optimized by minimizing the proposed discriminative angular (DA) loss and Jensen-Shannon divergence (JS) loss with labeled data as well as unlabeled data. Comprehensive experiments have been conducted on the public CAD-CAP WCE dataset. The proposed method achieves 93.17% overall accuracy in a fourfold cross-validation, verifying its effectiveness for WCE image classification. The source code is available at https://github.com/Guo-Xiaoqing/SSL_WCE.
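A Jensen-Shannon term of the kind mentioned above can be computed directly from the two branches' predicted class distributions, including on unlabeled images. The sketch below is a generic JS-divergence consistency loss in NumPy, not necessarily the authors' exact formulation:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    JS(p, q) = 0.5*KL(p||m) + 0.5*KL(q||m) with m = (p + q)/2. It is
    symmetric and bounded by ln 2, which makes it a stable consistency
    penalty between two branches' predictions.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Agreement between branches costs nothing; disagreement is penalised.
print(js_divergence([0.7, 0.2, 0.1], [0.7, 0.2, 0.1]))          # 0.0
print(round(js_divergence([0.9, 0.05, 0.05], [0.1, 0.1, 0.8]), 3))
```

Minimizing this term pushes the two branches toward consistent predictions, which is what lets unlabeled data contribute to training in a semi-supervised setup.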
Affiliation(s)
- Xiaoqing Guo
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Yixuan Yuan
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
207
Mostafiz R, Rahman MM, Uddin MS. Gastrointestinal polyp classification through empirical mode decomposition and neural features. SN Applied Sciences 2020. DOI: 10.1007/s42452-020-2944-4.
208
Real-time detection of colon polyps during colonoscopy using deep learning: systematic validation with four independent datasets. Sci Rep 2020; 10:8379. PMID: 32433506. PMCID: PMC7239848. DOI: 10.1038/s41598-020-65387-1.
Abstract
We developed and validated a deep-learning algorithm for polyp detection. We used YOLOv2 to develop the algorithm for automatic polyp detection on 8,075 images (503 polyps). We validated the algorithm using three datasets: A, 1,338 images with 1,349 polyps; B, the open, public CVC-Clinic database with 612 polyp images; and C, 7 colonoscopy videos with 26 polyps. To reduce the number of false positives in the video analysis, median filtering was applied. We then tested the algorithm's performance on 15 unaltered colonoscopy videos (dataset D). For datasets A and B, the per-image polyp detection sensitivity was 96.7% and 90.2%, respectively. For the video study (dataset C), the per-image sensitivity was 87.7%; false positive rates were 12.5% without a median filter and 6.3% with a median filter with a window size of 13. For dataset D, the sensitivity and false positive rate were 89.3% and 8.3%, respectively. The algorithm detected all 38 polyps that the endoscopists detected, plus 7 additional polyps, at an operation speed of 67.16 frames per second. The automatic polyp detection algorithm exhibited good performance, as evidenced by its high detection sensitivity and rapid processing, and may help endoscopists improve polyp detection.
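Median filtering of per-frame detection scores can be sketched as follows. The window size of 13 matches the paper; the toy scores and the 0.5 threshold are illustrative assumptions:

```python
import numpy as np

def smooth_detections(confidences, window=13, threshold=0.5):
    """Temporally smooth per-frame detection scores with a median filter.

    An isolated single-frame spike (a likely false positive) is removed
    because the median of its surrounding window stays low, while a
    detection sustained over many frames survives. Frames near the ends
    of the video use a truncated window.
    """
    scores = np.asarray(confidences, dtype=float)
    half = window // 2
    smoothed = np.array([
        np.median(scores[max(0, i - half):i + half + 1])
        for i in range(len(scores))
    ])
    return smoothed >= threshold

# One isolated spike (frame 5) vs. a sustained run (frames 20-34).
s = np.zeros(40)
s[5] = 0.9       # single-frame flicker: should be suppressed
s[20:35] = 0.9   # sustained detection: should survive
flags = smooth_detections(s, window=13)
print(flags[5], flags[27])  # False True
```

This is the usual trade-off of temporal smoothing: flicker-like false positives are suppressed at the cost of a short delay before a genuine detection is confirmed.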
209
Poon CCY, Jiang Y, Zhang R, Lo WWY, Cheung MSH, Yu R, Zheng Y, Wong JCT, Liu Q, Wong SH, Mak TWC, Lau JYW. AI-doscopist: a real-time deep-learning-based algorithm for localising polyps in colonoscopy videos with edge computing devices. NPJ Digit Med 2020; 3:73. PMID: 32435701. PMCID: PMC7235017. DOI: 10.1038/s41746-020-0281-z.
Abstract
We have designed a deep-learning model, an "Artificial Intelligent Endoscopist (a.k.a. AI-doscopist)", to localise colonic neoplasia during colonoscopy. This study aims to evaluate the agreement between endoscopists and AI-doscopist on colorectal neoplasm localisation. AI-doscopist was pre-trained on 1.2 million non-medical images and fine-tuned on 291,090 colonoscopy and non-medical images. The colonoscopy images were obtained from six databases, classified into 13 categories, with each polyp's location marked image-by-image using the smallest bounding box. Seven categories of non-medical images, believed to share common features with colorectal polyps, were downloaded from an online search engine. Written informed consent was obtained from 144 patients who underwent colonoscopy, and their full colonoscopy videos were prospectively recorded for evaluation. A total of 128 suspicious lesions were resected or biopsied for histological confirmation. Evaluated image-by-image on the 144 full colonoscopies, the specificity of AI-doscopist was 93.3%, and it localised 124 of 128 polyps (polyp-based sensitivity = 96.9%). Furthermore, after reviewing the suspected regions highlighted by AI-doscopist in a 102-patient cohort, an endoscopist recognized with high confidence four missed polyps in three patients who had not been diagnosed with any lesion during their original colonoscopies. In summary, AI-doscopist localised 96.9% of the polyps resected by the endoscopists. If used in real time, it could potentially help endoscopists detect one more patient with a polyp in every 20-33 colonoscopies.
Affiliation(s)
- Carmen C. Y. Poon
- Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Yuqi Jiang
- Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Ruikai Zhang
- Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Winnie W. Y. Lo
- Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Maggie S. H. Cheung
- Division of Vascular and General Surgery, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Ruoxi Yu
- Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Yali Zheng
- Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, People’s Republic of China
- John C. T. Wong
- Division of Gastroenterology and Hepatology, Department of Medicine and Therapeutics, Institute of Digestive Disease, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Qing Liu
- Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou, People’s Republic of China
- Sunny H. Wong
- Division of Gastroenterology and Hepatology, Department of Medicine and Therapeutics, Institute of Digestive Disease, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Tony W. C. Mak
- Division of Colorectal Surgery, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- James Y. W. Lau
- Division of Vascular and General Surgery, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
210
Biniaz A, Zoroofi RA, Sohrabi MR. Automatic reduction of wireless capsule endoscopy reviewing time based on factorization analysis. Biomed Signal Process Control 2020. DOI: 10.1016/j.bspc.2020.101897.
211
Hoogenboom SA, Bagci U, Wallace MB. Artificial intelligence in gastroenterology. The current state of play and the potential. How will it affect our practice and when? Techniques and Innovations in Gastrointestinal Endoscopy 2020. DOI: 10.1016/j.tgie.2019.150634.
212
Zachariah R, Ninh A, Karnes W. Artificial intelligence for colon polyp detection: Why should we embrace this? Techniques and Innovations in Gastrointestinal Endoscopy 2020. DOI: 10.1016/j.tgie.2019.150631.
213
Abadir AP, Ali MF, Karnes W, Samarasena JB. Artificial Intelligence in Gastrointestinal Endoscopy. Clin Endosc 2020; 53:132-141. PMID: 32252506. PMCID: PMC7137570. DOI: 10.5946/ce.2020.038.
Abstract
Artificial intelligence (AI) is rapidly integrating into modern technology and clinical practice. Although still in its infancy, AI has become a topic of intense investigation for clinical applications. Multiple fields of medicine have embraced the possibility of a future in which AI assists in diagnosis and pathology applications. In gastroenterology, AI has been studied as a tool to assist in risk stratification, diagnosis, and pathologic identification. AI has become of particular interest in endoscopy as a technology with substantial potential to revolutionize the practice of the modern gastroenterologist. From cancer screening to automated report generation, AI has touched upon all aspects of modern endoscopy. Here, we review landmark AI developments in endoscopy. Starting with broad definitions to build understanding, we summarize the current state of AI research and its potential applications. With innovation developing rapidly, this article covers the remarkable advances in AI-assisted endoscopy since its initial evaluation at the turn of the millennium and the potential impact these models may have on modern clinical practice. As with any new technology, its limitations must also be understood for clinical AI tools to be applied successfully.
Affiliation(s)
- Alexander P Abadir
- Department of Medicine, University of California Irvine, Orange, CA, USA
- Mohammed Fahad Ali
- Department of Medicine, University of California Irvine, Orange, CA, USA
- William Karnes
- Division of Gastroenterology & Hepatology, Department of Medicine, H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine, Orange, CA, USA
- Jason B Samarasena
- Division of Gastroenterology & Hepatology, Department of Medicine, H. H. Chao Comprehensive Digestive Disease Center, University of California Irvine, Orange, CA, USA
214
Hoerter N, Gross SA, Liang PS. Artificial Intelligence and Polyp Detection. Current Treatment Options in Gastroenterology 2020; 18:120-136. PMID: 31960282. PMCID: PMC7371513. DOI: 10.1007/s11938-020-00274-2.
Abstract
PURPOSE OF REVIEW This review highlights the history, recent advances, and ongoing challenges of artificial intelligence (AI) technology in colonic polyp detection. RECENT FINDINGS Hand-crafted AI algorithms have recently given way to convolutional neural networks with the ability to detect polyps in real-time. The first randomized controlled trial comparing an AI system to standard colonoscopy found a 9% increase in adenoma detection rate, but the improvement was restricted to polyps smaller than 10 mm and the results need validation. As this field rapidly evolves, important issues to consider include standardization of outcomes, dataset availability, real-world applications, and regulatory approval. SUMMARY AI has shown great potential for improving colonic polyp detection while requiring minimal training for endoscopists. The question of when AI will enter endoscopic practice depends on whether the technology can be integrated into existing hardware and an assessment of its added value for patient care.
Affiliation(s)
- Peter S Liang
- NYU Langone Health, New York, NY, USA
- VA New York Harbor Health Care System, New York, NY, USA
215
Affiliation(s)
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
216
Ruano J, Barrera C, Bravo D, Gomez M, Romero E. Localization of Small Neoplastic Lesions in Colonoscopy by Estimating Edge, Texture and Motion Saliency. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2020; 2019:5945-5948. PMID: 31947202. DOI: 10.1109/embc.2019.8856864.
Abstract
Early screening for colorectal cancer consists of finding and removing small precancerous masses or neoplastic lesions developed from the mucosa, usually lesions smaller than 10 mm. Localizing small neoplastic lesions is very challenging, since colon exploration is highly dependent on expert training and colon preparation. Several strategies have attempted to locate neoplasias, but usually for large lesions that a trained gastroenterologist could hardly miss. This work presents a saliency-based strategy to localize polypoid and non-polypoid neoplastic lesions smaller than 10 mm in colonoscopy videos by combining spatio-temporal descriptors. A per-frame multi-scale representation is computed, and edge, texture, and motion features are extracted. Each of these features is used to construct a primary saliency map, and the maps are then combined into a coarse saliency map. Finally, the neoplasia is localized as the bounding box of the circular region, approximated by the Hough transform, with the largest salience. The proposed approach was evaluated on 8 short colonoscopy videos, obtaining an average Annotated Area Covered of 0.75 and a precision of 0.82.
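One simple way to fuse per-feature saliency maps into a coarse map, as described above, is to normalise each map and average them. This is an illustrative sketch; the paper's exact combination rule may differ:

```python
import numpy as np

def combine_saliency(maps, weights=None):
    """Fuse per-feature saliency maps into one coarse saliency map.

    Each map is min-max normalised to [0, 1] so that no single feature
    dominates, then the maps are averaged (optionally weighted).
    """
    normed = []
    for m in maps:
        m = np.asarray(m, dtype=float)
        span = m.max() - m.min()
        normed.append((m - m.min()) / span if span else np.zeros_like(m))
    return np.average(normed, axis=0, weights=weights)

# Stand-ins for the edge, texture, and motion saliency maps of a frame.
rng = np.random.default_rng(1)
edge, texture, motion = (rng.random((64, 64)) for _ in range(3))
coarse = combine_saliency([edge, texture, motion])
print(coarse.shape)  # (64, 64)
```

The coarse map would then be thresholded or peak-searched, with the Hough transform fitting a circular region around the most salient area.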
217
Jha D, Smedsrud PH, Riegler MA, Halvorsen P, de Lange T, Johansen D, Johansen HD. Kvasir-SEG: A Segmented Polyp Dataset. MultiMedia Modeling 2020. DOI: 10.1007/978-3-030-37734-2_37.
218
Deeba F, Bui FM, Wahid KA. Computer-aided polyp detection based on image enhancement and saliency-based selection. Biomed Signal Process Control 2020. DOI: 10.1016/j.bspc.2019.04.007.
219
Jia X, Xing X, Yuan Y, Xing L, Meng MQH. Wireless Capsule Endoscopy: A New Tool for Cancer Screening in the Colon With Deep-Learning-Based Polyp Recognition. Proceedings of the IEEE 2020; 108:178-197. DOI: 10.1109/jproc.2019.2950506.
220
Wickstrøm K, Kampffmeyer M, Jenssen R. Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps. Med Image Anal 2019; 60:101619. PMID: 31810005. DOI: 10.1016/j.media.2019.101619.
Abstract
Colorectal polyps are known to be potential precursors of colorectal cancer, one of the leading causes of cancer-related deaths globally. Early detection and prevention of colorectal cancer are primarily enabled through manual screenings, in which the intestines of a patient are visually examined. Such a procedure can be challenging and exhausting for the person performing the screening, which has motivated numerous studies on automatic systems aimed at supporting physicians during the examination. Recently, such systems have improved significantly as a result of the growing amount of publicly available colorectal imagery and advances in deep learning for image recognition. Specifically, decision support systems (DSSs) based on Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance on both detection and segmentation of colorectal polyps. However, to be helpful in a medical context, CNN-based models must not only be precise; the interpretability of, and uncertainty in, their predictions must also be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. Furthermore, we propose a novel method for estimating the uncertainty associated with important features in the input, and demonstrate how interpretability and uncertainty can be modeled in DSSs for semantic segmentation of colorectal polyps. Results indicate that deep models use the shape and edge information of polyps to make their predictions, and that inaccurate predictions show a higher degree of uncertainty than precise ones.
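A common way to obtain such per-pixel uncertainty (not necessarily the authors' exact estimator) is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and treat the per-pixel standard deviation as the uncertainty. The sketch below uses a toy stochastic predictor in place of a real segmentation network:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(predict_once, x, passes=200):
    """Monte Carlo dropout: the mean over stochastic forward passes is
    the prediction; the per-pixel standard deviation is the uncertainty."""
    preds = np.stack([predict_once(x) for _ in range(passes)])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy stand-in for a segmentation net with dropout left on at test time:
# its internal noise is largest near the ambiguous object boundary.
def toy_stochastic_net(x):
    noise = rng.normal(scale=x * (1.0 - x))
    return 1.0 / (1.0 + np.exp(-8.0 * (x + noise - 0.5)))

x = np.linspace(0.0, 1.0, 11)   # 1-D "image": 0 = background, 1 = polyp
mean, unc = mc_dropout_predict(toy_stochastic_net, x)
print(unc[0], unc[5])  # near-zero in the background, largest at the boundary
```

Exactly as the paper reports for real models, the uncertainty concentrates where the prediction is ambiguous, here at the simulated polyp boundary.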
Collapse
Affiliation(s)
- Kristoffer Wickstrøm
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway.
- Michael Kampffmeyer
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway
- Robert Jenssen
- Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø NO-9037, Norway
221
Sakai Y, Takemoto S, Hori K, Nishimura M, Ikematsu H, Yano T, Yokota H. Automatic detection of early gastric cancer in endoscopic images using a transferring convolutional neural network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:4138-4141. [PMID: 30441266 DOI: 10.1109/embc.2018.8513274] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Endoscopic image diagnosis assisted by machine learning is useful for reducing misdetection and interobserver variability. Although many results have been reported, few effective methods are available to automatically detect early gastric cancer. Early gastric cancer has poor morphological features, which implies that automatic detection methods can be extremely difficult to construct. In this study, we propose a convolutional neural network-based automatic detection scheme to assist the diagnosis of early gastric cancer in endoscopic images. We performed transfer learning using two classes (cancer and normal) of image datasets that retain detailed texture information on lesions, derived from a small number of annotated images. The accuracy of our trained network was 87.6%, and the sensitivity and specificity were well balanced, which is important for future practical use. We also succeeded in presenting candidate regions of early gastric cancer as a heat map of unknown images, with a detection accuracy of 82.8%. This suggests that our proposed scheme may offer substantial assistance to endoscopists in decision making.
222
Yamada M, Saito Y, Imaoka H, Saiko M, Yamada S, Kondo H, Takamaru H, Sakamoto T, Sese J, Kuchiba A, Shibata T, Hamamoto R. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci Rep 2019; 9:14465. [PMID: 31594962 PMCID: PMC6783454 DOI: 10.1038/s41598-019-50567-5] [Citation(s) in RCA: 135] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Accepted: 09/04/2019] [Indexed: 12/11/2022] Open
Abstract
Gaps in colonoscopy skills among endoscopists, primarily due to differences in experience, have been identified, and solutions are critically needed. Hence, the development of a real-time, robust detection system for colorectal neoplasms could significantly reduce the risk of missed lesions during colonoscopy. Here, we develop an artificial intelligence (AI) system that automatically detects early signs of colorectal cancer during colonoscopy; in the validation set, the AI system achieves a sensitivity of 97.3% (95% confidence interval [CI] = 95.9%–98.4%), a specificity of 99.0% (95% CI = 98.6%–99.2%), and an area under the curve of 0.975 (95% CI = 0.964–0.986). Moreover, the sensitivities are 98.0% (95% CI = 96.6%–98.8%) in the polypoid subgroup and 93.7% (95% CI = 87.6%–96.9%) in the non-polypoid subgroup. To accelerate detection, tensor metrics in the trained model were decomposed, allowing the system to predict cancerous regions in 21.9 ms per image on average. These findings suggest that the system is sufficient to support endoscopists in detecting non-polypoid lesions, which are frequently missed in optical colonoscopy. This AI system can alert endoscopists in real time to abnormalities such as non-polypoid lesions during colonoscopy, improving the early detection of this disease.
Affiliation(s)
- Masayoshi Yamada
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan; Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan
- Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Hitoshi Imaoka
- Biometrics Research Laboratories, NEC Corporation, Kanagawa, Japan
- Masahiro Saiko
- Biometrics Research Laboratories, NEC Corporation, Kanagawa, Japan
- Shigemi Yamada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan; Advanced Intelligence Project Center, RIKEN, Tokyo, Japan
- Hiroko Kondo
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan; Advanced Intelligence Project Center, RIKEN, Tokyo, Japan
- Taku Sakamoto
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Jun Sese
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
- Aya Kuchiba
- Biostatistics Division, National Cancer Center, Tokyo, Japan
- Taro Shibata
- Biostatistics Division, National Cancer Center, Tokyo, Japan
- Ryuji Hamamoto
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan; Advanced Intelligence Project Center, RIKEN, Tokyo, Japan
223
|
Wang P, Berzin TM, Glissen Brown JR, Bharadwaj S, Becq A, Xiao X, Liu P, Li L, Song Y, Zhang D, Li Y, Xu G, Tu M, Liu X. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: a prospective randomised controlled study. Gut 2019; 68:1813-1819. [PMID: 30814121 PMCID: PMC6839720 DOI: 10.1136/gutjnl-2018-317500] [Citation(s) in RCA: 520] [Impact Index Per Article: 86.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2018] [Revised: 02/04/2019] [Accepted: 02/13/2019] [Indexed: 12/12/2022]
Abstract
OBJECTIVE The effect of colonoscopy on colorectal cancer mortality is limited by several factors, among them a certain miss rate, leading to limited adenoma detection rates (ADRs). We investigated the effect of an automatic polyp detection system based on deep learning on the polyp detection rate and ADR. DESIGN In an open, non-blinded trial, consecutive patients were prospectively randomised to undergo diagnostic colonoscopy with or without the assistance of a real-time automatic polyp detection system providing a simultaneous visual notice and sound alarm on polyp detection. The primary outcome was ADR. RESULTS Of 1058 patients included, 536 were randomised to standard colonoscopy and 522 to colonoscopy with computer-aided diagnosis. The artificial intelligence (AI) system significantly increased the ADR (29.1% vs 20.3%, p<0.001) and the mean number of adenomas per patient (0.53 vs 0.31, p<0.001). This was due to a higher number of diminutive adenomas found (185 vs 102; p<0.001), while there was no statistical difference for larger adenomas (77 vs 58, p=0.075). In addition, the number of hyperplastic polyps was also significantly increased (114 vs 52, p<0.001). CONCLUSIONS In a low-ADR population, an automatic polyp detection system during colonoscopy resulted in a significant increase in the number of diminutive adenomas detected, as well as an increase in the rate of hyperplastic polyps. The cost-benefit ratio of such effects has to be determined further. TRIAL REGISTRATION NUMBER ChiCTR-DDD-17012221; Results.
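The significance of the reported ADR difference can be checked with a standard two-proportion z-test. The sketch below uses patient counts reconstructed approximately from the reported rates (152/522 and 109/536 are approximations, not figures quoted in the paper):

```python
import math

# Approximate counts reconstructed from the reported rates (29.1% of 522
# CADe-arm patients vs 20.3% of 536 control patients); illustrative only.
x1, n1 = 152, 522   # patients with at least one adenoma, CADe arm
x2, n2 = 109, 536   # patients with at least one adenoma, control arm

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"ADR {p1:.1%} vs {p2:.1%}, z = {z:.2f}")  # z ≈ 3.31, consistent with p<0.001
```

A z-statistic above 3.29 corresponds to a two-sided p-value below 0.001, matching the reported result.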
Affiliation(s)
- Pu Wang
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Tyler M Berzin
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Jeremy Romek Glissen Brown
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Shishira Bharadwaj
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Aymeric Becq
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, USA
- Xun Xiao
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Peixi Liu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Liangping Li
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Yan Song
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Di Zhang
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Yi Li
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Guangre Xu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Mengtian Tu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
- Xiaogang Liu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, Chengdu, China
224
Akbari M, Mohrekesh M, Nasr-Esfahani E, Soroushmehr SMR, Karimi N, Samavi S, Najarian K. Polyp Segmentation in Colonoscopy Images Using Fully Convolutional Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:69-72. [PMID: 30440343 DOI: 10.1109/embc.2018.8512197] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Colorectal cancer is one of the leading causes of cancer-related death, especially in men. Polyps are one of the main causes of colorectal cancer, and early diagnosis of polyps by colonoscopy can lead to successful treatment. Diagnosis of polyps in colonoscopy videos is a challenging task due to variations in the size and shape of polyps. In this paper, we propose a polyp segmentation method based on a fully convolutional neural network. Two strategies enhance the performance of the method. First, we apply a novel image patch selection method in the training phase of the network. Second, in the test phase, we perform effective post-processing on the probability map produced by the network. Evaluation of the proposed method on the CVC-ColonDB database shows that it achieves more accurate results than previous colonoscopy video segmentation methods.
225
Wang Z, Meng Y, Weng F, Chen Y, Lu F, Liu X, Hou M, Zhang J. An Effective CNN Method for Fully Automated Segmenting Subcutaneous and Visceral Adipose Tissue on CT Scans. Ann Biomed Eng 2019; 48:312-328. [DOI: 10.1007/s10439-019-02349-3] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2019] [Accepted: 08/18/2019] [Indexed: 12/28/2022]
226
Tajbakhsh N, Shin JY, Gotway MB, Liang J. Computer-aided detection and visualization of pulmonary embolism using a novel, compact, and discriminative image representation. Med Image Anal 2019; 58:101541. [PMID: 31416007 DOI: 10.1016/j.media.2019.101541] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 07/31/2019] [Accepted: 08/01/2019] [Indexed: 01/15/2023]
Abstract
Diagnosing pulmonary embolism (PE) and excluding disorders that may clinically and radiologically simulate PE poses a challenging task for both human and machine perception. In this paper, we propose a novel vessel-oriented image representation (VOIR) that can improve the machine perception of PE through a consistent, compact, and discriminative image representation, and can also improve radiologists' diagnostic capabilities for PE assessment by serving as the backbone of an effective PE visualization system. Specifically, our image representation can be used to train more effective convolutional neural networks for distinguishing PE from PE mimics, and also allows radiologists to inspect the vessel lumen from multiple perspectives, so that they can report filling defects (PE), if any, with confidence. Our image representation offers four advantages: (1) Efficiency and compactness-concisely summarizing the 3D contextual information around an embolus in only three image channels, (2) consistency-automatically aligning the embolus in the 3-channel images according to the orientation of the affected vessel, (3) expandability-naturally supporting data augmentation for training CNNs, and (4) multi-view visualization-maximally revealing filling defects. To evaluate the effectiveness of VOIR for PE diagnosis, we use 121 CTPA datasets with a total of 326 emboli. We first compare VOIR with two other compact alternatives using six CNN architectures of varying depths and under varying amounts of labeled training data. Our experiments demonstrate that VOIR enables faster training of a higher-performing model compared to the other compact representations, even in the absence of deep architectures and large labeled training sets. Our experiments comparing VOIR with the 3D image representation further demonstrate that the 2D CNN trained with VOIR achieves a significant performance gain over the 3D CNNs. Our robustness analyses also show that the suggested PE CAD is robust to the choice of CT scanner machines and the physical size of crops used for training. Finally, our PE CAD is ranked second at the PE challenge in the category of 0 mm localization error.
Affiliation(s)
- Nima Tajbakhsh
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA
- Jae Y Shin
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA
- Jianming Liang
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ, USA
227
Unsupervised segmentation of colonic polyps in narrow-band imaging data based on manifold representation of images and Wasserstein distance. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101577] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
228
229
Poorneshwaran JM, Santhosh Kumar S, Ram K, Joseph J, Sivaprakasam M. Polyp Segmentation using Generative Adversarial Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2019:7201-7204. [PMID: 31947496 DOI: 10.1109/embc.2019.8857958] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Colorectal cancer is one of the leading causes of cancer-related death, and the patient's survival rate depends on the stage at which polyps are detected. Polyp segmentation is a challenging research task due to variations in the size and shape of polyps, necessitating robust approaches for diagnosis. This paper studies a deep generative convolutional framework for the task of polyp segmentation, exploring segmentation with the pix2pix conditional generative adversarial network. On the CVC-Clinic dataset, the proposed network achieves a Jaccard index of 81.27% and a Dice index of 88.48%.
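The Jaccard and Dice indices reported here are deterministically related on a single mask pair (Dice = 2J/(1+J)); the paper's figures do not obey the identity exactly because they are averaged over many images. A minimal sketch on toy binary masks:

```python
import numpy as np

def jaccard_and_dice(pred, target):
    """Jaccard (IoU) and Dice indices for two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    j = inter / union
    d = 2 * inter / (pred.sum() + target.sum())
    return j, d

# Two overlapping 6-pixel rectangles on a 4x4 grid (toy example).
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1
target = np.zeros((4, 4), dtype=int); target[1:3, 0:3] = 1
j, d = jaccard_and_dice(pred, target)
# intersection = 4, union = 8 -> J = 0.5, Dice = 8/12 ≈ 0.667
print(j, d)
assert abs(d - 2 * j / (1 + j)) < 1e-12  # per-mask identity Dice = 2J/(1+J)
```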
230
Vinsard DG, Mori Y, Misawa M, Kudo SE, Rastogi A, Bagci U, Rex DK, Wallace MB. Quality assurance of computer-aided detection and diagnosis in colonoscopy. Gastrointest Endosc 2019; 90:55-63. [PMID: 30926431 DOI: 10.1016/j.gie.2019.03.019] [Citation(s) in RCA: 92] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/08/2019] [Accepted: 03/18/2019] [Indexed: 02/05/2023]
Abstract
Recent breakthroughs in artificial intelligence (AI), specifically via its emerging sub-field "deep learning," have direct implications for computer-aided detection and diagnosis (CADe and/or CADx) for colonoscopy. AI is expected to have at least 2 major roles in colonoscopy practice-polyp detection (CADe) and polyp characterization (CADx). CADe has the potential to decrease the polyp miss rate, contributing to improving adenoma detection, whereas CADx can improve the accuracy of colorectal polyp optical diagnosis, leading to reduction of unnecessary polypectomy of non-neoplastic lesions, potential implementation of a resect-and-discard paradigm, and proper application of advanced resection techniques. A growing number of medical-engineering researchers are developing both CADe and CADx systems, some of which allow real-time recognition of polyps or in vivo identification of adenomas, with over 90% accuracy. However, the quality of the developed AI systems, as well as that of the study designs, varies significantly, raising concerns about the generalizability of the proposed AI systems. Initial studies were conducted in an exploratory or retrospective fashion using stored images and likely overestimated the results. These drawbacks potentially hinder smooth implementation of this novel technology into colonoscopy practice. The aim of this article is to review both contributions and limitations in recent machine-learning-based CADe and/or CADx colonoscopy studies and propose some principles that should underlie system development and clinical testing.
Affiliation(s)
- Daniela Guerrero Vinsard
- Showa University International Center for Endoscopy, Showa University Northern Yokohama Hospital, Yokohama, Japan; Division of Internal Medicine, University of Connecticut Health Center, Farmington, Connecticut, USA
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Amit Rastogi
- Division of Gastroenterology, University of Kansas Medical Center, Kansas City, Kansas
- Ulas Bagci
- Center for Research in Computer Vision, University of Central Florida, Orlando, Florida
- Douglas K Rex
- Division of Gastroenterology and Hepatology, Indiana University School of Medicine, Indianapolis, Indiana
- Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, Florida, USA
231
Kudo SE, Mori Y, Misawa M, Takeda K, Kudo T, Itoh H, Oda M, Mori K. Artificial intelligence and colonoscopy: Current status and future perspectives. Dig Endosc 2019; 31:363-371. [PMID: 30624835 DOI: 10.1111/den.13340] [Citation(s) in RCA: 79] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 12/04/2018] [Indexed: 02/06/2023]
Abstract
BACKGROUND AND AIM Application of artificial intelligence in medicine is now attracting substantial attention. In the field of gastrointestinal endoscopy, computer-aided diagnosis (CAD) for colonoscopy is the most investigated area, although it is still in the preclinical phase. Because colonoscopy is carried out by humans, it is inherently an imperfect procedure. CAD assistance is expected to improve its quality regarding automated polyp detection and characterization (i.e. predicting the polyp's pathology). It could help prevent endoscopists from missing polyps as well as provide a precise optical diagnosis for those detected. Ultimately, these functions that CAD provides could produce a higher adenoma detection rate and reduce the cost of polypectomy for hyperplastic polyps. METHODS AND RESULTS Currently, research on automated polyp detection has been limited to experimental assessments using an algorithm based on ex vivo videos or static images. Performance for clinical use was reported to have >90% sensitivity with acceptable specificity. In contrast, research on automated polyp characterization seems to surpass that for polyp detection. Prospective studies of in vivo use of artificial intelligence technologies have been reported by several groups, some of which showed a >90% negative predictive value for differentiating diminutive (≤5 mm) rectosigmoid adenomas, which exceeded the threshold for optical biopsy. CONCLUSION We introduce the potential of using CAD for colonoscopy and describe the most recent conditions for regulatory approval for artificial intelligence-assisted medical devices.
Affiliation(s)
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Kenichi Takeda
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Toyoki Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Kanagawa, Japan
- Hayato Itoh
- Graduate School of Informatics, Nagoya University, Aichi, Japan
- Masahiro Oda
- Graduate School of Informatics, Nagoya University, Aichi, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Aichi, Japan
232
Viscaino M, Cheein FA. Machine learning for computer-aided polyp detection using wavelets and content-based image. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2019:961-965. [PMID: 31946053 DOI: 10.1109/embc.2019.8857831] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The continuous growth of machine learning techniques, improvements in their capabilities, and the availability of continuously collected, recorded, and updated data can make diagnosis faster and more accurate than human diagnosis alone. In lower endoscopy procedures, most of the diagnosis relies on the capabilities and expertise of the physician. During medical training, physicians can benefit from the assistance of algorithms able to automatically detect polyps, thus enhancing their diagnosis. In this paper, we propose a machine learning approach trained to detect polyps in lower endoscopy recordings with high accuracy and sensitivity, with wavelet-transform-based feature extraction as a preprocessing step. The proposed system is validated using available datasets. On a set of 1132 images, our system showed 97.9% accuracy in diagnosing polyps, around 10% higher than previously published approaches based on techniques with low computational requirements. In addition, the false positive rate was 0.03. This encouraging result may also extend to other diagnoses.
233
Region-Based Automated Localization of Colonoscopy and Wireless Capsule Endoscopy Polyps. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9122404] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
The early detection of polyps could help prevent colorectal cancer. The automated detection of polyps on the colon walls could reduce the number of false negatives that occur due to manual examination errors or polyps hidden behind folds, and could also help doctors locate polyps in screening tests such as colonoscopy and wireless capsule endoscopy. Missed polyps may evolve into malignant lesions. In this paper, we propose a modified region-based convolutional neural network (R-CNN) that generates masks around polyps detected in still frames. The locations of the polyps in the image are marked, which assists doctors examining the polyps. Features are extracted from the polyp images using pre-trained ResNet-50 and ResNet-101 models through feature extraction and fine-tuning techniques. Various publicly available polyp datasets are analyzed with various pretrained weights. It is interesting to note that fine-tuning with balloon data (polyp-like natural images) improved the polyp detection rate. The optimal CNN models on the colonoscopy datasets CVC-ColonDB, CVC-PolypHD, and ETIS-Larib produced (F1 score, F2 score) values of (90.73, 91.27), (80.65, 79.11), and (76.43, 78.70), respectively. The best model on the wireless capsule endoscopy dataset gave a performance of (96.67, 96.10). The experimental results indicate better localization of polyps compared to recent traditional and deep learning methods.
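The (F1, F2) pairs reported in this entry are instances of the F-beta score, in which recall is weighted β² times as much as precision, so F2 favors recall. A small sketch with hypothetical precision/recall values (not figures from the paper):

```python
def f_beta(precision, recall, beta):
    """F-beta score: recall is weighted beta^2 times as much as precision."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical precision/recall pair for illustration only.
p, r = 0.89, 0.925
f1 = f_beta(p, r, 1.0)   # harmonic mean of precision and recall
f2 = f_beta(p, r, 2.0)   # leans toward recall
print(round(f1, 4), round(f2, 4))  # F2 > F1 here because recall > precision
```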
234
Rau A, Edwards PJE, Ahmad OF, Riordan P, Janatka M, Lovat LB, Stoyanov D. Implicit domain adaptation with conditional generative adversarial networks for depth prediction in endoscopy. Int J Comput Assist Radiol Surg 2019; 14:1167-1176. [PMID: 30989505 PMCID: PMC6570710 DOI: 10.1007/s11548-019-01962-w] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Accepted: 04/02/2019] [Indexed: 02/08/2023]
Abstract
PURPOSE Colorectal cancer is the third most common cancer worldwide, and early therapeutic treatment of precancerous tissue during colonoscopy is crucial for better prognosis and can be curative. Navigation within the colon and comprehensive inspection of the endoluminal tissue are key to successful colonoscopy but can vary with the skill and experience of the endoscopist. Computer-assisted interventions in colonoscopy can provide better support tools for mapping the colon to ensure complete examination and for automatically detecting abnormal tissue regions. METHODS We train the conditional generative adversarial network pix2pix to transform monocular endoscopic images to depth, which can be a building block in a navigational pipeline or be used to measure the size of polyps during colonoscopy. To overcome the lack of labelled training data in endoscopy, we propose to use simulation environments and to additionally train the generator and discriminator of the model on unlabelled real video frames in order to adapt to real colonoscopy environments. RESULTS We report promising results on synthetic, phantom and real datasets and show that generative models outperform discriminative models when predicting depth from colonoscopy images, in terms of both accuracy and robustness towards changes in domains. CONCLUSIONS Training the discriminator and generator of the model on real images, we show that our model performs implicit domain adaptation, which is a key step towards bridging the gap between synthetic and real data. Importantly, we demonstrate the feasibility of training a single model to predict depth from both synthetic and real images without the need for explicit, unsupervised transformer networks mapping between the domains of synthetic and real data.
Affiliation(s)
- Anita Rau
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK.
- P J Eddie Edwards
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Omer F Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Mirek Janatka
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
235
Qadir HA, Balasingham I, Solhusvik J, Bergsland J, Aabakken L, Shin Y. Improving Automatic Polyp Detection Using CNN by Exploiting Temporal Dependency in Colonoscopy Video. IEEE J Biomed Health Inform 2019; 24:180-193. [PMID: 30946683 DOI: 10.1109/jbhi.2019.2907434] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Automatic polyp detection has been shown to be difficult due to various polyp-like structures in the colon and high interclass variations in polyp size, color, shape, and texture. An efficient method should not only have a high correct detection rate (high sensitivity) but also a low false detection rate (high precision and specificity). State-of-the-art detection methods include convolutional neural networks (CNNs). However, CNNs have been shown to be vulnerable to small perturbations and noise; they sometimes miss the same polyp appearing in neighboring frames and produce a high number of false positives. We aim to tackle this problem and improve the overall performance of CNN-based object detectors for polyp detection in colonoscopy videos. Our method consists of two stages: a region of interest (RoI) proposal by CNN-based object detector networks and a false positive (FP) reduction unit. The FP reduction unit exploits the temporal dependencies among image frames in video by integrating the bidirectional temporal information obtained from RoIs in a set of consecutive frames. This information is used to make the final decision. The experimental results show that the bidirectional temporal information is helpful in estimating polyp positions and accurately identifying FPs. This provides an overall improvement in sensitivity, precision, and specificity compared to the conventional false-positive learning method, and thus achieves state-of-the-art results on the CVC-ClinicVideoDB video dataset.
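The FP-reduction idea described above, discarding detections that are unsupported by neighboring frames in both temporal directions, can be caricatured with a simple score filter. This is an illustrative stand-in under assumed parameters (window size, threshold, support count), not the paper's actual decision unit:

```python
import numpy as np

def temporal_filter(scores, window=2, threshold=0.5, min_support=2):
    """Keep a frame's detection only if enough frames in a bidirectional
    window around it (including itself) also contain a confident detection."""
    scores = np.asarray(scores, dtype=float)
    keep = np.zeros_like(scores, dtype=bool)
    hits = scores >= threshold
    for t in range(len(scores)):
        lo, hi = max(0, t - window), min(len(scores), t + window + 1)
        # support = confident detections in the backward+forward window
        if hits[t] and hits[lo:hi].sum() >= min_support:
            keep[t] = True
    return keep

# An isolated spike at frame 2 (a likely false positive) is suppressed,
# while the sustained run at frames 6-9 (a likely true polyp) survives.
scores = [0.1, 0.2, 0.9, 0.1, 0.0, 0.1, 0.8, 0.9, 0.7, 0.85, 0.1]
print(temporal_filter(scores))
```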
236
Zhang X, Chen F, Yu T, An J, Huang Z, Liu J, Hu W, Wang L, Duan H, Si J. Real-time gastric polyp detection using convolutional neural networks. PLoS One 2019; 14:e0214133. [PMID: 30908513 PMCID: PMC6433439 DOI: 10.1371/journal.pone.0214133] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2018] [Accepted: 03/01/2019] [Indexed: 02/07/2023] Open
Abstract
Computer-aided polyp detection in gastroscopy has been the subject of research over the past few decades. However, despite significant advances, automatic polyp detection in real time is still an unsolved problem. In this paper, we report on a convolutional neural network (CNN) for polyp detection that is built on the Single Shot MultiBox Detector (SSD) architecture and which we call SSD for Gastric Polyps (SSD-GPNet). To take full advantage of the information in the feature pyramid's feature maps and to achieve higher accuracy, we re-use information that is normally abandoned by max-pooling layers: we recover the data lost in pooling and concatenate it as extra feature maps to aid classification and detection. Meanwhile, in the feature pyramid, we concatenate feature maps of the lower layers with feature maps deconvolved from upper layers to make the relationships between layers explicit and to effectively increase the number of channels. The results show that our enhanced SSD for gastric polyp detection achieves real-time polyp detection at 50 frames per second (FPS) and improves the mean average precision (mAP) from 88.5% to 90.4%, with only a small loss in time performance. A further experiment shows that SSD-GPNet performs excellently in improving polyp detection recall by over 10% (p = 0.00053), especially for small polyps. This can help endoscopic physicians find missed polyps more easily and decrease the gastric polyp miss rate, and it may be applicable in daily clinical practice to reduce the burden on physicians.
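The "reuse what max-pooling discards" idea can be sketched in NumPy: alongside the usual 2×2 max-pool channel, build a second channel from the values the pooling discards (here their mean, an assumed aggregation for illustration; the abstract does not specify this detail) and stack it as an extra feature map:

```python
import numpy as np

def pool_with_reuse(fmap):
    """2x2 max-pooling plus a channel built from the values max-pooling
    discards (mean of the three non-max values per window): a simplified
    sketch of reusing information lost by pooling layers."""
    h, w = fmap.shape
    # Gather each 2x2 window's four values along the last axis.
    windows = (fmap.reshape(h // 2, 2, w // 2, 2)
                   .transpose(0, 2, 1, 3)
                   .reshape(h // 2, w // 2, 4))
    pooled = windows.max(axis=-1)
    lost = (windows.sum(axis=-1) - pooled) / 3.0  # mean of discarded values
    return np.stack([pooled, lost])  # two channels: kept + "lost" information

fmap = np.array([[1., 2., 5., 0.],
                 [3., 4., 0., 0.],
                 [0., 1., 2., 2.],
                 [1., 0., 2., 2.]])
out = pool_with_reuse(fmap)
print(out[0])  # max channel: [[4., 5.], [1., 2.]]
print(out[1])  # "lost" channel: mean of the non-max values per window
```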
Affiliation(s)
- Xu Zhang: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Fei Chen: Institute of Gastroenterology, Zhejiang University, Hangzhou, China; Department of Gastroenterology, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Tao Yu: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Jiye An: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Zhengxing Huang: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Jiquan Liu: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Weiling Hu: Institute of Gastroenterology, Zhejiang University, Hangzhou, China; Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou, China
- Liangjing Wang: Institute of Gastroenterology, Zhejiang University, Hangzhou, China; Department of Gastroenterology, Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Huilong Duan: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, China
- Jianmin Si: Institute of Gastroenterology, Zhejiang University, Hangzhou, China; Department of Gastroenterology, Sir Run Run Shaw Hospital, Medical School, Zhejiang University, Hangzhou, China
|
237
|
de Lange T, Halvorsen P, Riegler M. Methodology to develop machine learning algorithms to improve performance in gastrointestinal endoscopy. World J Gastroenterol 2018; 24:5057-5062. [PMID: 30568383 PMCID: PMC6288655 DOI: 10.3748/wjg.v24.i45.5057] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7]
Abstract
Assisted diagnosis using artificial intelligence has been a holy grail in medical research for many years, and recent developments in computer hardware have enabled the narrower area of machine learning to equip clinicians with potentially useful tools for computer-assisted diagnosis (CAD) systems. However, training and assessing a computer's ability to diagnose like a human are complex tasks, and successful outcomes depend on various factors. We have focused our work on gastrointestinal (GI) endoscopy because it is a cornerstone for the diagnosis and treatment of diseases of the GI tract. About 2.8 million luminal GI (esophageal, stomach, colorectal) cancers are detected globally every year, and although substantial technical improvements in endoscopes have been made over the last 10-15 years, a major limitation of endoscopic examinations remains operator variation. This translates into substantial inter-observer variation in the detection and assessment of mucosal lesions, causing, among other things, an average polyp miss rate of 20% in the colon and the subsequent development of a number of post-colonoscopy colorectal cancers. CAD systems might eliminate this variation and lead to more accurate diagnoses. In this editorial, we point out some of the current challenges in the development of efficient computer-based digital assistants. We give examples of proposed tools using various techniques, identify current challenges, and give suggestions for the development and assessment of future CAD systems.
Affiliation(s)
- Thomas de Lange: Department of Transplantation, Oslo University Hospital, Oslo 0424, Norway; Institute of Clinical Medicine, University of Oslo, Oslo 0316, Norway
- Pål Halvorsen: Center for Digital Engineering Simula Metropolitan, Fornebu 1364, Norway; Department for Informatics, University of Oslo, Oslo 0316, Norway
- Michael Riegler: Center for Digital Engineering Simula Metropolitan, Fornebu 1364, Norway; Department for Informatics, University of Oslo, Oslo 0316, Norway
|
238
|
Ahmad OF, Soares AS, Mazomenos E, Brandao P, Vega R, Seward E, Stoyanov D, Chand M, Lovat LB. Artificial intelligence and computer-aided diagnosis in colonoscopy: current evidence and future directions. Lancet Gastroenterol Hepatol 2018; 4:71-80. [PMID: 30527583 DOI: 10.1016/s2468-1253(18)30282-6] [Citation(s) in RCA: 122] [Impact Index Per Article: 17.4]
Abstract
Computer-aided diagnosis offers a promising solution to reduce variation in colonoscopy performance. Pooled miss rates for polyps are as high as 22%, and associated interval colorectal cancers after colonoscopy are of concern. Optical biopsy, whereby in-vivo classification of polyps based on enhanced imaging replaces histopathology, has not been incorporated into routine practice because it is limited by interobserver variability and generally only meets accepted standards in expert settings. Real-time decision-support software has been developed to detect and characterise polyps, and also to offer feedback on the technical quality of inspection. Some of the current algorithms, particularly with recent advances in artificial intelligence techniques, match human expert performance for optical biopsy. In this Review, we summarise the evidence for clinical applications of computer-aided diagnosis and artificial intelligence in colonoscopy.
Affiliation(s)
- Omer F Ahmad: Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
- Antonio S Soares: Division of Surgery & Interventional Science, University College London, London, UK
- Evangelos Mazomenos: Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Patrick Brandao: Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Roser Vega: Gastrointestinal Services, University College London Hospital, London, UK
- Edward Seward: Gastrointestinal Services, University College London Hospital, London, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- Manish Chand: Division of Surgery & Interventional Science, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
- Laurence B Lovat: Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK; Division of Surgery & Interventional Science, University College London, London, UK; Gastrointestinal Services, University College London Hospital, London, UK
|
239
|
Mahmood F, Chen R, Durr NJ. Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training. IEEE Transactions on Medical Imaging 2018; 37:2572-2581. [PMID: 29993538 DOI: 10.1109/tmi.2018.2842767] [Citation(s) in RCA: 107] [Impact Index Per Article: 15.3]
Abstract
To realize the full potential of deep learning for medical imaging, large annotated datasets are required for training. Such datasets are difficult to acquire due to privacy issues, lack of experts available for annotation, underrepresentation of rare conditions, and poor standardization. The lack of annotated data has been addressed in conventional vision applications using synthetic images refined via unsupervised adversarial training to look like real images. However, this approach is difficult to extend to general medical imaging because of the complex and diverse set of features found in real human tissues. We propose a novel framework that uses a reverse flow, where adversarial training is used to make real medical images more like synthetic images, and clinically-relevant features are preserved via self-regularization. These domain-adapted synthetic-like images can then be accurately interpreted by networks trained on large datasets of synthetic medical images. We implement this approach on the notoriously difficult task of depth-estimation from monocular endoscopy which has a variety of applications in colonoscopy, robotic surgery, and invasive endoscopic procedures. We train a depth estimator on a large data set of synthetic images generated using an accurate forward model of an endoscope and an anatomically-realistic colon. Our analysis demonstrates that the structural similarity of endoscopy depth estimation in a real pig colon predicted from a network trained solely on synthetic data improved by 78.7% by using reverse domain adaptation.
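The reverse-flow objective described above (make real images look synthetic while preserving clinical content) can be sketched as a two-term generator loss. This is a hedged illustration only: the function name, the squared-error form of the self-regularization, and the weight `lam` are assumptions, not values from the paper.

```python
import numpy as np

def reverse_adaptation_loss(real_img, adapted_img, disc_score, lam=0.1):
    """Sketch of a training signal for a generator G mapping real
    endoscopy images into the synthetic-like domain.

    disc_score: discriminator's probability that G(real) is synthetic.
    The adversarial term rewards fooling the synthetic-domain
    discriminator; the self-regularization term penalizes drifting from
    the input so clinically relevant structures are preserved.
    """
    adv = -np.log(disc_score + 1e-12)                  # fool the discriminator
    self_reg = np.mean((adapted_img - real_img) ** 2)  # stay close to the input
    return adv + lam * self_reg
```

Once trained, the adapted images can be passed to a depth estimator that was trained purely on synthetic renderings, which is the "reverse" of the usual synthetic-to-real refinement direction.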
|
240
|
Zhang R, Zheng Y, Poon CC, Shen D, Lau JY. Polyp detection during colonoscopy using a regression-based convolutional neural network with a tracker. Pattern Recognition 2018; 83:209-219. [PMID: 31105338 PMCID: PMC6519928 DOI: 10.1016/j.patcog.2018.05.026] [Citation(s) in RCA: 63] [Impact Index Per Article: 9.0]
Abstract
A computer-aided detection (CAD) tool for locating and detecting polyps can help reduce the chance of missing polyps during colonoscopy. Nevertheless, state-of-the-art algorithms are either computationally complex or suffer from low sensitivity, and are therefore unsuitable for use in a real clinical setting. In this paper, a novel regression-based convolutional neural network (CNN) pipeline is presented for polyp detection during colonoscopy. The proposed pipeline was constructed in two parts: 1) to learn the spatial features of colorectal polyps, a fast object detection algorithm named ResYOLO was pre-trained with a large non-medical image database and further fine-tuned with colonoscopic images extracted from videos; and 2) temporal information was incorporated via a tracker named Efficient Convolution Operators (ECO) to refine the detection results given by ResYOLO. Evaluated on 17,574 frames extracted from 18 endoscopic videos of the AsuMayoDB, the proposed method detected frames with polyps with a precision of 88.6%, a recall of 71.6% and a processing speed of 6.5 frames per second, i.e. it can accurately locate polyps in more frames and at a faster speed than existing methods. In conclusion, the proposed method has great potential to assist endoscopists in tracking polyps during colonoscopy.
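The frame-level precision and recall figures quoted above come down to a simple tally over per-frame predictions. The sketch below is illustrative (the function name and 0/1-flag layout are assumptions, not the paper's evaluation code):

```python
def frame_detection_metrics(predicted, truth):
    """Frame-level precision and recall for polyp detection.

    predicted, truth: iterables of 0/1 flags, one per video frame
    (1 = polyp present in that frame).
    """
    tp = sum(1 for p, t in zip(predicted, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(predicted, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(predicted, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0  # fraction of alarms that are real
    recall = tp / (tp + fn) if tp + fn else 0.0     # fraction of polyp frames found
    return precision, recall
```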
Affiliation(s)
- Ruikai Zhang: Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Yali Zheng: Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Carmen C.Y. Poon (corresponding author): Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Dinggang Shen (corresponding author): Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- James Y.W. Lau: Department of Surgery, The Chinese University of Hong Kong, Hong Kong
|
241
|
Alagappan M, Brown JRG, Mori Y, Berzin TM. Artificial intelligence in gastrointestinal endoscopy: The future is almost here. World J Gastrointest Endosc 2018; 10:239-249. [PMID: 30364792 PMCID: PMC6198310 DOI: 10.4253/wjge.v10.i10.239] [Citation(s) in RCA: 101] [Impact Index Per Article: 14.4]
Abstract
Artificial intelligence (AI) enables machines to provide unparalleled value in a myriad of industries and applications. In recent years, researchers have harnessed artificial intelligence to analyze large-volume, unstructured medical data and perform clinical tasks, such as the identification of diabetic retinopathy or the diagnosis of cutaneous malignancies. Applications of artificial intelligence techniques, specifically machine learning and more recently deep learning, are beginning to emerge in gastrointestinal endoscopy. The most promising of these efforts have been in computer-aided detection and computer-aided diagnosis of colorectal polyps, with recent systems demonstrating high sensitivity and accuracy even when compared to expert human endoscopists. AI has also been utilized to identify gastrointestinal bleeding, to detect areas of inflammation, and even to diagnose certain gastrointestinal infections. Future work in the field should concentrate on creating seamless integration of AI systems with current endoscopy platforms and electronic medical records, developing training modules to teach clinicians how to use AI tools, and determining the best means for regulation and approval of new AI technology.
Affiliation(s)
- Muthuraman Alagappan: Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center, Harvard Medical, Boston, MA 02215, United States
- Jeremy R Glissen Brown: Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center, Harvard Medical, Boston, MA 02215, United States
- Yuichi Mori: Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Tyler M Berzin: Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center, Harvard Medical, Boston, MA 02215, United States
|
242
|
Shin Y, Balasingham I. Automatic polyp frame screening using patch based combined feature and dictionary learning. Comput Med Imaging Graph 2018; 69:33-42. [PMID: 30172091 DOI: 10.1016/j.compmedimag.2018.08.001] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3]
Abstract
Polyps in the colon can potentially become malignant cancer tissues, and their early detection and removal lead to a high survival rate. Certain types of polyps can be difficult to detect even for highly trained physicians. Inspired by this problem, our study aims to improve human detection performance by developing an automatic polyp screening framework as a decision support tool. We use a combined feature method based on small image patches. The features include shape and color information and are extracted using histogram of oriented gradients (HOG) and hue histogram methods. Dictionary learning based training is used to learn features, and the final feature vector is formed using sparse coding. For classification, we use patch-level classification based on a linear support vector machine and whole-image thresholding. The proposed framework is evaluated using three public polyp databases. Our experimental results show that the proposed scheme successfully classified polyps and normal images with over 95% classification accuracy, sensitivity, specificity and precision. In addition, we compare the performance of the proposed scheme with conventional feature based methods and the convolutional neural network (CNN) based deep learning approach, which is the state-of-the-art technique in many image classification applications.
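The shape-plus-color patch feature described above can be sketched as a concatenation of a gradient-orientation histogram and a hue histogram. The NumPy fragment below is a simplified illustration under stated assumptions: the bin counts, the single-cell "HOG" (no cell grid or block normalization), and the function names are all choices made here, not the paper's implementation.

```python
import numpy as np

def hue_histogram(hue_patch, bins=8):
    """Normalized hue histogram of a patch (hue values in [0, 1))."""
    hist, _ = np.histogram(hue_patch, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def orientation_histogram(gray_patch, bins=9):
    """Crude HOG-style histogram of gradient orientations over one patch."""
    gy, gx = np.gradient(gray_patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / max(hist.sum(), 1e-12)

def combined_patch_feature(gray_patch, hue_patch):
    # Shape (orientation) and color (hue) cues concatenated into one
    # vector, to be encoded against a learned dictionary downstream.
    return np.concatenate([orientation_histogram(gray_patch),
                           hue_histogram(hue_patch)])
```

In the paper's pipeline, vectors like this would then be sparse-coded against a learned dictionary before the linear SVM sees them.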
Affiliation(s)
- Younghak Shin: Department of Electronic Systems, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Ilangko Balasingham: Intervention Centre, Oslo University Hospital, Oslo NO-0027, Norway; Institute of Clinical Medicine, University of Oslo, and the Norwegian University of Science and Technology (NTNU), Norway
|
243
|
Zheng Y, Zhang R, Yu R, Jiang Y, Mak TWC, Wong SH, Lau JYW, Poon CCY. Localisation of Colorectal Polyps by Convolutional Neural Network Features Learnt from White Light and Narrow Band Endoscopic Images of Multiple Databases. Annu Int Conf IEEE Eng Med Biol Soc (EMBC) 2018; 2018:4142-4145. [PMID: 30441267 DOI: 10.1109/embc.2018.8513337] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0]
|
244
|
Akbari M, Mohrekesh M, Rafiei S, Reza Soroushmehr SM, Karimi N, Samavi S, Najarian K. Classification of Informative Frames in Colonoscopy Videos Using Convolutional Neural Networks with Binarized Weights. Annu Int Conf IEEE Eng Med Biol Soc (EMBC) 2018; 2018:65-68. [PMID: 30440342 DOI: 10.1109/embc.2018.8512226] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0]
Abstract
Colorectal cancer is one of the most common cancers in the United States. Polyps are one of the major causes of colonic cancer, and early detection of polyps increases the chance of successful treatment. In this paper, we propose a novel classification of informative frames based on a convolutional neural network with binarized weights. The proposed CNN is trained with colonoscopy frames along with the labels of the frames as input data. We use binarized weights and kernels to reduce the size of the CNN and make it suitable for implementation in medical hardware. We evaluate our proposed method using the ASU-Mayo Clinic database, which contains colonoscopy videos of different patients. Our proposed method reaches a Dice score of 71.20% and an accuracy of more than 90% on this dataset.
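Weight binarization of the kind mentioned above is commonly done by replacing each real-valued kernel with a signed matrix times one scale factor. The sketch below shows that standard scheme (alpha = mean absolute weight, as in binary-weight networks generally); the abstract does not give the authors' exact scheme, so treat this as an assumed illustration.

```python
import numpy as np

def binarize_weights(w):
    """Approximate a real-valued kernel as alpha * sign(w).

    alpha = mean(|w|) minimizes the L2 gap between w and alpha*sign(w),
    so the kernel stores only +/-1 entries plus one scale, which shrinks
    the model for hardware deployment.
    """
    alpha = np.mean(np.abs(w))
    b = np.where(w >= 0, 1.0, -1.0)
    return alpha, b

def binary_conv_response(patch, w):
    # Correlation of one patch with the binarized kernel: alpha scales a
    # sum of sign-flipped additions, so no multiplications are needed.
    alpha, b = binarize_weights(w)
    return alpha * np.sum(patch * b)
```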
|
245
|
Yuan Y, Li D, Meng MQH. Automatic Polyp Detection via a Novel Unified Bottom-Up and Top-Down Saliency Approach. IEEE J Biomed Health Inform 2018; 22:1250-1260. [DOI: 10.1109/jbhi.2017.2734329] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0]
|
246
|
Zhao R, Zhang R, Tang T, Feng X, Li J, Liu Y, Zhu R, Wang G, Li K, Zhou W, Yang Y, Wang Y, Ba Y, Zhang J, Liu Y, Zhou F. TriZ-a rotation-tolerant image feature and its application in endoscope-based disease diagnosis. Comput Biol Med 2018; 99:182-190. [PMID: 29936284 DOI: 10.1016/j.compbiomed.2018.06.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1]
Abstract
Endoscopy is becoming one of the most widely used technologies to screen for gastric diseases, and it relies heavily on the experience of the clinical endoscopist. Location, shape, and size are the typical patterns endoscopists use to make diagnostic decisions, and contrasting texture patterns also suggest potential lesions. This study designed a novel rotation-tolerant image feature, TriZ, and demonstrated its effectiveness both for rotation invariance and for the detection of three gastric lesion types, i.e., gastric polyp, gastric ulcer, and gastritis. TriZ achieved 87.0% accuracy in the four-class classification problem of the three gastric lesion types plus healthy controls, averaged over twenty random runs of 10-fold cross-validation. Because biomedical imaging technologies may capture lesion sites from different angles, the symmetric image feature extraction algorithm TriZ may facilitate biomedical image based disease diagnosis modeling. Compared with the 378,434 features of the HOG algorithm, TriZ achieved better accuracy using only 126 image features.
Affiliation(s)
- Ruixue Zhao: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Ruochi Zhang: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Tongyu Tang: First Hospital, Jilin University, Changchun, Jilin, 130012, China
- Xin Feng: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Jialiang Li: College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yue Liu: College of Communication Engineering, Jilin University, Changchun, Jilin, 130012, China
- Renxiang Zhu: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Guangze Wang: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Kangning Li: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Wenyang Zhou: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Yunfei Yang: College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yuzhao Wang: College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yuanjie Ba: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Jiaojiao Zhang: College of Software, Jilin University, Changchun, Jilin, 130012, China
- Yang Liu: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
- Fengfeng Zhou: College of Computer Science and Technology, Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, Jilin, 130012, China
|
247
|
Deep learning and conditional random fields-based depth estimation and topographical reconstruction from conventional endoscopy. Med Image Anal 2018; 48:230-243. [PMID: 29990688 DOI: 10.1016/j.media.2018.06.005] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0]
Abstract
Colorectal cancer is the fourth leading cause of cancer deaths worldwide and the second leading cause in the United States. The risk of colorectal cancer can be mitigated by the identification and removal of premalignant lesions through optical colonoscopy. Unfortunately, conventional colonoscopy misses more than 20% of the polyps that should be removed, due in part to poor contrast of lesion topography. Imaging depth and tissue topography during colonoscopy is difficult because of the size constraints of the endoscope and the deforming mucosa. Most existing methods make unrealistic assumptions which limit accuracy and sensitivity. In this paper, we present a method that avoids these restrictions, using a joint deep convolutional neural network-conditional random field (CNN-CRF) framework for monocular endoscopy depth estimation. The estimated depth is used to reconstruct the topography of the surface of the colon from a single image. We train the unary and pairwise potential functions of a CRF in a CNN on synthetic data, generated by developing an endoscope camera model and rendering over 200,000 images of an anatomically-realistic colon. We validate our approach with real endoscopy images from a porcine colon, transferred to a synthetic-like domain via adversarial training, with ground truth from registered computed tomography measurements. The CNN-CRF approach estimates depths with a relative error of 0.152 for synthetic endoscopy images and 0.242 for real endoscopy images. We show that the estimated depth maps can be used to reconstruct the topography of the mucosa from conventional colonoscopy images. This approach can easily be integrated into existing endoscopy systems and provides a foundation for improving computer-aided detection algorithms for the detection, segmentation and classification of lesions.
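The quoted relative errors (0.152 synthetic, 0.242 real) are dimensionless ratios between predicted and ground-truth depth. One common reading of such a metric is sketched below; the exact formula used in the paper is not given in the abstract, so the mean absolute relative error here is an assumption.

```python
import numpy as np

def relative_depth_error(pred, gt, eps=1e-12):
    """Mean absolute relative error between predicted and ground-truth
    depth maps: mean(|d_pred - d_gt| / d_gt), with eps guarding against
    division by zero-depth pixels.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.mean(np.abs(pred - gt) / (gt + eps)))
```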
|
248
|
A novel summary report of colonoscopy: timeline visualization providing meaningful colonoscopy video information. Int J Colorectal Dis 2018. [PMID: 29520455 DOI: 10.1007/s00384-018-2980-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0]
Abstract
PURPOSE The colonoscopy adenoma detection rate depends largely on physician experience and skill, and overlooked colorectal adenomas could develop into cancer. This study assessed a system that detects polyps and summarizes meaningful information from colonoscopy videos. METHODS One hundred thirteen consecutive patients had colonoscopy videos prospectively recorded at the Seoul National University Hospital. Informative video frames were extracted using a MATLAB support vector machine (SVM) model and classified as bleeding, polypectomy, tool, residue, thin wrinkle, folded wrinkle, or common. Thin wrinkle, folded wrinkle, and common frames were reanalyzed using the SVM for polyp detection. The SVM model was applied hierarchically for effective classification and optimization of the SVM. RESULTS The mean classification accuracy according to frame type was over 93%; sensitivity was over 87%. The mean sensitivity for polyp detection was 82.1%, and the positive predictive value (PPV) was 39.3%. Polyps detected by the system were larger (6.3 ± 6.4 vs. 4.9 ± 2.5 mm; P = 0.003) and more often pedunculated (Yamada type III, 10.2 vs. 0%; P < 0.001; Yamada type IV, 2.8 vs. 0%; P < 0.001) than polyps missed by the system. There were no statistically significant differences in polyp distribution or histology between the groups. Informative frames and suspected polyps were presented on a timeline. This summary was evaluated using the system usability scale questionnaire; 89.3% of participants expressed positive opinions. CONCLUSIONS We developed and verified a system to extract meaningful information from colonoscopy videos. Although further improvement and validation of the system are needed, the proposed system is useful for physicians and patients.
|
249
|
van der Sommen F, Curvers WL, Nagengast WB. Novel Developments in Endoscopic Mucosal Imaging. Gastroenterology 2018; 154:1876-1886. [PMID: 29462601 DOI: 10.1053/j.gastro.2018.01.070] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9]
Abstract
Endoscopic techniques such as high-definition imaging and optical chromoendoscopy have had an enormous impact on endoscopy practice. Since these techniques allow assessment of even subtle morphological mucosal abnormalities, further improvements in endoscopic practice lie in increasing the detection efficacy of endoscopists. Several new developments could assist in this. First, web-based training tools could improve the skills of endoscopists in detecting and classifying lesions. Second, the incorporation of computer-aided detection will be the next step in raising the endoscopic quality of the captured data. These systems will aid the endoscopist in interpreting the increasing amount of visual information in endoscopic images by providing a real-time, objective second reading. In addition, developments in the field of molecular imaging open opportunities to add functional imaging data, visualizing biological parameters of the gastrointestinal tract, to white-light morphological imaging. For the successful implementation of the above-mentioned techniques, a truly multi-disciplinary approach is of vital importance.
Affiliation(s)
- Fons van der Sommen: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Wouter L Curvers: Department of Gastroenterology and Hepatology, Catharina Hospital, Eindhoven, The Netherlands
- Wouter B Nagengast: Department of Gastroenterology and Hepatology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
|
250
|
Brandao P, Zisimopoulos O, Mazomenos E, Ciuti G, Bernal J, Visentini-Scarzanella M, Menciassi A, Dario P, Koulaouzidis A, Arezzo A, Hawkes DJ, Stoyanov D. Towards a Computed-Aided Diagnosis System in Colonoscopy: Automatic Polyp Segmentation Using Convolution Neural Networks. J Med Robot Res 2018. [DOI: 10.1142/s2424905x18400020] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4]
Abstract
Early diagnosis is essential for the successful treatment of bowel cancers including colorectal cancer (CRC), and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep learning rooted detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolution architectures, such as VGG and ResNets, by converting them into fully convolutional networks (FCNs), fine-tune them and study their capabilities for polyp segmentation and detection. We additionally use shape-from-shading (SfS) to recover depth and provide a richer representation of the tissue's structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information, and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets, and the most accurate segmentation model achieved a mean segmentation intersection over union (IU) of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, the top performing models we propose surpass the current state-of-the-art with detection recalls superior to 90% for all datasets tested. To our knowledge, we present the first work to use FCNs for polyp segmentation, in addition to proposing a novel combination of SfS and RGB that boosts performance.
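The intersection over union (IU) scores reported above are a standard mask-overlap ratio; a minimal NumPy sketch of that metric follows (function name is ours; the convention of scoring two empty masks as 1.0 is an assumption, not stated in the paper).

```python
import numpy as np

def segmentation_iou(pred_mask, gt_mask):
    """Intersection over union between binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)
```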
Affiliation(s)
- Patrick Brandao: Centre for Medical Image Computing, University College London, London, UK
- Gastone Ciuti: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Jorge Bernal: Department of Computer Science, Universitat Autònoma de Barcelona, Barcelona, Spain
- Paolo Dario: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Alberto Arezzo: Department of Surgical Sciences, University of Turin, Turin, Italy
- David J Hawkes: Centre for Medical Image Computing, University College London, London, UK
- Danail Stoyanov: Centre for Medical Image Computing, University College London, London, UK
|