1. Ren X, Zhou W, Yuan N, Li F, Ruan Y, Zhou H. Prompt-based polyp segmentation during endoscopy. Med Image Anal 2025; 102:103510. [PMID: 40073580 DOI: 10.1016/j.media.2025.103510]
Abstract
Accurate judgment and identification of polyp size is crucial in endoscopic diagnosis. However, the indistinct boundaries of polyps lead to missegmentation and missed cancer diagnoses. In this paper, a prompt-based polyp segmentation method (PPSM) is proposed to assist in early-stage cancer diagnosis during endoscopy. It combines endoscopists' experience with artificial intelligence technology. Firstly, a prompt-based polyp segmentation network (PPSN) is presented, which contains the prompt encoding module (PEM), the feature extraction encoding module (FEEM), and the mask decoding module (MDM). The PEM encodes prompts to guide the FEEM in feature extraction and the MDM in mask generation, so that the PPSN can segment polyps efficiently. Secondly, endoscopists' ocular attention data (gazes) are used as prompts, which enhance the PPSN's segmentation accuracy and can be acquired effectively in real-world settings. To reinforce the PPSN's stability, non-uniform dot-matrix prompts are generated to compensate for frame loss during eye tracking. Moreover, a data augmentation method based on the segment anything model (SAM) is introduced to enrich the prompt dataset and improve the PPSN's adaptability. Experiments demonstrate the PPSM's accuracy and real-time capability, and results from cross-training and cross-testing on four datasets show its generalization. Based on these results, a disposable electronic endoscope with a real-time auxiliary diagnosis function for early cancer, together with an image processor, has been developed. Part of the code and the method for generating the prompt dataset are available at https://github.com/XinZhenRen/PPSM.
Affiliation(s)
- Xinzhen Ren
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
- Wenju Zhou
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
- Naitong Yuan
- Shanghai Key Laboratory of Power Station Automation Technology, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
- Fang Li
- Department of Obstetrics and Gynecology, Shanghai East Hospital, School of Medicine, Tongji University, Shanghai 200120, China
- Yetian Ruan
- Department of Obstetrics and Gynecology, Shanghai East Hospital, School of Medicine, Tongji University, Shanghai 200120, China
- Huiyu Zhou
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
2. Ran J, Zhou M, Wen H. Artificial intelligence in inflammatory bowel disease. Saudi J Gastroenterol 2025:00936815-990000000-00126. [PMID: 40275746 DOI: 10.4103/sjg.sjg_46_25]
Abstract
Inflammatory bowel disease (IBD) is a complex condition influenced by various intestinal factors. Advances in next-generation sequencing, high-throughput omics, and molecular network technologies have significantly accelerated research in this field. The emergence of artificial intelligence (AI) has further enhanced the efficient utilization and interpretation of datasets, enabling the discovery of clinically actionable insights. AI is now extensively applied in gastroenterology, where it aids in endoscopic analyses, including the diagnosis of colorectal cancer, precancerous polyps, gastrointestinal inflammatory lesions, and bleeding. Additionally, AI supports clinicians in patient stratification, predicting disease progression and treatment responses, and adjusting treatment plans in a timely manner. This approach not only reduces healthcare costs but also improves patient health and safety. This review outlines the principles of AI, the current research landscape, and future directions for its applications in IBD, with the goal of advancing targeted treatment strategies.
Affiliation(s)
- Jiaxuan Ran
- Department of Gastroenterology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, China
3. Hassan C, Bisschops R, Sharma P, Mori Y. Colon Cancer Screening, Surveillance, and Treatment: Novel Artificial Intelligence Driving Strategies in the Management of Colon Lesions. Gastroenterology 2025:S0016-5085(25)00478-0. [PMID: 40054749 DOI: 10.1053/j.gastro.2025.02.021]
Abstract
Colonoscopy, a crucial procedure for detecting and removing colorectal polyps, has seen transformative advancements through the integration of artificial intelligence, specifically in computer-aided detection (CADe) and diagnosis (CADx). These tools enhance real-time detection and characterization of lesions, potentially reducing human error and standardizing the quality of colonoscopy across endoscopists. CADe has proven effective in increasing the adenoma detection rate, potentially reducing long-term colorectal cancer incidence. However, CADe's benefits are accompanied by challenges, such as potentially longer procedure times, increased non-neoplastic polyp resections, and a higher surveillance burden. CADx, although promising in differentiating neoplastic from non-neoplastic diminutive polyps, encounters limitations in accuracy, particularly in the proximal colon. Real-world data have also revealed gaps between trial efficacy and practical outcomes, emphasizing the need for further research in uncontrolled settings. Moreover, CADx's limited specificity and binary output underscore the necessity for explainable artificial intelligence to gain endoscopists' trust. This review explores the benefits, harms, and limitations of artificial intelligence for colon cancer screening, surveillance, and treatment, focusing on CADe and CADx systems for lesion detection and characterization, respectively, while addressing challenges in integrating these technologies into clinical practice.
Affiliation(s)
- Cesare Hassan
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy; Department of Gastroenterology, Istituto di Ricovero e Cura a Carattere Scientifico, Humanitas Research Hospital, Rozzano, Italy
- Raf Bisschops
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, Leuven, Belgium; Translational Research Center in Gastrointestinal Disorders, Katholieke Universiteit Leuven, Leuven, Belgium
- Prateek Sharma
- Department of Gastroenterology and Hepatology, Kansas City Veterans Affairs Medical Center, Kansas City, Missouri
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan; Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
4. Elamin S, Johri S, Rajpurkar P, Geisler E, Berzin TM. From data to artificial intelligence: evaluating the readiness of gastrointestinal endoscopy datasets. J Can Assoc Gastroenterol 2025; 8:S81-S86. [PMID: 39990508 PMCID: PMC11842897 DOI: 10.1093/jcag/gwae041]
Abstract
The incorporation of artificial intelligence (AI) into gastrointestinal (GI) endoscopy represents a promising advancement in gastroenterology. With over 40 published randomized controlled trials and numerous ongoing clinical trials, gastroenterology leads other medical disciplines in AI research. Computer-aided detection algorithms for identifying colorectal polyps have achieved regulatory approval and are in routine clinical use, while other AI applications for GI endoscopy are in advanced development stages. Near-term opportunities include the potential for computer-aided diagnosis to replace conventional histopathology for diagnosing small colon polyps and increased AI automation in capsule endoscopy. Despite significant development in research settings, the generalizability and robustness of AI models in real clinical practice remain inconsistent. The GI field lags behind other medical disciplines in the breadth of novel AI algorithms, with only 13 out of 882 Food and Drug Administration (FDA)-approved AI models focussed on GI endoscopy as of June 2024. Additionally, existing GI endoscopy image databases are disproportionately focussed on colon polyps, lacking representation of the diversity of other endoscopic findings. High-quality datasets, encompassing a wide range of patient demographics, endoscopic equipment types, and disease states, are crucial for developing effective AI models for GI endoscopy. This article reviews the current state of GI endoscopy datasets, barriers to progress, including dataset size, data diversity, annotation quality, and ethical issues in data collection and usage, and future needs for advancing AI in GI endoscopy.
Affiliation(s)
- Sami Elamin
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, USA
- Shreya Johri
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, USA
- Pranav Rajpurkar
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA 02115, USA
- Enrik Geisler
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02115, USA
- Tyler M Berzin
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02115, USA
5. Nguyen-Tat TB, Vo HA, Dang PS. QMaxViT-Unet+: A query-based MaxViT-Unet with edge enhancement for scribble-supervised segmentation of medical images. Comput Biol Med 2025; 187:109762. [PMID: 39919665 DOI: 10.1016/j.compbiomed.2025.109762]
Abstract
The deployment of advanced deep learning models for medical image segmentation is often constrained by the requirement for extensively annotated datasets. Weakly-supervised learning, which allows less precise labels, has become a promising solution to this challenge. Building on this approach, we propose QMaxViT-Unet+, a novel framework for scribble-supervised medical image segmentation. This framework is built on the U-Net architecture, with the encoder and decoder replaced by Multi-Axis Vision Transformer (MaxViT) blocks. These blocks enhance the model's ability to learn local and global features efficiently. Additionally, our approach integrates a query-based Transformer decoder to refine features and an edge enhancement module to compensate for the limited boundary information in the scribble label. We evaluate the proposed QMaxViT-Unet+ on four public datasets focused on cardiac structures, colorectal polyps, and breast cancer: ACDC, MS-CMRSeg, SUN-SEG, and BUSI. Evaluation metrics include the Dice similarity coefficient (DSC) and the 95th percentile of Hausdorff distance (HD95). Experimental results show that QMaxViT-Unet+ achieves 89.1% DSC and 1.316 mm HD95 on ACDC, 88.4% DSC and 2.226 mm HD95 on MS-CMRSeg, 71.4% DSC and 4.996 mm HD95 on SUN-SEG, and 69.4% DSC and 50.122 mm HD95 on BUSI. These results demonstrate that our method outperforms existing approaches in terms of accuracy, robustness, and efficiency while remaining competitive with fully-supervised learning approaches. This makes it ideal for medical image analysis, where high-quality annotations are often scarce and require significant effort and expense. The code is available at https://github.com/anpc849/QMaxViT-Unet.
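The two evaluation metrics reported above can be computed directly from binary masks and boundary point sets. A minimal NumPy sketch of the Dice similarity coefficient and the 95th-percentile Hausdorff distance (the brute-force pairwise distances and function names are illustrative, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hd95(pred_pts, gt_pts):
    """95th-percentile Hausdorff distance between two point sets (N x 2 arrays)."""
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    # Directed distances: each point to its nearest neighbour in the other set.
    d_pg = d.min(axis=1)   # pred -> gt
    d_gp = d.min(axis=0)   # gt -> pred
    return max(np.percentile(d_pg, 95), np.percentile(d_gp, 95))
```

Unlike the maximum Hausdorff distance, the 95th-percentile variant discards the farthest 5% of boundary mismatches, which makes it less sensitive to single outlier pixels.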
Affiliation(s)
- Thien B Nguyen-Tat
- University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam
- Hoang-An Vo
- University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam
- Phuoc-Sang Dang
- University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam
6. Fu J, Chen K, Dou Q, Gao Y, He Y, Zhou P, Lin S, Wang Y, Guo Y. IPNet: An Interpretable Network With Progressive Loss for Whole-Stage Colorectal Disease Diagnosis. IEEE Trans Med Imaging 2025; 44:789-800. [PMID: 39298304 DOI: 10.1109/tmi.2024.3459910]
Abstract
Colorectal cancer plays a dominant role in cancer-related deaths, primarily due to the absence of obvious early-stage symptoms. Whole-stage colorectal disease diagnosis is crucial for assessing lesion evolution and determining treatment plans. However, locality differences and disease progression lead to intra-class disparities and inter-class similarities in colorectal lesion representation. In addition, interpretable algorithms explaining lesion progression are still lacking, making the prediction process a "black box". In this paper, we propose IPNet, a dual-branch interpretable network with progressive loss for whole-stage colorectal disease diagnosis. The dual-branch architecture captures unbiased features representing diverse localities to suppress intra-class variation. The progressive loss function considers inter-class relationships, using prior knowledge of disease evolution to guide classification. Furthermore, a novel Grain-CAM is designed to interpret IPNet by visualizing pixel-wise attention maps from shallow to deep layers, highlighting regions semantically related to IPNet's progressive classification. We conducted whole-stage diagnosis on two image modalities, i.e., colorectal lesion classification on 129,893 endoscopic optical images and rectal tumor T-staging on 11,072 endoscopic ultrasound images. IPNet surpasses other state-of-the-art algorithms, achieving accuracies of 93.15% and 89.62%, respectively. In particular, it establishes effective decision boundaries for challenging distinctions such as polyp vs. adenoma and T2 vs. T3. The results demonstrate an explainable approach to whole-stage colorectal lesion classification, and rectal tumor T-staging by endoscopic ultrasound is explored for the first time. IPNet is expected to be further applied, assisting physicians in whole-stage disease diagnosis and enhancing diagnostic interpretability.
7. Wang KN, Wang H, Zhou GQ, Wang Y, Yang L, Chen Y, Li S. TSdetector: Temporal-Spatial self-correction collaborative learning for colonoscopy video detection. Med Image Anal 2025; 100:103384. [PMID: 39579624 DOI: 10.1016/j.media.2024.103384]
Abstract
CNN-based object detection models that strike a balance between performance and speed have gradually been adopted for polyp detection tasks. Nevertheless, accurately locating polyps within complex colonoscopy video scenes remains challenging because existing methods ignore two key issues: intra-sequence distribution heterogeneity and the precision-confidence discrepancy. To address these challenges, we propose a novel Temporal-Spatial self-correction detector (TSdetector), which integrates temporal-level consistency learning and spatial-level reliability learning to detect objects continuously. Technically, we first propose a global temporal-aware convolution that assembles preceding information to dynamically guide the current convolution kernel to focus on global features across sequences. In addition, we design a hierarchical queue integration mechanism that combines multi-temporal features through progressive accumulation, fully leveraging contextual consistency information while retaining long-sequence-dependency features. Meanwhile, at the spatial level, we advance a position-aware clustering that explores the spatial relationships among candidate boxes to recalibrate prediction confidence adaptively, thus eliminating redundant bounding boxes efficiently. Experimental results on three publicly available polyp video datasets show that TSdetector achieves the highest polyp detection rate and outperforms other state-of-the-art methods. The code is available at https://github.com/soleilssss/TSdetector.
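The position-aware clustering is described only at a high level here. As a generic illustration of the underlying idea of grouping heavily overlapping candidate boxes and recalibrating confidence from the group, rather than TSdetector's actual algorithm, a minimal sketch (the greedy grouping, the mean-score rule, and all names are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def cluster_and_recalibrate(boxes, scores, iou_thr=0.5):
    """Greedily cluster overlapping candidate boxes; keep one box per cluster
    with its confidence recalibrated to the cluster's mean score."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    kept, used = [], set()
    for i in order:
        if i in used:
            continue
        cluster = [j for j in order
                   if j not in used and iou(boxes[i], boxes[j]) >= iou_thr]
        used.update(cluster)
        kept.append((boxes[i], sum(scores[j] for j in cluster) / len(cluster)))
    return kept
```

Plain non-maximum suppression would simply discard the lower-scoring duplicates; recalibrating from the whole cluster instead lets redundant detections inform the surviving box's confidence.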
Affiliation(s)
- Kai-Ni Wang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Haolin Wang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Guang-Quan Zhou
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Ling Yang
- Institute of Medical Technology, Peking University Health Science Center, China
- Yang Chen
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration, Southeast University, Nanjing, China
- Shuo Li
- Department of Computer and Data Science and Department of Biomedical Engineering, Case Western Reserve University, USA
8. Rusinovich Y, Rusinovich V, Buhayenka A, Liashko V, Sabanov A, Holstein DJF, Aldmour S, Doss M, Branzan D. Classification of anatomic patterns of peripheral artery disease with automated machine learning (AutoML). Vascular 2025; 33:26-33. [PMID: 38404043 DOI: 10.1177/17085381241236571]
Abstract
AIM The aim of this study was to investigate the potential of novel automated machine learning (AutoML) in vascular medicine by developing a discriminative artificial intelligence (AI) model for the classification of anatomical patterns of peripheral artery disease (PAD). MATERIAL AND METHODS Random open-source angiograms of lower limbs were collected using a web-indexed search. An experienced researcher in vascular medicine labelled the angiograms according to the most applicable grade of femoropopliteal disease in the Global Limb Anatomic Staging System (GLASS). An AutoML model was trained using the Vertex AI (Google Cloud) platform to classify the angiograms according to the GLASS grade with a multi-label algorithm. Following deployment, we conducted a test using 25 random angiograms (five from each GLASS grade). After the initial evaluation, the model was tuned by incremental training with new angiograms, up to the limit of the allocated quota, to determine the effect on performance. RESULTS We collected 323 angiograms to create the AutoML model. Among these, 80 angiograms were labelled as grade 0 of femoropopliteal disease in GLASS, 114 as grade 1, 34 as grade 2, 25 as grade 3, and 70 as grade 4. After 4.5 h of training, the AI model was deployed. The model's self-reported average precision was 0.77 (on a scale from 0 to 1). During the testing phase, the AI model determined the GLASS grade in 100% of the cases. Agreement with the researcher was almost perfect, with 22 observed agreements (88%), kappa = 0.85 (95% CI 0.69-1.0). The best results were achieved in predicting GLASS grades 0 and 4 (initial precision: 0.76 and 0.84). However, the model performed worse in classifying GLASS grade 3 (initial precision: 0.2) than the other grades. Disagreements between the AI and the researcher were associated with the low resolution of the test images. Incremental training expanded the initial dataset by 23% to a total of 417 images, which improved the model's average precision by 11% to 0.86. CONCLUSION After a brief training period with a limited dataset, AutoML has demonstrated its potential in identifying and classifying the anatomical patterns of PAD, operating unhindered by factors that can affect human analysts, such as fatigue or lack of experience. This technology bears the potential to revolutionize outcome prediction and standardize evidence-based revascularization strategies for patients with PAD, leveraging its adaptability and ability to continuously improve with additional data. Further research in AutoML within vascular medicine is both promising and warranted, but it requires additional financial support to realize its full potential.
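The agreement statistic reported above is Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch of the standard formula (the example labels in the test are hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is agreement expected by chance from the marginals."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)
```

With 25 cases and 22 agreements, p_o = 0.88; the kappa value then depends on the two raters' marginal grade frequencies, which is why the reported kappa (0.85) is lower than the raw agreement.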
Affiliation(s)
- Yury Rusinovich
- Department of Vascular Surgery, University Hospital Leipzig, Leipzig, Germany
- Volha Rusinovich
- Institute of Hygiene and Environmental Medicine, University Hospital Leipzig, Germany
- Vitalii Liashko
- Department of Vascular Surgery, Charité University Hospital, Berlin, Germany
- Arsen Sabanov
- Department of Vascular Surgery, University Hospital Leipzig, Leipzig, Germany
- David J F Holstein
- Department of Vascular Surgery, University Hospital Leipzig, Leipzig, Germany
- Samer Aldmour
- Department of Vascular Surgery, University Hospital Leipzig, Leipzig, Germany
- Markus Doss
- Department of Vascular Surgery, University Hospital Leipzig, Leipzig, Germany
- Daniela Branzan
- Department of Vascular Surgery, University Hospital Leipzig, Leipzig, Germany
9. Parikh M, Tejaswi S, Girotra T, Chopra S, Ramai D, Tabibian JH, Jagannath S, Ofosu A, Barakat MT, Mishra R, Girotra M. Use of Artificial Intelligence in Lower Gastrointestinal and Small Bowel Disorders: An Update Beyond Polyp Detection. J Clin Gastroenterol 2025; 59:121-128. [PMID: 39774596 DOI: 10.1097/mcg.0000000000002115]
Abstract
Machine learning and its specialized forms, such as Artificial Neural Networks and Convolutional Neural Networks, are increasingly being used for detecting and managing gastrointestinal conditions. Recent advancements involve using Artificial Neural Network models to enhance predictive accuracy for severe lower gastrointestinal (LGI) bleeding outcomes, including the need for surgery. To this end, artificial intelligence (AI)-guided predictive models have shown promise in improving management outcomes. While much literature focuses on AI in early neoplasia detection, this review highlights AI's role in managing LGI and small bowel disorders, including risk stratification for LGI bleeding, quality control, evaluation of inflammatory bowel disease, and video capsule endoscopy reading. Overall, the integration of AI into routine clinical practice is still developing, with ongoing research aimed at addressing current limitations and gaps in patient care.
Affiliation(s)
- Sooraj Tejaswi
- University of California, Davis
- Sutter Health, Sacramento
10. Maeda Y, Kudo SE, Kuroki T, Iacucci M. Automated Endoscopic Diagnosis in IBD: The Emerging Role of Artificial Intelligence. Gastrointest Endosc Clin N Am 2025; 35:213-233. [PMID: 39510689 DOI: 10.1016/j.giec.2024.04.012]
Abstract
The emerging role of artificial intelligence (AI) in automated endoscopic diagnosis represents a significant advancement in managing inflammatory bowel disease (IBD). AI technologies are increasingly being applied to endoscopic imaging to enhance diagnosis, to predict the severity and progression of IBD, and to support dysplasia surveillance in colitis. These AI-assisted endoscopy tools aim to improve diagnostic accuracy, reduce variability in the interpretation of endoscopic images, and assist clinicians in decision-making. By leveraging AI, healthcare providers can potentially offer more personalized and effective treatments, ultimately improving patient outcomes in IBD care.
Affiliation(s)
- Yasuharu Maeda
- Digestive Disease Center, Showa University Northern Yokohama Hospital, 35-1 Chigasaki-chuo, Tsuzuki, Yokohama 224-8503, Japan; APC Microbiome Ireland, College of Medicine and Health, University College Cork, Cork T12 YT20, Ireland
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, 35-1 Chigasaki-chuo, Tsuzuki, Yokohama 224-8503, Japan
- Takanori Kuroki
- Digestive Disease Center, Showa University Northern Yokohama Hospital, 35-1 Chigasaki-chuo, Tsuzuki, Yokohama 224-8503, Japan
- Marietta Iacucci
- APC Microbiome Ireland, College of Medicine and Health, University College Cork, Cork T12 YT20, Ireland
11. Cai C, Shi Q, Li J, Jiao Y, Xu A, Zhou Y, Wang X, Peng C, Zhang X, Cui X, Chen J, Xu J, Sun Q. Pathologist-level diagnosis of ulcerative colitis inflammatory activity level using an automated histological grading method. Int J Med Inform 2024; 192:105648. [PMID: 39396418 DOI: 10.1016/j.ijmedinf.2024.105648]
Abstract
BACKGROUND AND AIMS Inflammatory bowel disease (IBD) is a global disease with increasing incidence. However, there are few works on computationally assisted diagnosis of IBD based on pathological images. Therefore, based on the UK and Chinese IBD diagnostic guidelines, our study established an artificial intelligence-assisted diagnostic system for histologic grading of inflammatory activity in ulcerative colitis (UC). METHODS We proposed an efficient deep-learning (DL) method for grading inflammatory activity in whole-slide images (WSIs) of UC pathology. Our model was constructed using 603 UC WSIs from Nanjing Drum Tower Hospital for the training set and internal test set. We collected 212 UC WSIs from Zhujiang Hospital as an external test set. Initially, the ResNet50 model pre-trained on the ImageNet dataset was employed to extract image patch features from UC patients. Subsequently, a multi-instance learning (MIL) approach with embedded self-attention was utilized to aggregate tissue image patch features, representing the entire WSI. Finally, the model was trained on the aggregated features and WSI annotations provided by senior gastrointestinal pathologists to predict the level of inflammatory activity in UC WSIs. RESULTS In the task of distinguishing the presence or absence of inflammatory activity, the area under the curve (AUC) in the internal test set was 0.863 (95% confidence interval [CI] 0.829, 0.898), with a sensitivity of 0.913 (95% [CI] 0.866, 0.961) and a specificity of 0.816 (95% [CI] 0.771, 0.861). The AUC in the external test set was 0.947 (95% [CI] 0.939, 0.955), with a sensitivity of 0.889 (95% [CI] 0.837, 0.940) and a specificity of 0.858 (95% [CI] 0.777, 0.939). For distinguishing different levels of inflammatory activity in UC, the average macro-AUCs in the internal and external test sets were 0.827 (95% [CI] 0.803, 0.850) and 0.908 (95% [CI] 0.882, 0.935), and the average micro-AUCs were 0.816 (95% [CI] 0.792, 0.840) and 0.898 (95% [CI] 0.869, 0.926). CONCLUSIONS Comparative analysis with diagnoses made by pathologists at different expertise levels revealed that the algorithm reached a proficiency comparable to a pathologist with 5 years of experience. Furthermore, our algorithm performed better than other MIL algorithms.
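The MIL aggregation step described above, scoring each patch feature with an attention network and forming a weighted sum as the slide-level representation, can be sketched as follows (weight shapes and the random stand-in parameters are illustrative; the paper's exact self-attention module is not reproduced here):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(patch_feats, W, w):
    """Score each patch with a small attention network, then return the
    attention-weighted sum as the slide-level (WSI) representation."""
    scores = np.tanh(patch_feats @ W) @ w   # one raw attention score per patch
    alpha = softmax(scores)                 # normalised weights over patches
    return alpha @ patch_feats, alpha

# Random stand-ins for trained parameters and extracted patch features.
rng = np.random.default_rng(0)
d, h, n = 16, 8, 50                         # feature dim, hidden dim, n patches
feats = rng.normal(size=(n, d))
W, w = rng.normal(size=(d, h)), rng.normal(size=h)
slide_feat, alpha = attention_mil_pool(feats, W, w)
```

Because the attention weights are learned, the pooled feature can be dominated by the few patches carrying diagnostic signal, which is the key advantage over simple mean or max pooling for WSI-level labels.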
Affiliation(s)
- Chengfei Cai
- School of Automation, Nanjing University of Information Science and Technology, Nanjing 21004, Jiangsu Province, China; Jiangsu Key Laboratory of Intelligent Medical Image Computing, School of Future Technology, Nanjing University of Information Science and Technology, Nanjing 21004, Jiangsu Province, China; College of Information Engineering, Taizhou University, Taizhou 225300, Jiangsu Province, China
- Qianyun Shi
- Department of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, Jiangsu Province, China
- Jun Li
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, School of Future Technology, Nanjing University of Information Science and Technology, Nanjing 21004, Jiangsu Province, China
- Yiping Jiao
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, School of Future Technology, Nanjing University of Information Science and Technology, Nanjing 21004, Jiangsu Province, China
- Andi Xu
- Department of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, Jiangsu Province, China
- Yangshu Zhou
- Department of Pathology, Zhujiang Hospital of Southern Medical University, Guangzhou 510280, Guangdong Province, China
- Xiangxue Wang
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, School of Future Technology, Nanjing University of Information Science and Technology, Nanjing 21004, Jiangsu Province, China
- Chunyan Peng
- Department of Gastroenterology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, Jiangsu Province, China
- Xiaoqi Zhang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, Jiangsu Province, China
- Xiaobin Cui
- Department of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, Jiangsu Province, China
- Jun Chen
- Department of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, Jiangsu Province, China
- Jun Xu
- Jiangsu Key Laboratory of Intelligent Medical Image Computing, School of Future Technology, Nanjing University of Information Science and Technology, Nanjing 21004, Jiangsu Province, China
- Qi Sun
- Department of Pathology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, Jiangsu Province, China; Center for Digestive Medicine, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing 210008, Jiangsu Province, China
Collapse
|
12
Xu Z, Rittscher J, Ali S. SSL-CPCD: Self-Supervised Learning With Composite Pretext-Class Discrimination for Improved Generalisability in Endoscopic Image Analysis. IEEE Trans Med Imaging 2024; 43:4105-4119. [PMID: 38857149] [DOI: 10.1109/tmi.2024.3411933]
Abstract
Data-driven methods have shown tremendous progress in medical image analysis. In this context, deep learning-based supervised methods are widely popular. However, they require a large amount of training data and face issues in generalisability to unseen datasets that hinder clinical translation. Endoscopic imaging data is characterised by large inter- and intra-patient variability, which makes it more challenging for these models to learn representative features for downstream tasks. Thus, despite the publicly available datasets and the datasets that can be generated within hospitals, most supervised models still underperform. While self-supervised learning has addressed this problem to some extent in natural scene data, there is a considerable performance gap in the medical image domain. In this paper, we propose to explore patch-level instance-group discrimination and penalisation of inter-class variation using an additive angular margin within the cosine similarity metric. Our novel approach enables models to learn to cluster similar representations, thereby improving their ability to provide better separation between different classes. Our results demonstrate significant improvement on all metrics over the state-of-the-art (SOTA) methods on test sets from the same and diverse datasets. We evaluated our approach for classification, detection, and segmentation. SSL-CPCD attains a notable top-1 accuracy of 79.77% in ulcerative colitis classification, an 88.62% mean average precision (mAP) for detection, and an 82.32% dice similarity coefficient for polyp segmentation tasks. These represent improvements of over 4%, 2%, and 3%, respectively, compared to the baseline architectures. We demonstrate that our method generalises better than all SOTA methods to unseen datasets, reporting over 7% improvement.
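The additive angular margin within the cosine similarity that this abstract describes can be sketched as follows. This is an illustrative ArcFace-style computation, not the authors' SSL-CPCD code; the shapes, margin, and scale values are assumptions.

```python
import numpy as np

def margin_logits(embeddings, class_weights, targets, margin=0.5, scale=30.0):
    """Cosine-similarity logits with an additive angular margin on the target class."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cos = np.clip(e @ w.T, -1.0, 1.0)        # cosine similarity to each class
    theta = np.arccos(cos)                   # angles in [0, pi]
    onehot = np.eye(w.shape[0])[targets]     # 1 at each sample's target class
    # penalise the target class by adding the margin to its angle before rescaling
    return scale * np.cos(theta + margin * onehot)

# A sample perfectly aligned with class 0: the margin shrinks its target logit,
# so training must pull same-class representations even tighter together.
emb = np.array([[1.0, 0.0]])
weights = np.eye(2)
logits = margin_logits(emb, weights, targets=np.array([0]))
```

The margin only ever reduces the target-class logit, which is what forces better inter-class separation under a softmax cross-entropy loss.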
13
Khan DZ, Valetopoulou A, Das A, Hanrahan JG, Williams SC, Bano S, Borg A, Dorward NL, Barbarisi S, Culshaw L, Kerr K, Luengo I, Stoyanov D, Marcus HJ. Artificial intelligence assisted operative anatomy recognition in endoscopic pituitary surgery. NPJ Digit Med 2024; 7:314. [PMID: 39521895] [PMCID: PMC11550325] [DOI: 10.1038/s41746-024-01273-8]
Abstract
Pituitary tumours are surrounded by critical neurovascular structures and identification of these intra-operatively can be challenging. We have previously developed an AI model capable of sellar anatomy segmentation. This study aims to apply this model, and explore the impact of AI-assistance on clinician anatomy recognition. Participants were tasked with labelling the sella on six images, initially without assistance, then augmented by AI. Mean DICE scores and the proportion of annotations encompassing the centroid of the sella were calculated. Six medical students, six junior trainees, six intermediate trainees and six experts were recruited. There was an overall improvement in sella recognition from a DICE score of 70.7% without AI assistance to 77.5% with AI assistance (+6.7; p < 0.001). Medical students used and benefitted from AI assistance the most, improving from a DICE score of 66.2% to 78.9% (+12.8; p = 0.02). This technology has the potential to augment surgical education and eventually be used as an intra-operative decision support tool.
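The DICE overlap measure used throughout this study can be computed as below; a minimal sketch assuming binary annotation masks of equal shape.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks overlap perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# A toy annotation covering half of a ground-truth region
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
score = dice_score(pred, truth)
```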
Affiliation(s)
- Danyal Z Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Hawkes Centre, Department of Computer Science, University College London, London, UK
- Alexandra Valetopoulou
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Hawkes Centre, Department of Computer Science, University College London, London, UK
- Adrito Das
- Hawkes Centre, Department of Computer Science, University College London, London, UK
- John G Hanrahan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Hawkes Centre, Department of Computer Science, University College London, London, UK
- Simon C Williams
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Hawkes Centre, Department of Computer Science, University College London, London, UK
- Sophia Bano
- Hawkes Centre, Department of Computer Science, University College London, London, UK
- Anouk Borg
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Neil L Dorward
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Karen Kerr
- Digital Surgery Ltd, Medtronic, London, UK
- Danail Stoyanov
- Hawkes Centre, Department of Computer Science, University College London, London, UK
- Digital Surgery Ltd, Medtronic, London, UK
- Hani J Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Hawkes Centre, Department of Computer Science, University College London, London, UK
14
Derks MEW, te Groen M, van Lierop LMA, Murthy S, Rubin DT, Bessissow T, Nagtegaal ID, Bemelman WA, Derikx LAAP, Hoentjen F. Management of Colorectal Neoplasia in IBD Patients: Current Practice and Future Perspectives. J Crohns Colitis 2024; 18:1726-1735. [PMID: 38741227] [PMCID: PMC11479698] [DOI: 10.1093/ecco-jcc/jjae071]
Abstract
Inflammatory bowel disease [IBD] patients are at increased risk of developing colorectal neoplasia [CRN]. In this review, we aim to provide an up-to-date overview and future perspectives on CRN management in IBD. Advances in endoscopic surveillance and resection techniques have resulted in a shift towards endoscopic management of neoplastic lesions in place of surgery. Endoscopic treatment is recommended for all CRN if complete resection is feasible. Standard [cold snare] polypectomy, endoscopic mucosal resection and endoscopic submucosal dissection should be performed depending on lesion complexity [size, delineation, morphology, surface architecture, submucosal fibrosis/invasion] to maximise the likelihood of complete resection. If complete resection is not feasible, surgical treatment options should be discussed by a multidisciplinary team. Whereas [sub]total and proctocolectomy play an important role in management of endoscopically unresectable CRN, partial colectomy may be considered in a subgroup of patients in endoscopic remission with limited disease extent without other CRN risk factors. High synchronous and metachronous CRN rates warrant careful mucosal visualisation with shortened intervals for at least 5 years after treatment of CRN.
Affiliation(s)
- Monica E W Derks
- Inflammatory Bowel Disease Center, Department of Gastroenterology, Radboud University Medical Center, Nijmegen, The Netherlands
- Maarten te Groen
- Inflammatory Bowel Disease Center, Department of Gastroenterology, Radboud University Medical Center, Nijmegen, The Netherlands
- Lisa M A van Lierop
- Inflammatory Bowel Disease Center, Department of Gastroenterology, Radboud University Medical Center, Nijmegen, The Netherlands
- Division of Gastroenterology, Department of Medicine, University of Alberta, Edmonton, AB, Canada
- Sanjay Murthy
- Ottawa Hospital IBD Center and Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- David T Rubin
- University of Chicago Medicine Inflammatory Bowel Disease Center, University of Chicago, Chicago, IL, USA
- Talat Bessissow
- Division of Gastroenterology, Department of Medicine, McGill University Health Center, Montreal, QC, Canada
- Iris D Nagtegaal
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Willem A Bemelman
- Department of Surgery, Amsterdam University Medical Center, AMC, Amsterdam, The Netherlands
- Frank Hoentjen
- Inflammatory Bowel Disease Center, Department of Gastroenterology, Radboud University Medical Center, Nijmegen, The Netherlands
- Division of Gastroenterology, Department of Medicine, University of Alberta, Edmonton, AB, Canada
15
Bhattacharya D, Reuter K, Behrendt F, Maack L, Grube S, Schlaefer A. PolypNextLSTM: a lightweight and fast polyp video segmentation network using ConvNext and ConvLSTM. Int J Comput Assist Radiol Surg 2024; 19:2111-2119. [PMID: 39115609] [PMCID: PMC11442634] [DOI: 10.1007/s11548-024-03244-6]
Abstract
PURPOSE Commonly employed in polyp segmentation, single-image UNet architectures lack the temporal insight clinicians gain from video data in diagnosing polyps. To mirror clinical practice more faithfully, our proposed solution, PolypNextLSTM, leverages video-based deep learning, harnessing temporal information for superior segmentation performance with the least parameter overhead, making it possibly suitable for edge devices. METHODS PolypNextLSTM employs a UNet-like structure with ConvNext-Tiny as its backbone, strategically omitting the last two layers to reduce parameter overhead. Our temporal fusion module, a Convolutional Long Short Term Memory (ConvLSTM), effectively exploits temporal features. Our primary novelty lies in PolypNextLSTM, which stands out as the leanest in parameters and the fastest model, surpassing the performance of five state-of-the-art image- and video-based deep learning models. Evaluation on the SUN-SEG dataset spans easy-to-detect and hard-to-detect polyp scenarios, along with videos containing challenging artefacts like fast motion and occlusion. RESULTS Comparison against 5 image-based and 5 video-based models demonstrates PolypNextLSTM's superiority, achieving a Dice score of 0.7898 on the hard-to-detect polyp test set, surpassing image-based PraNet (0.7519) and video-based PNS+ (0.7486). Notably, our model excels in videos featuring complex artefacts such as ghosting and occlusion. CONCLUSION PolypNextLSTM, integrating pruned ConvNext-Tiny with ConvLSTM for temporal fusion, not only exhibits superior segmentation performance but also maintains the highest frames per second among evaluated models. Code can be found here: https://github.com/mtec-tuhh/PolypNextLSTM.
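The ConvLSTM temporal-fusion idea can be illustrated with a single recurrent step. This toy uses 1x1 "convolutions" (per-pixel channel mixing) and made-up shapes, so it sketches only the mechanism, not PolypNextLSTM's actual module or kernel sizes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def convlstm_step(x, h, c, w_x, w_h, b):
    """One ConvLSTM update with 1x1 kernels.

    x: (C_in, H, W) current-frame features; h, c: (C_h, H, W) recurrent state.
    w_x: (4*C_h, C_in), w_h: (4*C_h, C_h), b: (4*C_h,) stacked gate parameters.
    """
    gates = (np.einsum('gc,chw->ghw', w_x, x)
             + np.einsum('gc,chw->ghw', w_h, h)
             + b[:, None, None])
    ch = h.shape[0]
    i, f, o, g = gates[:ch], gates[ch:2*ch], gates[2*ch:3*ch], gates[3*ch:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # blend memory with new input
    h_new = sigmoid(o) * np.tanh(c_new)                # gated hidden state output
    return h_new, c_new

# Fuse two consecutive frames' feature maps through the recurrent state
rng = np.random.default_rng(0)
C_in, C_h, H, W = 3, 2, 4, 4
w_x = rng.normal(size=(4 * C_h, C_in))
w_h = rng.normal(size=(4 * C_h, C_h))
b = np.zeros(4 * C_h)
h = c = np.zeros((C_h, H, W))
for frame in rng.normal(size=(2, C_in, H, W)):
    h, c = convlstm_step(frame, h, c, w_x, w_h, b)
```

Because the gates act at every spatial location, the module carries polyp evidence across frames while preserving the feature-map layout the decoder expects.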
Affiliation(s)
- Debayan Bhattacharya
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
- Konrad Reuter
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
- Finn Behrendt
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
- Lennart Maack
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
- Sarah Grube
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
- Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
16
Tudela Y, Majó M, de la Fuente N, Galdran A, Krenzer A, Puppe F, Yamlahi A, Tran TN, Matuszewski BJ, Fitzgerald K, Bian C, Pan J, Liu S, Fernández-Esparrach G, Histace A, Bernal J. A complete benchmark for polyp detection, segmentation and classification in colonoscopy images. Front Oncol 2024; 14:1417862. [PMID: 39381041] [PMCID: PMC11458519] [DOI: 10.3389/fonc.2024.1417862]
Abstract
Introduction Colorectal cancer (CRC) is one of the main causes of death worldwide. Early detection and diagnosis of its precursor lesion, the polyp, is key to reducing its mortality and to improving procedure efficiency. During the last two decades, several computational methods have been proposed to assist clinicians in detection, segmentation and classification tasks, but the lack of a common public validation framework makes it difficult to determine which of them is ready to be deployed in the exploration room. Methods This study presents a complete validation framework, and we compare several methodologies for each of the polyp characterization tasks. Results Results show that the majority of the approaches are able to provide good performance for the detection and segmentation tasks, but that there is room for improvement regarding polyp classification. Discussion While the studied methods show promising results in assisting polyp detection and segmentation tasks, further research should be done on the classification task to obtain results reliable enough to assist clinicians during the procedure. The presented framework provides a standardized method for evaluating and comparing different approaches, which could facilitate the identification of clinically prepared assisting methods.
Affiliation(s)
- Yael Tudela
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Barcelona, Spain
- Mireia Majó
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Barcelona, Spain
- Neil de la Fuente
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Barcelona, Spain
- Adrian Galdran
- Department of Information and Communication Technologies, SymBioSys Research Group, BCNMedTech, Barcelona, Spain
- Adrian Krenzer
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians University of Würzburg, Würzburg, Germany
- Frank Puppe
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians University of Würzburg, Würzburg, Germany
- Amine Yamlahi
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Thuy Nuong Tran
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Bogdan J. Matuszewski
- Computer Vision and Machine Learning (CVML) Research Group, University of Central Lancashire (UCLan), Preston, United Kingdom
- Kerr Fitzgerald
- Computer Vision and Machine Learning (CVML) Research Group, University of Central Lancashire (UCLan), Preston, United Kingdom
- Cheng Bian
- Hebei University of Technology, Baoding, China
- Shijie Liu
- Hebei University of Technology, Baoding, China
- Aymeric Histace
- ETIS UMR 8051, École Nationale Supérieure de l'Électronique et de ses Applications (ENSEA), Centre national de la recherche scientifique (CNRS), CY Paris Cergy University, Cergy, France
- Jorge Bernal
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Barcelona, Spain
17
Jiang Y, Zhang Z, Hu Y, Li G, Wan X, Wu S, Cui S, Huang S, Li Z. ECC-PolypDet: Enhanced CenterNet With Contrastive Learning for Automatic Polyp Detection. IEEE J Biomed Health Inform 2024; 28:4785-4796. [PMID: 37983159] [DOI: 10.1109/jbhi.2023.3334240]
Abstract
Accurate polyp detection is critical for early colorectal cancer diagnosis. Although remarkable progress has been achieved in recent years, the complex colon environment and concealed polyps with unclear boundaries still pose severe challenges in this area. Existing methods either involve computationally expensive context aggregation or lack prior modeling of polyps, resulting in poor performance in challenging cases. In this paper, we propose the Enhanced CenterNet with Contrastive Learning (ECC-PolypDet), a two-stage training & end-to-end inference framework that leverages images and bounding box annotations to train a general model and fine-tune it based on the inference score to obtain a final robust model. Specifically, we conduct Box-assisted Contrastive Learning (BCL) during training to minimize the intra-class difference and maximize the inter-class difference between foreground polyps and backgrounds, enabling our model to capture concealed polyps. Moreover, to enhance the recognition of small polyps, we design the Semantic Flow-guided Feature Pyramid Network (SFFPN) to aggregate multi-scale features and the Heatmap Propagation (HP) module to boost the model's attention on polyp targets. In the fine-tuning stage, we introduce the IoU-guided Sample Re-weighting (ISR) mechanism to prioritize hard samples by adaptively adjusting the loss weight for each sample during fine-tuning. Extensive experiments on six large-scale colonoscopy datasets demonstrate the superiority of our model compared with previous state-of-the-art detectors.
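The IoU-guided sample re-weighting idea from this abstract can be sketched as follows. The weighting function (up-weighting low-IoU, i.e. hard, predictions, normalised to mean one) is an illustrative assumption, not the paper's exact ISR formulation.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def isr_weights(ious, gamma=2.0):
    """Loss weights that emphasise hard (low-IoU) samples, normalised to mean 1."""
    w = (1.0 - np.asarray(ious)) ** gamma
    return w * len(w) / w.sum()

pair_iou = iou((0, 0, 2, 2), (1, 1, 3, 3))   # two partially overlapping boxes
weights = isr_weights([0.9, 0.5, 0.1])       # hardest sample gets the largest weight
```

Multiplying per-sample losses by such weights during fine-tuning makes the optimiser spend more capacity on poorly localised (e.g. concealed or small) polyps.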
18
Marchese Aizenman G, Salvagnini P, Cherubini A. Assessing clinical efficacy of polyp detection models using open-access datasets. Front Oncol 2024; 14:1422942. [PMID: 39148908] [PMCID: PMC11324571] [DOI: 10.3389/fonc.2024.1422942]
Abstract
Background Ensuring accurate polyp detection during colonoscopy is essential for preventing colorectal cancer (CRC). Recent advances in deep learning-based computer-aided detection (CADe) systems have shown promise in enhancing endoscopists' performances. Effective CADe systems must achieve high polyp detection rates from the initial seconds of polyp appearance while maintaining low false positive (FP) detection rates throughout the procedure. Method We integrated four open-access datasets into a unified platform containing over 340,000 images from various centers, including 380 annotated polyps, with distinct data splits for comprehensive model development and benchmarking. The REAL-Colon dataset, comprising 60 full-procedure colonoscopy videos from six centers, is used as the fifth dataset of the platform to simulate clinical conditions for model evaluation on unseen center data. Performance assessment includes traditional object detection metrics and new metrics that better meet clinical needs. Specifically, by defining detection events as sequences of consecutive detections, we compute per-polyp recall at early detection stages and average per-patient FPs, enabling the generation of Free-Response Receiver Operating Characteristic (FROC) curves. Results Using YOLOv7, we trained and tested several models across the proposed data splits, showcasing the robustness of our open-access platform for CADe system development and benchmarking. The introduction of new metrics allows for the optimization of CADe operational parameters based on clinically relevant criteria, such as per-patient FPs and early polyp detection. Our findings also reveal that omitting full-procedure videos leads to non-realistic assessments and that detecting small polyp bounding boxes poses the greatest challenge. Conclusion This study demonstrates how newly available open-access data supports ongoing research progress in environments that closely mimic clinical settings. The introduced metrics and FROC curves illustrate CADe clinical efficacy and can aid in tuning CADe hyperparameters.
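The event-based counting that underlies these metrics can be sketched as follows: per-frame detections are grouped into events of consecutive hits, from which per-polyp early recall and per-patient FPs can then be tallied. The minimum run length used here is an illustrative assumption.

```python
def detection_events(frame_hits, min_len=2):
    """Group consecutive per-frame detections into events.

    frame_hits: list of booleans, one per video frame.
    Returns (start, end) frame-index pairs for runs of at least min_len hits.
    """
    events, start = [], None
    for i, hit in enumerate(frame_hits):
        if hit and start is None:
            start = i                          # a run of detections begins
        elif not hit and start is not None:
            if i - start >= min_len:           # keep only sustained runs
                events.append((start, i - 1))
            start = None
    if start is not None and len(frame_hits) - start >= min_len:
        events.append((start, len(frame_hits) - 1))
    return events

# Two sustained detections; the isolated single-frame hit is filtered out
events = detection_events(
    [False, True, True, False, False, False, True, False, True, True]
)
```

Events overlapping an annotated polyp contribute to per-polyp recall (and their start frame gives the detection latency), while the remaining events are the per-patient false positives plotted on the FROC curve.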
Affiliation(s)
- Andrea Cherubini
- Cosmo Intelligent Medical Devices, Dublin, Ireland
- Milan Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
- Carlo Biffi
- Cosmo Intelligent Medical Devices, Dublin, Ireland
19
Ruano J, Gómez M, Romero E, Manzanera A. Leveraging a realistic synthetic database to learn Shape-from-Shading for estimating the colon depth in colonoscopy images. Comput Med Imaging Graph 2024; 115:102390. [PMID: 38714018] [DOI: 10.1016/j.compmedimag.2024.102390]
Abstract
Colonoscopy is the procedure of choice to diagnose, screen for, and treat colon and rectum cancer, from early detection of small precancerous lesions (polyps) to confirmation of malignant masses. However, the high variability of the organ appearance and the complex shape of both the colon wall and structures of interest make this exploration difficult. Learned visuospatial and perceptual abilities mitigate technical limitations in clinical practice by proper estimation of the intestinal depth. This work introduces a novel methodology to estimate colon depth maps in single frames from monocular colonoscopy videos. The generated depth map is inferred from the shading variation of the colon wall with respect to the light source, as learned from a realistic synthetic database. Briefly, a classic convolutional neural network architecture is trained from scratch to estimate the depth map, improving sharp depth estimations in haustral folds and polyps by a custom loss function that minimizes the estimation error in edges and curvatures. The network was trained on a custom synthetic colonoscopy database constructed and released herein, composed of 248,400 frames (47 videos) with depth annotations at the pixel level. This collection comprises 5 subsets of videos with progressively higher levels of visual complexity. Evaluation of the depth estimation with the synthetic database reached a threshold accuracy of 95.65% and a mean RMSE of 0.451 cm, while a qualitative assessment with a real database showed consistent depth estimations, visually evaluated by the expert gastroenterologist coauthoring this paper. Finally, the method achieved competitive performance with respect to another state-of-the-art method using a public synthetic database and comparable results in a set of images with five other state-of-the-art methods. Additionally, three-dimensional reconstructions demonstrated useful approximations of the gastrointestinal tract geometry. Code for reproducing the reported results and the dataset are available at https://github.com/Cimalab-unal/ColonDepthEstimation.
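The two reported evaluation measures can be computed as in this sketch, assuming depth maps in centimetres; the δ = 1.25 ratio threshold is a common convention in depth-estimation work and is assumed here, not taken from the paper.

```python
import numpy as np

def depth_metrics(pred, truth, delta=1.25):
    """RMSE and threshold accuracy between predicted and ground-truth depth maps."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    rmse = np.sqrt(np.mean((pred - truth) ** 2))
    ratio = np.maximum(pred / truth, truth / pred)  # symmetric relative error
    acc = np.mean(ratio < delta)                    # fraction of "close" pixels
    return rmse, acc

# Toy 2x2 depth maps (cm): one pixel is badly over-estimated
pred = np.array([[1.0, 2.0], [3.0, 8.0]])
truth = np.array([[1.0, 2.0], [3.0, 4.0]])
rmse, acc = depth_metrics(pred, truth)
```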
Affiliation(s)
- Josué Ruano
- Computer Imaging and Medical Applications Laboratory (CIM@LAB), Universidad Nacional de Colombia, 111321, Bogotá, Colombia
- Martín Gómez
- Unidad de Gastroenterología, Hospital Universitario Nacional, 111321, Bogotá, Colombia
- Eduardo Romero
- Computer Imaging and Medical Applications Laboratory (CIM@LAB), Universidad Nacional de Colombia, 111321, Bogotá, Colombia
- Antoine Manzanera
- Unité d'Informatique et d'Ingénierie des Systèmes (U2IS), ENSTA Paris, Institut Polytechnique de Paris, Palaiseau, 91762, Ile de France, France
20
Shimada Y, Ojima T, Takaoka Y, Sugano A, Someya Y, Hirabayashi K, Homma T, Kitamura N, Akemoto Y, Tanabe K, Sato F, Yoshimura N, Tsuchiya T. Prediction of visceral pleural invasion of clinical stage I lung adenocarcinoma using thoracoscopic images and deep learning. Surg Today 2024; 54:540-550. [PMID: 37864054] [DOI: 10.1007/s00595-023-02756-z]
Abstract
PURPOSE To develop deep learning models using thoracoscopic images to identify visceral pleural invasion (VPI) in patients with clinical stage I lung adenocarcinoma, and to verify whether these models can be applied clinically. METHODS Two deep learning models, one based on a convolutional neural network (CNN) and the other based on a vision transformer (ViT), were trained on 463 images (VPI negative: 269 images, VPI positive: 194 images) captured from surgical videos of 81 patients. Model performance was validated on an independent test dataset containing 46 images (VPI negative: 28 images, VPI positive: 18 images) from 46 test patients. RESULTS The areas under the receiver operating characteristic curves of the CNN-based and ViT-based models were 0.77 and 0.84 (p = 0.304), respectively. The accuracy, sensitivity, specificity, and positive and negative predictive values were 73.91, 83.33, 67.86, 62.50, and 86.36% for the CNN-based model and 78.26, 77.78, 78.57, 70.00, and 84.62% for the ViT-based model, respectively. These models' diagnostic abilities were comparable to those of board-certified thoracic surgeons and tended to be superior to those of non-board-certified thoracic surgeons. CONCLUSION The deep learning model systems can be utilized in clinical applications via data expansion.
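All five reported diagnostic measures follow from the test-set confusion matrix. The counts below are reconstructed from the ViT model's reported rates on 18 VPI-positive and 28 VPI-negative images and are shown for illustration only.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic-accuracy measures from a 2x2 confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fn + tn + fp),
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
    }

# ViT-based model: 14/18 VPI-positive and 22/28 VPI-negative images correct
m = diagnostic_metrics(tp=14, fn=4, tn=22, fp=6)
```

These counts reproduce the abstract's ViT figures (accuracy 78.26%, sensitivity 77.78%, specificity 78.57%, PPV 70.00%, NPV 84.62%).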
Affiliation(s)
- Yoshifumi Shimada
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Toshihiro Ojima
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Yutaka Takaoka
- Data Science Center for Medicine and Hospital Management, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Center for Data Science and Artificial Intelligence Research Promotion, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Aki Sugano
- Data Science Center for Medicine and Hospital Management, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Center for Clinical Research, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Yoshiaki Someya
- Center for Data Science and Artificial Intelligence Research Promotion, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Kenichi Hirabayashi
- Department of Diagnostic Pathology, University of Toyama, 2630 Sugitani, Toyama, Japan
- Takahiro Homma
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Naoya Kitamura
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Yushi Akemoto
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Keitaro Tanabe
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Fumitaka Sato
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Naoki Yoshimura
- Department of Cardiovascular Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Tomoshi Tsuchiya
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
21
Ruano J, Bravo D, Giraldo D, Gómez M, González FA, Manzanera A, Romero E. Estimating Polyp Size From a Single Colonoscopy Image Using a Shape-From-Shading Model. 2024 IEEE International Symposium on Biomedical Imaging (ISBI) 2024:1-5. [DOI: 10.1109/isbi56570.2024.10635358]
Affiliation(s)
- Josué Ruano
- Computer Imaging and Medical Applications Laboratory (CIM@LAB)
- Diego Bravo
- Computer Imaging and Medical Applications Laboratory (CIM@LAB)
- Diana Giraldo
- Computer Imaging and Medical Applications Laboratory (CIM@LAB)
- Martín Gómez
- Hospital Universitario Nacional de Colombia, Unidad de Gastroenterología, Bogotá, Colombia
- Antoine Manzanera
- Unité d'Informatique et d'Ingénierie des Systèmes, ENSTA-Institut Polytechnique de Paris, France
- Eduardo Romero
- Computer Imaging and Medical Applications Laboratory (CIM@LAB)
22
Biffi C, Antonelli G, Bernhofer S, Hassan C, Hirata D, Iwatate M, Maieron A, Salvagnini P, Cherubini A. REAL-Colon: A dataset for developing real-world AI applications in colonoscopy. Sci Data 2024; 11:539. [PMID: 38796533] [PMCID: PMC11127922] [DOI: 10.1038/s41597-024-03359-0]
Abstract
Detection and diagnosis of colon polyps are key to preventing colorectal cancer. Recent evidence suggests that AI-based computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems can enhance endoscopists' performance and boost colonoscopy effectiveness. However, most available public datasets primarily consist of still images or video clips, often at a down-sampled resolution, and do not accurately represent real-world colonoscopy procedures. We introduce the REAL-Colon (Real-world multi-center Endoscopy Annotated video Library) dataset: a compilation of 2.7 M native video frames from sixty full-resolution, real-world colonoscopy recordings across multiple centers. The dataset contains 350k bounding-box annotations, each created under the supervision of expert gastroenterologists. Comprehensive patient clinical data, colonoscopy acquisition information, and polyp histopathological information are also included in each video. With its unprecedented size, quality, and heterogeneity, the REAL-Colon dataset is a unique resource for researchers and developers aiming to advance AI research in colonoscopy. Its openness and transparency facilitate rigorous and reproducible research, fostering the development and benchmarking of more accurate and reliable colonoscopy-related algorithms and models.
Affiliation(s)
- Carlo Biffi
  - Cosmo Intelligent Medical Devices, Dublin, Ireland
- Giulio Antonelli
  - Gastroenterology and Digestive Endoscopy Unit, Ospedale dei Castelli (N.O.C.), Rome, Italy
- Sebastian Bernhofer
  - Karl Landsteiner University of Health Sciences, Krems, Austria
  - Department of Internal Medicine 2, University Hospital St. Pölten, St. Pölten, Austria
- Cesare Hassan
  - Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy
  - Endoscopy Unit, Humanitas Clinical and Research Center IRCCS, Rozzano, Italy
- Daizen Hirata
  - Gastrointestinal Center, Sano Hospital, Hyogo, Japan
- Mineo Iwatate
  - Gastrointestinal Center, Sano Hospital, Hyogo, Japan
- Andreas Maieron
  - Karl Landsteiner University of Health Sciences, Krems, Austria
  - Department of Internal Medicine 2, University Hospital St. Pölten, St. Pölten, Austria
- Andrea Cherubini
  - Cosmo Intelligent Medical Devices, Dublin, Ireland
  - Milan Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
23
Li N, Yang J, Li X, Shi Y, Wang K. Accuracy of artificial intelligence-assisted endoscopy in the diagnosis of gastric intestinal metaplasia: A systematic review and meta-analysis. PLoS One 2024; 19:e0303421. [PMID: 38743709 PMCID: PMC11093381 DOI: 10.1371/journal.pone.0303421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2024] [Accepted: 04/25/2024] [Indexed: 05/16/2024] Open
Abstract
BACKGROUND AND AIMS Gastric intestinal metaplasia is a precancerous disease, and a timely diagnosis is essential to delay or halt cancer progression. Artificial intelligence (AI) has found widespread application in the field of disease diagnosis. This study aimed to conduct a comprehensive evaluation of AI's diagnostic accuracy in detecting gastric intestinal metaplasia in endoscopy, compare it to endoscopists' ability, and explore the main factors affecting AI's performance. METHODS The study followed the PRISMA-DTA guidelines, and the PubMed, Embase, Web of Science, Cochrane, and IEEE Xplore databases were searched to include relevant studies published by October 2023. We extracted the key features and experimental data of each study and combined the sensitivity and specificity metrics by meta-analysis. We then compared the diagnostic ability of the AI versus the endoscopists using the same test data. RESULTS Twelve studies with 11,173 patients were included, demonstrating AI models' efficacy in diagnosing gastric intestinal metaplasia. The meta-analysis yielded a pooled sensitivity of 94% (95% confidence interval: 0.92-0.96) and specificity of 93% (95% confidence interval: 0.89-0.95). The combined area under the receiver operating characteristics curve was 0.97. The results of meta-regression and subgroup analysis showed that factors such as study design, endoscopy type, number of training images, and algorithm had a significant effect on the diagnostic performance of AI. The AI exhibited a higher diagnostic capacity than endoscopists (sensitivity: 95% vs. 79%). CONCLUSIONS AI-aided diagnosis of gastric intestinal metaplasia using endoscopy showed high performance and clinical diagnostic value. However, further prospective studies are required to validate these findings.
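The pooled sensitivity and specificity above come from a diagnostic-test-accuracy meta-analysis; such analyses normally fit a bivariate random-effects model. Purely as an illustration of the underlying metric, a naive fixed pooling of per-study true-positive/false-negative counts with a normal-approximation confidence interval (all counts below are hypothetical, not from the included studies):

```python
import math

def sensitivity_ci(tp, fn, z=1.96):
    """Point estimate and 95% normal-approximation CI for sensitivity."""
    n = tp + fn
    p = tp / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical per-study (TP, FN) counts for AI detection of intestinal metaplasia.
studies = [(180, 12), (95, 6), (240, 14)]
tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
p, lo, hi = sensitivity_ci(tp, fn)
print(f"pooled sensitivity {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

This simple pooling ignores between-study heterogeneity, which is exactly what the bivariate model and the meta-regression in the study are designed to handle.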
Affiliation(s)
- Na Li
  - Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Jian Yang
  - Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Xiaodong Li
  - Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Yanting Shi
  - Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
- Kunhong Wang
  - Department of Gastroenterology, Zibo Central Hospital, Zibo, Shandong, China
24
Okumura T, Imai K, Misawa M, Kudo SE, Hotta K, Ito S, Kishida Y, Takada K, Kawata N, Maeda Y, Yoshida M, Yamamoto Y, Minamide T, Ishiwatari H, Sato J, Matsubayashi H, Ono H. Evaluating false-positive detection in a computer-aided detection system for colonoscopy. J Gastroenterol Hepatol 2024; 39:927-934. [PMID: 38273460 DOI: 10.1111/jgh.16491] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Revised: 12/21/2023] [Accepted: 01/03/2024] [Indexed: 01/27/2024]
Abstract
BACKGROUND AND AIM Computer-aided detection (CADe) systems can efficiently detect polyps during colonoscopy. However, false-positive (FP) activation is a major limitation of CADe. We aimed to compare the rate and causes of FP using CADe before and after an update designed to reduce FP. METHODS We analyzed CADe-assisted colonoscopy videos recorded between July 2022 and October 2022. The number and causes of FPs and excessive time spent by the endoscopist on FP (ET) were compared pre- and post-update using 1:1 propensity score matching. RESULTS During the study period, 191 colonoscopy videos (94 and 97 in the pre- and post-update groups, respectively) were recorded. Propensity score matching resulted in 146 videos (73 in each group). The mean number of FPs and median ET per colonoscopy were significantly lower in the post-update group than those in the pre-update group (4.2 ± 3.7 vs 18.1 ± 11.1; P < 0.001 and 0 vs 16 s; P < 0.001, respectively). Mucosal tags, bubbles, and folds had the strongest association with decreased FP post-update (pre-update vs post-update: 4.3 ± 3.6 vs 0.4 ± 0.8, 0.32 ± 0.70 vs 0.04 ± 0.20, and 8.6 ± 6.7 vs 1.6 ± 1.7, respectively). There was no significant decrease in the true positive rate (post-update vs pre-update: 95.0% vs 99.2%; P = 0.09) or the adenoma detection rate (post-update vs pre-update: 52.1% vs 49.3%; P = 0.87). CONCLUSIONS The updated CADe can reduce FP without impairing polyp detection. A reduction in FP may help relieve the burden on endoscopists.
Affiliation(s)
- Taishi Okumura
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Kenichiro Imai
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Masashi Misawa
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Shin-Ei Kudo
  - Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Kinichi Hotta
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Sayo Ito
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Kazunori Takada
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Noboru Kawata
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Yuki Maeda
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Masao Yoshida
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Yoichi Yamamoto
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Junya Sato
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
- Hiroyuki Ono
  - Division of Endoscopy, Shizuoka Cancer Center, Shizuoka, Japan
25
Kobayashi R, Yoshida N, Tomita Y, Hashimoto H, Inoue K, Hirose R, Dohi O, Inada Y, Murakami T, Morimoto Y, Zhu X, Itoh Y. Detailed Superiority of the CAD EYE Artificial Intelligence System over Endoscopists for Lesion Detection and Characterization Using Unique Movie Sets. J Anus Rectum Colon 2024; 8:61-69. [PMID: 38689788 PMCID: PMC11056537 DOI: 10.23922/jarc.2023-041] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/17/2023] [Accepted: 11/22/2023] [Indexed: 05/02/2024] Open
Abstract
Objectives The detailed superiority of CAD EYE (Fujifilm, Tokyo, Japan), an artificial intelligence system for polyp detection/diagnosis, over endoscopists has not been well examined. We examined endoscopists' ability using movie sets of colorectal lesions that CAD EYE had detected and diagnosed accurately. Methods Consecutive lesions of ≤10 mm were examined live by CAD EYE from March to June 2022 in our institution. Short unique movie sets of each lesion with and without CAD EYE were recorded simultaneously using two recorders, for detection under white light imaging (WLI) and linked color imaging (LCI) and for diagnosis under blue laser/light imaging (BLI). Excluding inappropriate movies, 100 lesions detected and diagnosed accurately with CAD EYE were evaluated. Movies without CAD EYE were evaluated first by three trainees and three experts; subsequently, movies with CAD EYE were examined. The rates of accurate detection and diagnosis were evaluated for both movie sets. Results Among 100 lesions (mean size: 4.7±2.6 mm; 67 neoplastic/33 hyperplastic), mean accurate detection rates of movies without/with CAD EYE were 78.7%/96.7% under WLI (p<0.01) and 91.3%/97.3% under LCI (p<0.01) for trainees, and 85.3%/99.0% under WLI (p<0.01) and 92.6%/99.3% under LCI (p<0.01) for experts. Mean accurate diagnosis rates of movies without/with CAD EYE under BLI were 85.3%/100% for trainees (p<0.01) and 92.3%/100% for experts (p<0.01), respectively. The significant risk factors for lesions missed by trainees were right-sided location, hyperplastic histology, non-reddish color, location in a corner, halation, and inadequate bowel preparation. Conclusions Unique movie sets with and without CAD EYE could suggest its efficacy for lesion detection/diagnosis.
Affiliation(s)
- Reo Kobayashi
  - Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Naohisa Yoshida
  - Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yuri Tomita
  - Department of Gastroenterology, Kosekai Takeda Hospital, Kyoto, Japan
- Hikaru Hashimoto
  - Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ken Inoue
  - Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ryohei Hirose
  - Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Osamu Dohi
  - Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yutaka Inada
  - Department of Gastroenterology, Kyoto First Red Cross Hospital, Kyoto, Japan
- Takaaki Murakami
  - Department of Gastroenterology, Aiseikai Yamashina Hospital, Kyoto, Japan
- Yasutaka Morimoto
  - Department of Gastroenterology, Kyoto Saiseikai Hospital, Kyoto, Japan
- Xin Zhu
  - Graduate School of Computer Science and Engineering, The University of Aizu, Fukushima, Japan
- Yoshito Itoh
  - Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
26
Carmichael J, Costanza E, Blandford A, Struyven R, Keane PA, Balaskas K. Diagnostic decisions of specialist optometrists exposed to ambiguous deep-learning outputs. Sci Rep 2024; 14:6775. [PMID: 38514657 PMCID: PMC10958016 DOI: 10.1038/s41598-024-55410-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Accepted: 02/23/2024] [Indexed: 03/23/2024] Open
Abstract
Artificial intelligence (AI) has great potential in ophthalmology. We investigated how ambiguous outputs from an AI diagnostic support system (AI-DSS) affected diagnostic responses from optometrists when assessing cases of suspected retinal disease. Thirty optometrists (15 more experienced, 15 less) assessed 30 clinical cases. For ten, participants saw an optical coherence tomography (OCT) scan, basic clinical information and retinal photography ('no AI'). For another ten, they were also given AI-generated OCT-based probabilistic diagnoses ('AI diagnosis'); and for ten, both AI-diagnosis and AI-generated OCT segmentations ('AI diagnosis + segmentation') were provided. Cases were matched across the three types of presentation and were selected to include 40% ambiguous and 20% incorrect AI outputs. Optometrist diagnostic agreement with the predefined reference standard was lowest for 'AI diagnosis + segmentation' (204/300, 68%) compared to 'AI diagnosis' (224/300, 75%, p = 0.010) and 'no AI' (242/300, 81%, p < 0.001). Agreement with AI diagnosis consistent with the reference standard decreased (174/210 vs 199/210, p = 0.003), but participants trusted the AI more (p = 0.029) with segmentations. Practitioner experience did not affect diagnostic responses (p = 0.24). More experienced participants were more confident (p = 0.012) and trusted the AI less (p = 0.038). Our findings also highlight issues around reference standard definition.
Affiliation(s)
- Josie Carmichael
  - University College London Interaction Centre (UCLIC), UCL, London, UK
  - Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
- Enrico Costanza
  - University College London Interaction Centre (UCLIC), UCL, London, UK
- Ann Blandford
  - University College London Interaction Centre (UCLIC), UCL, London, UK
- Robbert Struyven
  - Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
- Pearse A Keane
  - Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
- Konstantinos Balaskas
  - Institute of Ophthalmology, NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL, London, UK
27
Wang H, Hu T, Zhang Y, Zhang H, Qi Y, Wang L, Ma J, Du M. Unveiling camouflaged and partially occluded colorectal polyps: Introducing CPSNet for accurate colon polyp segmentation. Comput Biol Med 2024; 171:108186. [PMID: 38394804 DOI: 10.1016/j.compbiomed.2024.108186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Revised: 02/02/2024] [Accepted: 02/18/2024] [Indexed: 02/25/2024]
Abstract
BACKGROUND Segmenting colorectal polyps presents a significant challenge due to the diverse variations in their size, shape, texture, and intricate backgrounds. Particularly demanding are the so-called "camouflaged" polyps, which are partially concealed by surrounding tissues or fluids, adding complexity to their detection. METHODS We present CPSNet, an innovative model designed for camouflaged polyp segmentation. CPSNet incorporates three key modules: the Deep Multi-Scale-Feature Fusion Module, the Camouflaged Object Detection Module, and the Multi-Scale Feature Enhancement Module. These modules work collaboratively to improve the segmentation process, enhancing both robustness and accuracy. RESULTS Our experiments confirm the effectiveness of CPSNet. When compared to state-of-the-art methods in colon polyp segmentation, CPSNet consistently outperforms the competition. Particularly noteworthy is its performance on the ETIS-LaribPolypDB dataset, where CPSNet achieved a remarkable 2.3% increase in the Dice coefficient compared to the Polyp-PVT model. CONCLUSION In summary, CPSNet marks a significant advancement in the field of colorectal polyp segmentation. Its innovative approach, encompassing multi-scale feature fusion, camouflaged object detection, and feature enhancement, holds considerable promise for clinical applications.
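The CPSNet result above is reported as a Dice-coefficient gain over Polyp-PVT, the standard overlap metric in polyp segmentation. A minimal definition on binary masks (the tiny masks below are illustrative only):

```python
def dice(pred, target):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks count as perfect agreement.
    return 2 * inter / total if total else 1.0

# Prediction overlaps 3 of the 4 ground-truth foreground pixels.
pred = [1, 1, 1, 0, 1, 0]
gt = [1, 1, 1, 1, 0, 0]
print(dice(pred, gt))  # 2*3 / (4 + 4) = 0.75
```

On real data the masks are 2-D arrays flattened per image, and the per-image Dice values are averaged over the test set.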
Affiliation(s)
- Huafeng Wang
  - School of Information Technology, North China University of Technology, Beijing 100041, China
- Tianyu Hu
  - School of Information Technology, North China University of Technology, Beijing 100041, China
- Yanan Zhang
  - School of Information Technology, North China University of Technology, Beijing 100041, China
- Haodu Zhang
  - School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510335, China
- Yong Qi
  - School of Information Technology, North China University of Technology, Beijing 100041, China
- Longzhen Wang
  - Department of Gastroenterology, Second People's Hospital, Changzhi, Shanxi 046000, China
- Jianhua Ma
  - School of Biomedical Engineering, Southern Medical University, Guangzhou 510335, China
- Minghua Du
  - Department of Emergency, PLA General Hospital, Beijing 100853, China
28
Mozaffari J, Amirkhani A, Shokouhi SB. ColonGen: an efficient polyp segmentation system for generalization improvement using a new comprehensive dataset. Phys Eng Sci Med 2024; 47:309-325. [PMID: 38224384 DOI: 10.1007/s13246-023-01368-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Accepted: 12/06/2023] [Indexed: 01/16/2024]
Abstract
Colorectal cancer (CRC) is one of the most common causes of cancer-related deaths. While polyp detection is important for diagnosing CRC, high miss rates for polyps have been reported during colonoscopy. Most deep learning methods extract features from images using convolutional neural networks (CNNs). In recent years, vision transformer (ViT) models have been employed for image processing and have been successful in image segmentation. It is possible to improve image processing by using transformer models that can extract spatial location information, and CNNs that are capable of aggregating local information. Despite this, recent research shows limited effectiveness in increasing data diversity and generalization accuracy. This paper investigates the generalization proficiency of polyp image segmentation based on transformer architecture and proposes a novel approach using two different ViT architectures. This allows the model to learn representations from different perspectives, which can then be combined to create a richer feature representation. Additionally, a more universal and comprehensive dataset has been derived from the datasets presented in the related research, which can be used for improving generalizations. We first evaluated the generalization of our proposed model using three distinct training-testing scenarios. Our experimental results demonstrate that our ColonGen-V1 outperforms other state-of-the-art methods in all scenarios. As a next step, we used the comprehensive dataset for improving the performance of the model against in- and out-of-domain data. The results show that our ColonGen-V2 outperforms state-of-the-art studies by 5.1%, 1.3%, and 1.1% in ETIS-Larib, Kvasir-Seg, and CVC-ColonDB datasets, respectively. The inclusive dataset and the model introduced in this paper are available to the public through this link: https://github.com/javadmozaffari/Polyp_segmentation .
Affiliation(s)
- Javad Mozaffari
  - School of Electrical Engineering, Iran University of Science and Technology, Tehran, 16846-13114, Iran
- Abdollah Amirkhani
  - School of Automotive Engineering, Iran University of Science and Technology, Tehran, 16846-13114, Iran
- Shahriar B Shokouhi
  - School of Electrical Engineering, Iran University of Science and Technology, Tehran, 16846-13114, Iran
29
Khan ZF, Ramzan M, Raza M, Khan MA, Alasiry A, Marzougui M, Shin J. Real-Time Polyp Detection From Endoscopic Images Using YOLOv8 With YOLO-Score Metrics for Enhanced Suitability Assessment. IEEE ACCESS 2024; 12:176346-176362. [DOI: 10.1109/access.2024.3505619] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/12/2025]
Affiliation(s)
- Zahid Farooq Khan
  - Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Ramzan
  - Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Mudassar Raza
  - Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Attique Khan
  - Department of AI, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia
- Areej Alasiry
  - College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Mehrez Marzougui
  - College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Jungpil Shin
  - School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Japan
30
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells uncontrollably split and damage the body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNN) and medical imaging techniques. It also includes a brief discussion about deep learning based on state-of-the-art cancer detection methods, their outcomes, and the possible medical imaging data used. Eventually, the description of the dataset used for cancer detection, the limitations of the existing solutions, future trends, and challenges in this domain are discussed. The utmost goal of this paper is to provide a piece of comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
  - School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak
  - Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
  - Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
  - Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
  - School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
31
Zhu S, Gao J, Liu L, Yin M, Lin J, Xu C, Xu C, Zhu J. Public Imaging Datasets of Gastrointestinal Endoscopy for Artificial Intelligence: a Review. J Digit Imaging 2023; 36:2578-2601. [PMID: 37735308 PMCID: PMC10584770 DOI: 10.1007/s10278-023-00844-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 05/03/2023] [Accepted: 05/03/2023] [Indexed: 09/23/2023] Open
Abstract
With the advances in endoscopic technologies and artificial intelligence, a large number of endoscopic imaging datasets have been made public to researchers around the world. This study aims to review and introduce these datasets. An extensive literature search was conducted to identify appropriate datasets in PubMed, and other targeted searches were conducted in GitHub, Kaggle, and Simula to identify datasets directly. We provided a brief introduction to each dataset and evaluated the characteristics of the datasets included. Moreover, two national datasets in progress were discussed. A total of 40 datasets of endoscopic images were included, of which 34 were accessible for use. Basic and detailed information on each dataset was reported. Of all the datasets, 16 focus on polyps, and 6 focus on small bowel lesions. Most datasets (n = 16) were constructed by colonoscopy only, followed by normal gastrointestinal endoscopy and capsule endoscopy (n = 9). This review may facilitate the usage of public dataset resources in endoscopic research.
Affiliation(s)
- Shiqi Zhu
  - Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
  - Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Jingwen Gao
  - Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
  - Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Lu Liu
  - Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
  - Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Minyue Yin
  - Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
  - Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Jiaxi Lin
  - Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
  - Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Chang Xu
  - Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
  - Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Chunfang Xu
  - Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
  - Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
- Jinzhou Zhu
  - Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China
  - Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China
32
Lazo JF, Rosa B, Catellani M, Fontana M, Mistretta FA, Musi G, de Cobelli O, de Mathelin M, De Momi E. Semi-Supervised Bladder Tissue Classification in Multi-Domain Endoscopic Images. IEEE Trans Biomed Eng 2023; 70:2822-2833. [PMID: 37037233 DOI: 10.1109/tbme.2023.3265679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/12/2023]
Abstract
OBJECTIVE Accurate visual classification of bladder tissue during Trans-Urethral Resection of Bladder Tumor (TURBT) procedures is essential to improve early cancer diagnosis and treatment. During TURBT interventions, White Light Imaging (WLI) and Narrow Band Imaging (NBI) techniques are used for lesion detection. Each imaging technique provides diverse visual information that allows clinicians to identify and classify cancerous lesions. Computer vision methods that use both imaging techniques could improve endoscopic diagnosis. We address the challenge of tissue classification when annotations are available only in one domain, in our case WLI, and the endoscopic images correspond to an unpaired dataset, i.e. there is no exact equivalent for every image in both NBI and WLI domains. METHOD We propose a semi-supervised Generative Adversarial Network (GAN)-based method composed of three main components: a teacher network trained on the labeled WLI data; a cycle-consistency GAN to perform unpaired image-to-image translation, and a multi-input student network. To ensure the quality of the synthetic images generated by the proposed GAN, we perform a detailed quantitative and qualitative analysis with the help of specialists. CONCLUSION The overall average classification accuracy, precision, and recall obtained with the proposed method for tissue classification are 0.90, 0.88, and 0.89 respectively, while the same metrics obtained in the unlabeled domain (NBI) are 0.92, 0.64, and 0.94 respectively. The quality of the generated images is reliable enough to deceive specialists. SIGNIFICANCE This study shows the potential of using semi-supervised GAN-based bladder tissue classification when annotations are limited in multi-domain data.
33
Bian H, Jiang M, Qian J. The investigation of constraints in implementing robust AI colorectal polyp detection for sustainable healthcare system. PLoS One 2023; 18:e0288376. [PMID: 37437026 DOI: 10.1371/journal.pone.0288376] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 06/24/2023] [Indexed: 07/14/2023] Open
Abstract
Colorectal cancer (CRC) is one of the significant threats to public health and the sustainable healthcare system during urbanization. As the primary method of screening, colonoscopy can effectively detect polyps before they evolve into cancerous growths. However, the current visual inspection by endoscopists is insufficient in providing consistently reliable polyp detection for colonoscopy videos and images in CRC screening. Artificial intelligence (AI)-based object detection is considered a potent solution to overcome visual inspection limitations and mitigate human errors in colonoscopy. This study implemented a YOLOv5 object detection model to investigate the performance of mainstream one-stage approaches in colorectal polyp detection. Meanwhile, a variety of training datasets and model structure configurations are employed to identify the determinative factors in practical applications. The designed experiments show that the model yields acceptable results assisted by transfer learning, and highlight that the primary constraint in implementing deep learning polyp detection comes from the scarcity of training data. The model performance was improved by 15.6% in terms of average precision (AP) when the original training dataset was expanded. Furthermore, the experimental results were analysed from a clinical perspective to identify potential causes of false positives. In addition, a quality management framework is proposed for future dataset preparation and model development in AI-driven polyp detection tasks for smart healthcare solutions.
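The 15.6% improvement above is reported in average precision (AP), the standard single-number summary of a detector's precision-recall trade-off. One common formulation averages the precision at each true-positive detection over the number of ground-truth objects; the detections below are hypothetical, not from the study:

```python
def average_precision(detections, n_gt):
    """AP as the mean of precision values at each true-positive detection,
    with detections ranked by descending confidence; n_gt is the number of
    ground-truth objects (missed objects therefore lower the score)."""
    detections = sorted(detections, key=lambda d: -d[0])
    tp = 0
    precisions = []
    for rank, (conf, is_tp) in enumerate(detections, start=1):
        if is_tp:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / n_gt if n_gt else 0.0

# Hypothetical detections: (confidence, matched a ground-truth polyp?)
dets = [(0.95, True), (0.90, False), (0.80, True), (0.60, True), (0.50, False)]
print(average_precision(dets, n_gt=4))
```

Benchmark suites such as COCO additionally average AP over several IoU matching thresholds; the sketch above fixes a single threshold implicitly via the `is_tp` flags.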
Affiliation(s)
- Haitao Bian
  - College of Safety Science and Engineering, Nanjing Tech University, Nanjing, Jiangsu, China
- Min Jiang
  - KLA Corporation, Milpitas, California, United States of America
- Jingjing Qian
  - Department of Gastroenterology, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, Jiangsu, China
34
Kader R, Cid‐Mejias A, Brandao P, Islam S, Hebbar S, Puyal JG, Ahmad OF, Hussein M, Toth D, Mountney P, Seward E, Vega R, Stoyanov D, Lovat LB. Polyp characterization using deep learning and a publicly accessible polyp video database. Dig Endosc 2023; 35:645-655. [PMID: 36527309 PMCID: PMC10570984 DOI: 10.1111/den.14500] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Accepted: 12/13/2022] [Indexed: 01/20/2023]
Abstract
OBJECTIVES Convolutional neural networks (CNN) for computer-aided diagnosis of polyps are often trained using high-quality still images in a single chromoendoscopy imaging modality with sessile serrated lesions (SSLs) often excluded. This study developed a CNN from videos to classify polyps as adenomatous or nonadenomatous using standard narrow-band imaging (NBI) and NBI-near focus (NBI-NF) and created a publicly accessible polyp video database. METHODS We trained a CNN with 16,832 high and moderate quality frames from 229 polyp videos (56 SSLs). It was evaluated with 222 polyp videos (36 SSLs) across two test-sets. Test-set I consists of 14,320 frames (157 polyps, 111 diminutive). Test-set II, which is publicly accessible, 3317 video frames (65 polyps, 41 diminutive), which was benchmarked with three expert and three nonexpert endoscopists. RESULTS Sensitivity for adenoma characterization was 91.6% in test-set I and 89.7% in test-set II. Specificity was 91.9% and 88.5%. Sensitivity for diminutive polyps was 89.9% and 87.5%; specificity 90.5% and 88.2%. In NBI-NF, sensitivity was 89.4% and 89.5%, with a specificity of 94.7% and 83.3%. In NBI, sensitivity was 85.3% and 91.7%, with a specificity of 87.5% and 90.0%, respectively. The CNN achieved preservation and incorporation of valuable endoscopic innovations (PIVI)-1 and PIVI-2 thresholds for each test-set. In the benchmarking of test-set II, the CNN was significantly more accurate than nonexperts (13.8% difference [95% confidence interval 3.2-23.6], P = 0.01) with no significant difference with experts. CONCLUSIONS A single CNN can differentiate adenomas from SSLs and hyperplastic polyps in both NBI and NBI-NF. A publicly accessible NBI polyp video database was created and benchmarked.
Affiliation(s)
- Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Shahraz Islam
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Juana González‑Bueno Puyal
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Odin Vision Ltd, London, UK
- Omer F. Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Mohamed Hussein
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Ed Seward
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Roser Vega
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B. Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Gastrointestinal Services, University College London Hospital, London, UK
35
Mahajan N, Holzwanger E, Brown JG, Berzin TM. Deploying automated machine learning for computer vision projects: a brief introduction for endoscopists. VideoGIE 2023; 8:249-251. [PMID: 37303708 PMCID: PMC10251677 DOI: 10.1016/j.vgie.2023.02.012] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Video 1. Deploying automated machine learning for computer vision projects: a brief introduction for endoscopists.
Affiliation(s)
- Neal Mahajan
- Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
- Erik Holzwanger
- Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
- Jeremy Glissen Brown
- Division of Gastroenterology, Duke University Medical Center, Durham, North Carolina
- Tyler M Berzin
- Center for Advanced Endoscopy, Division of Gastroenterology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
36
Krenzer A, Heil S, Fitting D, Matti S, Zoller WG, Hann A, Puppe F. Automated classification of polyps using deep learning architectures and few-shot learning. BMC Med Imaging 2023; 23:59. [PMID: 37081495 PMCID: PMC10120204 DOI: 10.1186/s12880-023-01007-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 03/24/2023] [Indexed: 04/22/2023] Open
Abstract
BACKGROUND Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps carry a risk of becoming cancerous; therefore, polyps are classified using different classification systems, and further treatment and procedures are based on that classification. Nevertheless, classification is not easy. We therefore propose two novel automated classification systems that assist gastroenterologists in classifying polyps based on the NICE and Paris classifications. METHODS We built two classification systems: one classifies polyps based on their shape (Paris), the other based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: first, detecting and cropping the polyp on the image, and second, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on the deep metric learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples to account for the scarcity of NICE-annotated images in our database. RESULTS For the Paris classification, we achieve an accuracy of 89.35%, surpassing all previous work and establishing a new state-of-the-art and baseline accuracy on a public data set. For the NICE classification, we achieve a competitive accuracy of 81.13%, thereby demonstrating the viability of the few-shot learning paradigm for polyp classification in data-scarce environments. Additionally, we present several ablations of the algorithms and further elaborate on the explainability of the system with heat maps visualizing neural activations. CONCLUSION Overall, we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.
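The few-shot idea above (an embedding space in which a handful of labeled examples suffice) is commonly realized as nearest-centroid classification over class prototypes. A minimal sketch under that assumption — the toy 2-D "embeddings" and labels here are invented for illustration, not the paper's model:

```python
import numpy as np

def nearest_centroid_classify(support_emb, support_labels, query_emb):
    """Average each class's support embeddings into a centroid (prototype),
    then assign the query to the nearest centroid by Euclidean distance."""
    classes = sorted(set(support_labels))
    centroids = np.stack([
        np.mean([e for e, y in zip(support_emb, support_labels) if y == c], axis=0)
        for c in classes
    ])
    dists = np.linalg.norm(centroids - query_emb, axis=1)
    return classes[int(np.argmin(dists))]

# Toy 2-D embeddings standing in for a learned polyp embedding space:
support_emb = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
support_labels = ["NICE1", "NICE1", "NICE2", "NICE2"]
pred = nearest_centroid_classify(support_emb, support_labels, np.array([0.2, 0.5]))
```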
Affiliation(s)
- Adrian Krenzer
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Stefan Heil
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
- Daniel Fitting
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Safa Matti
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
- Wolfram G Zoller
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174, Stuttgart, Germany
- Alexander Hann
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Frank Puppe
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
37
Mori Y, Wang P, Løberg M, Misawa M, Repici A, Spadaccini M, Correale L, Antonelli G, Yu H, Gong D, Ishiyama M, Kudo SE, Kamba S, Sumiyama K, Saito Y, Nishino H, Liu P, Glissen Brown JR, Mansour NM, Gross SA, Kalager M, Bretthauer M, Rex DK, Sharma P, Berzin TM, Hassan C. Impact of Artificial Intelligence on Colonoscopy Surveillance After Polyp Removal: A Pooled Analysis of Randomized Trials. Clin Gastroenterol Hepatol 2023; 21:949-959.e2. [PMID: 36038128 DOI: 10.1016/j.cgh.2022.08.022] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/26/2022] [Revised: 08/08/2022] [Accepted: 08/11/2022] [Indexed: 02/07/2023]
Abstract
BACKGROUND AND AIMS Artificial intelligence (AI) tools aimed at improving polyp detection have been shown to increase the adenoma detection rate during colonoscopy. However, it is unknown how increased polyp detection rates by AI affect the burden of patient surveillance after polyp removal. METHODS We conducted a pooled analysis of 9 randomized controlled trials (5 in China, 2 in Italy, 1 in Japan, and 1 in the United States) comparing colonoscopy with or without AI detection aids. The primary outcome was the proportion of patients recommended to undergo intensive surveillance (ie, 3-year interval). We analyzed intervals for AI and non-AI colonoscopies for the U.S. and European recommendations separately. We estimated proportions by calculating relative risks using the Mantel-Haenszel method. RESULTS A total of 5796 patients (51% male, mean 53 years of age) were included; 2894 underwent AI-assisted colonoscopy and 2902 non-AI colonoscopy. When following U.S. guidelines, the proportion of patients recommended intensive surveillance increased from 8.4% (95% CI, 7.4%-9.5%) in the non-AI group to 11.3% (95% CI, 10.2%-12.6%) in the AI group (absolute difference, 2.9% [95% CI, 1.4%-4.4%]; risk ratio, 1.35 [95% CI, 1.16-1.57]). When following European guidelines, it increased from 6.1% (95% CI, 5.3%-7.0%) to 7.4% (95% CI, 6.5%-8.4%) (absolute difference, 1.3% [95% CI, 0.01%-2.6%]; risk ratio, 1.22 [95% CI, 1.01-1.47]). CONCLUSIONS The use of AI during colonoscopy increased the proportion of patients requiring intensive colonoscopy surveillance by approximately 35% in the United States and 20% in Europe (absolute increases of 2.9% and 1.3%, respectively). While this may contribute to improved cancer prevention, it significantly adds patient burden and healthcare costs.
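The pooled analysis above reports risk ratios with log-normal confidence intervals; the Mantel-Haenszel method weights such 2x2 tables across the nine trials. A simplified single-table sketch (event counts back-calculated from the reported proportions, 11.3% of 2894 vs. 8.4% of 2902, so approximate; the actual pooling is stratified by trial):

```python
import math

def risk_ratio(a, n1, c, n2, z=1.96):
    """Risk ratio for a single 2x2 table: a/n1 events in the AI arm,
    c/n2 in the control arm, with a log-normal 95% CI."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of log(RR)
    lo, hi = rr * math.exp(-z * se), rr * math.exp(z * se)
    return rr, lo, hi

# Approximate counts reconstructed from the reported U.S.-guideline proportions:
rr, lo, hi = risk_ratio(a=327, n1=2894, c=244, n2=2902)
```

The single-table result lands close to the reported pooled estimate (risk ratio 1.35, 95% CI 1.16-1.57), as expected when trial-level event rates are similar.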
Affiliation(s)
- Yuichi Mori
- Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Pu Wang
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Sichuan, China
- Magnus Løberg
- Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Alessandro Repici
- Endoscopy Unit, Humanitas Clinical and Research Center-IRCCS, Rozzano, Italy; Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy
- Marco Spadaccini
- Endoscopy Unit, Humanitas Clinical and Research Center-IRCCS, Rozzano, Italy
- Loredana Correale
- Endoscopy Unit, Humanitas Clinical and Research Center-IRCCS, Rozzano, Italy
- Giulio Antonelli
- Gastroenterology and Digestive Endoscopy Unit, Ospedale dei Castelli Hospital, Ariccia, Rome, Italy; Department of Anatomical, Histological, Forensic Medicine and Orthopedics Sciences, Sapienza University of Rome, Italy
- Honggang Yu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Wuhan University Renmin Hospital, Wuhan, China
- Dexin Gong
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Wuhan University Renmin Hospital, Wuhan, China
- Misaki Ishiyama
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Shunsuke Kamba
- Department of Endoscopy, the Jikei University School of Medicine, Tokyo, Japan
- Kazuki Sumiyama
- Department of Endoscopy, the Jikei University School of Medicine, Tokyo, Japan
- Yutaka Saito
- Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Haruo Nishino
- Coloproctology Center, Matsushima Hospital, Yokohama, Japan
- Peixi Liu
- Department of Gastroenterology, Sichuan Academy of Medical Sciences & Sichuan Provincial People's Hospital, Sichuan, China
- Nabil M Mansour
- Section of Gastroenterology and Hepatology, Baylor College of Medicine, Houston, Texas
- Seth A Gross
- Division of Gastroenterology and Hepatology, NYU Langone Health, New York, New York
- Mette Kalager
- Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Michael Bretthauer
- Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Department of Transplantation Medicine, Oslo University Hospital, Oslo, Norway
- Douglas K Rex
- Division of Gastroenterology/Hepatology, Indiana University School of Medicine, Indianapolis, Indiana
- Prateek Sharma
- Department of Gastroenterology and Hepatology, Kansas City VA Medical Center and University of Kansas School of Medicine, Kansas City, Kansas
- Tyler M Berzin
- Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
- Cesare Hassan
- Endoscopy Unit, Humanitas Clinical and Research Center-IRCCS, Rozzano, Italy; Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy
38
Dhaliwal J, Walsh CM. Artificial Intelligence in Pediatric Endoscopy: Current Status and Future Applications. Gastrointest Endosc Clin N Am 2023; 33:291-308. [PMID: 36948747 DOI: 10.1016/j.giec.2022.12.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/24/2023]
Abstract
The application of artificial intelligence (AI) has great promise for improving pediatric endoscopy. The majority of preclinical studies have been undertaken in adults, with the greatest progress being made in the context of colorectal cancer screening and surveillance. This development has only been possible with advances in deep learning, like the convolutional neural network model, which has enabled real-time detection of pathology. Comparatively, the majority of deep learning systems developed in inflammatory bowel disease have focused on predicting disease severity and were developed using still images rather than videos. The application of AI to pediatric endoscopy is in its infancy, thus providing an opportunity to develop clinically meaningful and fair systems that do not perpetuate societal biases. In this review, we provide an overview of AI, summarize the advances of AI in endoscopy, and describe its potential application to pediatric endoscopic practice and education.
Affiliation(s)
- Jasbir Dhaliwal
- Division of Pediatric Gastroenterology, Hepatology and Nutrition, Cincinnati Children's Hospital Medical Center, University of Cincinnati, OH, USA
- Catharine M Walsh
- Division of Gastroenterology, Hepatology, and Nutrition, and the SickKids Research and Learning Institutes, The Hospital for Sick Children, Toronto, ON, Canada; Department of Paediatrics and The Wilson Centre, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
39
Kamba S, Sumiyama K. Benchmark test for the characterization of colorectal polyps using a computer-aided diagnosis with a publicly accessible database. Dig Endosc 2023. [PMID: 36944582 DOI: 10.1111/den.14540] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Accepted: 02/20/2023] [Indexed: 03/23/2023]
Affiliation(s)
- Shunsuke Kamba
- Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
- Developmental Endoscopy Unit, Gastroenterology and Hepatology, Mayo Clinic, Rochester, USA
- Kazuki Sumiyama
- Department of Endoscopy, The Jikei University School of Medicine, Tokyo, Japan
40
Jiang K, Itoh H, Oda M, Okumura T, Mori Y, Misawa M, Hayashi T, Kudo SE, Mori K. Gaussian affinity and GIoU-based loss for perforation detection and localization from colonoscopy videos. Int J Comput Assist Radiol Surg 2023; 18:795-805. [PMID: 36913126 DOI: 10.1007/s11548-022-02821-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 12/20/2022] [Indexed: 03/14/2023]
Abstract
PURPOSE Endoscopic submucosal dissection (ESD) is a minimally invasive treatment for early gastric cancer. However, perforations may occur during ESD and cause peritonitis, so there is a potential demand for a computer-aided diagnosis system to support physicians. This paper presents a method to detect and localize perforations in colonoscopy videos so that they are neither overlooked nor enlarged by ESD physicians. METHOD We propose a training method for YOLOv3 whose objective functional combines the generalized intersection over union (GIoU) loss and a Gaussian affinity loss, enabling precise perforation detection and localization in colonoscopic images. RESULTS To qualitatively and quantitatively evaluate the presented method, we created a dataset from 49 ESD videos. On this dataset the method achieved state-of-the-art perforation detection and localization performance: 0.881 accuracy, 0.869 AUC, and 0.879 mean average precision. Furthermore, it detects a newly appeared perforation within 0.1 s. CONCLUSIONS The experimental results demonstrate that YOLOv3 trained with the presented loss functional is highly effective for perforation detection and localization. The method can quickly and precisely alert physicians to perforations occurring during ESD, and we believe a future CAD system for clinical applications can be constructed with it.
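The GIoU loss named above extends IoU with a penalty based on the smallest enclosing box, so the loss stays informative even when predicted and ground-truth boxes do not overlap. A minimal sketch of the GIoU term itself (not the paper's full loss functional):

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes given as (x1, y1, x2, y2).
    GIoU = IoU - (enclosing area - union) / enclosing area; it is 1 for
    identical boxes and negative for well-separated, non-overlapping ones."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    ex1, ey1 = min(ax1, bx1), min(ay1, by1)  # smallest enclosing box
    ex2, ey2 = max(ax2, bx2), max(ay2, by2)
    enclose = (ex2 - ex1) * (ey2 - ey1)
    return inter / union - (enclose - union) / enclose
```

In training, the box-regression loss is then typically `1 - giou(pred, target)`.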
Affiliation(s)
- Kai Jiang
- Information Strategy Office, Information and Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Hayato Itoh
- Information Strategy Office, Information and Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Masahiro Oda
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan; Information Strategy Office, Information and Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Taishi Okumura
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuzuki-ku, Yokohama, 224-8503, Japan; Clinical Effectiveness Research Group, University of Oslo, Gaustad Sykehus, Bygg 20, Sognsvannsveien 21, Oslo, 0372, Norway
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Takemasa Hayashi
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Kensaku Mori
- Information Strategy Office, Information and Communications, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan; Information Technology Center, Nagoya University, Nagoya, Japan; Research Center for Medical Bigdata, National Institute of Informatics, Tokyo, Japan
41
Nogueira-Rodríguez A, Glez-Peña D, Reboiro-Jato M, López-Fernández H. Negative Samples for Improving Object Detection-A Case Study in AI-Assisted Colonoscopy for Polyp Detection. Diagnostics (Basel) 2023; 13:966. [PMID: 36900110 PMCID: PMC10001273 DOI: 10.3390/diagnostics13050966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2023] [Accepted: 03/01/2023] [Indexed: 03/08/2023] Open
Abstract
Deep learning object-detection models are being successfully applied to develop computer-aided diagnosis systems for aiding polyp detection during colonoscopies. Here, we evidence the need to include negative samples for (i) reducing false positives during the polyp-finding phase, by including images with artifacts that may confuse the detection models (e.g., medical instruments, water jets, feces, blood, excessive proximity of the camera to the colon wall, blurred images, etc.) and that are usually absent from model development datasets, and (ii) correctly estimating a more realistic performance of the models. By retraining our previously developed YOLOv3-based detection model with a dataset that includes 15% additional not-polyp images containing a variety of artifacts, we were able to improve its F1 performance both on our internal test datasets (from an average F1 of 0.869 to 0.893), which now include such images, and on four public datasets that include not-polyp images (from an average F1 of 0.695 to 0.722).
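The F1 shifts reported above come down to how false positives on artifact frames erode precision. A minimal sketch with invented counts (purely illustrative, not the study's numbers):

```python
def f1(tp, fp, fn):
    """F1 = harmonic mean of precision (TP/(TP+FP)) and recall (TP/(TP+FN))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Adding not-polyp frames to the test set surfaces extra false positives,
# which lowers precision (and hence F1) unless the model is retrained on them:
before = f1(tp=90, fp=10, fn=10)  # evaluation without artifact frames
after = f1(tp=90, fp=30, fn=10)   # same detector, artifact frames included
```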
Affiliation(s)
- Alba Nogueira-Rodríguez
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Daniel Glez-Peña
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Miguel Reboiro-Jato
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
- Hugo López-Fernández
- CINBIO, Department of Computer Science, ESEI—Escuela Superior de Ingeniería Informática, Universidade de Vigo, 32004 Ourense, Spain
- SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36213 Vigo, Spain
42
Souaidi M, Lafraxo S, Kerkaou Z, El Ansari M, Koutti L. A Multiscale Polyp Detection Approach for GI Tract Images Based on Improved DenseNet and Single-Shot Multibox Detector. Diagnostics (Basel) 2023; 13:733. [PMID: 36832221 PMCID: PMC9955440 DOI: 10.3390/diagnostics13040733] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 02/07/2023] [Accepted: 02/09/2023] [Indexed: 02/17/2023] Open
Abstract
Small bowel polyps vary in color, shape, morphology, texture, and size, and detection is further complicated by artifacts, irregular polyp borders, and the low-illumination conditions inside the gastrointestinal (GI) tract. Recently, researchers have developed many highly accurate polyp detection models based on one-stage or two-stage object detectors for wireless capsule endoscopy (WCE) and colonoscopy images. However, their implementation requires high computational power and memory resources, sacrificing speed for improved precision. Although the single-shot multibox detector (SSD) has proven effective in many medical imaging applications, its weak detection of small polyp regions persists owing to the lack of complementary information between low- and high-level feature layers. Our aim is to consecutively reuse feature maps between layers of the original SSD network. In this paper, we propose an innovative SSD model built on a redesigned dense convolutional network (DenseNet) that emphasizes the interdependence of multiscale pyramidal feature maps, called DC-SSDNet (densely connected single-shot multibox detector). The original VGG-16 backbone of the SSD is replaced with a modified version of DenseNet, whose DenseNet-46 front stem is improved to extract highly typical characteristics and contextual information, strengthening the model's feature extraction ability. The DC-SSDNet architecture also compresses unnecessary convolution layers in each dense block to reduce model complexity. Experimental results show a remarkable improvement of the proposed DC-SSDNet in detecting small polyp regions, achieving an mAP of 93.96% and an F1-score of 90.7% while requiring less computational time.
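The dense connectivity that DC-SSDNet builds on has a simple channel-arithmetic signature: each layer consumes the concatenation of all earlier feature maps and contributes a fixed number of new channels (the growth rate). A toy NumPy sketch of that wiring, with random projections standing in for the conv+BN+ReLU of a real DenseNet (illustrative only, not the paper's architecture):

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Toy dense connectivity: layer i sees the concatenation of the input
    and all previous layer outputs, and emits `growth_rate` new channels."""
    features = [x]
    for _ in range(num_layers):
        concat = np.concatenate(features, axis=0)    # (C_total, H, W)
        w = rng.standard_normal((growth_rate, concat.shape[0]))
        new = np.tensordot(w, concat, axes=1)        # (growth_rate, H, W)
        features.append(new)
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
out = dense_block(np.zeros((16, 8, 8)), num_layers=4, growth_rate=12, rng=rng)
# Output channels grow linearly: 16 + 4 * 12 = 64
```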
Affiliation(s)
- Meryem Souaidi
- LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Samira Lafraxo
- LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Zakaria Kerkaou
- LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Mohamed El Ansari
- LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
- Informatics and Applications Laboratory, Computer Science Department, Faculty of Sciences, University of Moulay Ismail, Meknès 50070, Morocco
- Lahcen Koutti
- LABSIV, Computer Science, Faculty of Sciences, University Ibn Zohr, Agadir 80000, Morocco
43
Diaconu C, State M, Birligea M, Ifrim M, Bajdechi G, Georgescu T, Mateescu B, Voiosu T. The Role of Artificial Intelligence in Monitoring Inflammatory Bowel Disease-The Future Is Now. Diagnostics (Basel) 2023; 13:735. [PMID: 36832222 PMCID: PMC9954871 DOI: 10.3390/diagnostics13040735] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/02/2023] [Accepted: 02/02/2023] [Indexed: 02/17/2023] Open
Abstract
Crohn's disease and ulcerative colitis remain debilitating disorders, characterized by progressive bowel damage and possible lethal complications. The growing number of applications for artificial intelligence in gastrointestinal endoscopy has already shown great potential, especially in the field of neoplastic and pre-neoplastic lesion detection and characterization, and is currently under evaluation in the field of inflammatory bowel disease management. The application of artificial intelligence in inflammatory bowel diseases can range from genomic dataset analysis and risk prediction model construction to grading disease severity and assessing treatment response using machine learning. We aimed to assess the current and future role of artificial intelligence in assessing the key outcomes in inflammatory bowel disease patients: endoscopic activity, mucosal healing, response to treatment, and neoplasia surveillance.
Affiliation(s)
- Claudia Diaconu
- Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Monica State
- Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Internal Medicine Department, Carol Davila University of Medicine and Pharmacy, 50474 Bucharest, Romania
- Mihaela Birligea
- Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Madalina Ifrim
- Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Georgiana Bajdechi
- Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Teodora Georgescu
- Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Bogdan Mateescu
- Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Internal Medicine Department, Carol Davila University of Medicine and Pharmacy, 50474 Bucharest, Romania
- Theodor Voiosu
- Gastroenterology Department, Colentina Clinical Hospital, 020125 Bucharest, Romania
- Internal Medicine Department, Carol Davila University of Medicine and Pharmacy, 50474 Bucharest, Romania
44
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. [PMID: 36084720 DOI: 10.1016/j.gie.2022.08.043] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 08/24/2022] [Accepted: 08/30/2022] [Indexed: 01/28/2023]
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the available number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were generally well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases. CONCLUSIONS This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
45
González-Bueno Puyal J, Brandao P, Ahmad OF, Bhatia KK, Toth D, Kader R, Lovat L, Mountney P, Stoyanov D. Spatio-temporal classification for polyp diagnosis. Biomed Opt Express 2023; 14:593-607. [PMID: 36874484 PMCID: PMC9979670 DOI: 10.1364/boe.473446] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 11/25/2022] [Accepted: 12/06/2022] [Indexed: 06/18/2023]
Abstract
Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance can vary during a procedure, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, both showing an increase in performance and robustness during extensive experiments on internal and openly available benchmark datasets.
Affiliation(s)
- Juana González-Bueno Puyal
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Odin Vision, London W1W 7TY, UK
- Omer F. Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Laurence Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
46
Krenzer A, Banck M, Makowski K, Hekalo A, Fitting D, Troya J, Sudarevic B, Zoller WG, Hann A, Puppe F. A Real-Time Polyp-Detection System with Clinical Application in Colonoscopy Using Deep Convolutional Neural Networks. J Imaging 2023; 9:jimaging9020026. [PMID: 36826945 PMCID: PMC9967208 DOI: 10.3390/jimaging9020026] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 01/18/2023] [Accepted: 01/19/2023] [Indexed: 01/26/2023] Open
Abstract
Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. During this procedure, the gastroenterologist searches for polyps. However, there is a potential risk of polyps being missed by the gastroenterologist. Automated detection of polyps helps to assist the gastroenterologist during a colonoscopy. There are already publications examining the problem of polyp detection in the literature. Nevertheless, most of these systems are only used in the research context and are not implemented for clinical application. Therefore, we introduce the first fully open-source automated polyp-detection system that scores best on current benchmark data and is implemented ready for clinical application. To create the polyp-detection system (ENDOMIND-Advanced), we combined our own data, collected from different hospitals and practices in Germany, with open-source datasets to create a dataset with over 500,000 annotated images. ENDOMIND-Advanced leverages a post-processing technique based on video detection to work in real time with a stream of images. It is integrated into a prototype ready for application in clinical interventions. We achieve better performance than the best system in the literature, scoring an F1-score of 90.24% on the open-source CVC-VideoClinicDB benchmark.
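The benchmark figure above (F1 = 90.24%) is the standard harmonic mean of precision and recall. A minimal sketch of that computation; the precision/recall inputs below are illustrative placeholders, since the abstract reports only the final F1:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical values chosen only to land near the reported 90.24%;
# the paper does not report the underlying precision/recall here.
print(round(f1_score(0.92, 0.885), 4))
```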
Affiliation(s)
- Adrian Krenzer
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Michael Banck
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Kevin Makowski
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany
- Amar Hekalo
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany
- Daniel Fitting
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Joel Troya
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Boban Sudarevic
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174 Stuttgart, Germany
- Wolfgang G Zoller
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174 Stuttgart, Germany
- Alexander Hann
- Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080 Würzburg, Germany
- Frank Puppe
- Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070 Würzburg, Germany
47
ELKarazle K, Raman V, Then P, Chua C. Detection of Colorectal Polyps from Colonoscopy Using Machine Learning: A Survey on Modern Techniques. Sensors (Basel) 2023; 23:1225. [PMID: 36772263 PMCID: PMC9953705 DOI: 10.3390/s23031225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 01/08/2023] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
Given the increased interest in utilizing artificial intelligence as an assistive tool in the medical sector, colorectal polyp detection and classification using deep learning techniques has been an active area of research in recent years. The motivation for researching this topic is that physicians miss polyps from time to time due to fatigue and lack of experience carrying out the procedure. Unidentified polyps can cause further complications and ultimately lead to colorectal cancer (CRC), one of the leading causes of cancer mortality. Although various techniques have been presented recently, several key issues, such as the lack of sufficient training data, white-light reflection, and blur, affect the performance of such methods. This paper presents a survey of recently proposed methods for detecting polyps from colonoscopy. The survey covers benchmark dataset analysis, evaluation metrics, common challenges, standard methods of building polyp detectors, and a review of the latest work in the literature. We conclude this paper by providing a precise analysis of the gaps and trends discovered in the reviewed literature for future work.
Affiliation(s)
- Khaled ELKarazle
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Valliappan Raman
- Department of Artificial Intelligence and Data Science, Coimbatore Institute of Technology, Coimbatore 641014, India
- Patrick Then
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Caslon Chua
- Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne 3122, Australia
48
Young EJ, Rajandran A, Philpott HL, Sathananthan D, Hoile SF, Singh R. Mucosal imaging in colon polyps: New advances and what the future may hold. World J Gastroenterol 2022; 28:6632-6661. [PMID: 36620337 PMCID: PMC9813932 DOI: 10.3748/wjg.v28.i47.6632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/03/2022] [Revised: 10/23/2022] [Accepted: 11/22/2022] [Indexed: 12/19/2022] Open
Abstract
An expanding range of advanced mucosal imaging technologies have been developed with the goal of improving the detection and characterization of lesions in the gastrointestinal tract. Many technologies have targeted colorectal neoplasia given the potential for intervention prior to the development of invasive cancer in the setting of widespread surveillance programs. Improvement in adenoma detection reduces miss rates and prevents interval cancer development. Advanced imaging technologies aim to enhance detection without significantly increasing procedural time. Accurate polyp characterisation guides resection techniques for larger polyps, as well as providing the platform for the "resect and discard" and "do not resect" strategies for small and diminutive polyps. This review aims to collate and summarise the evidence regarding these technologies to guide colonoscopic practice in both interventional and non-interventional endoscopists.
Affiliation(s)
- Edward John Young
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
- Arvind Rajandran
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Hamish Lachlan Philpott
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
- Dharshan Sathananthan
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
- Sophie Fenella Hoile
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
- Rajvinder Singh
- Department of Gastroenterology, Lyell McEwin Hospital, Northern Adelaide Local Health Network, Elizabeth Vale 5031, South Australia, Australia
- Faculty of Health and Medical Sciences, University of Adelaide, Adelaide 5000, South Australia, Australia
49
Shao L, Yan X, Liu C, Guo C, Cai B. Effects of ai-assisted colonoscopy on adenoma miss rate/adenoma detection rate: A protocol for systematic review and meta-analysis. Medicine (Baltimore) 2022; 101:e31945. [PMID: 36401456 PMCID: PMC9678521 DOI: 10.1097/md.0000000000031945] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Accepted: 10/31/2022] [Indexed: 12/02/2022] Open
Abstract
BACKGROUND Colonoscopy can detect colorectal adenomas and reduce the incidence of colorectal cancer, but many lesions are still missed. Artificial intelligence-assisted colonoscopy (AIAC) can effectively reduce the rate of missed diagnoses and improve the detection rate of adenomas, but its clinical application is still unclear. This systematic review and meta-analysis assessed the adenoma miss rate (AMR) and the adenoma detection rate (ADR) with AI-assisted colonoscopy. METHODS A comprehensive literature search was conducted using PubMed, the Medline database, Embase, and the Cochrane Library. This meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A random-effects model was used for the meta-analysis. RESULTS A total of 12 articles were eventually included in the study. Computer-aided detection (CADe) significantly decreased the AMR compared with the control group (137/1039, 13.2% vs 304/1054, 28.8%; OR, 0.39; 95% CI, 0.26-0.59; P < .05). Similarly, there was a statistically significant difference in ADR between the CADe group and the control group (1835/5041, 36.4% vs 1309/4553, 28.7%; OR, 1.54; 95% CI, 1.39-1.71; P < .05). The advanced adenoma miss rate and detection rate in the CADe group were not statistically significantly different from those in the control group. CONCLUSIONS AIAC can effectively reduce the AMR and improve the ADR, especially for small adenomas. Therefore, this method is worthy of clinical application. However, due to the limitations of the number and quality of the included studies, more in-depth studies are needed in the field of AIAC in the future.
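The pooled counts in the abstract allow a rough sanity check of the reported effect sizes. A minimal sketch of the crude odds-ratio calculation; note the crude values from raw pooled counts differ slightly from the reported random-effects estimates (0.39 and 1.54), which weight the individual studies rather than pooling raw counts:

```python
def odds_ratio(events_a: int, total_a: int, events_b: int, total_b: int) -> float:
    """Crude odds ratio of group A vs group B from event counts and totals."""
    return (events_a / (total_a - events_a)) / (events_b / (total_b - events_b))

# Pooled counts from the abstract (CADe vs control).
amr_or = odds_ratio(137, 1039, 304, 1054)    # adenoma miss rate, CADe vs control
adr_or = odds_ratio(1835, 5041, 1309, 4553)  # adenoma detection rate, CADe vs control
print(round(amr_or, 2), round(adr_or, 2))
```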
Affiliation(s)
- Lei Shao
- Department of Gastrointestinal Oncology, Affiliated Hospital of Qinghai University, Xining, Qinghai, China
- Xinzong Yan
- Basic Laboratory of Medical College, Qinghai University, Xining, Qinghai, China
- Chengjiang Liu
- Department of Gastroenterology, Anhui Medical University, Hefei, China
- Can Guo
- Department of Gastrointestinal Oncology, Affiliated Hospital of Qinghai University, Xining, Qinghai, China
- Baojia Cai
- Department of Gastrointestinal Oncology, Affiliated Hospital of Qinghai University, Xining, Qinghai, China
50
Fitting D, Krenzer A, Troya J, Banck M, Sudarevic B, Brand M, Böck W, Zoller WG, Rösch T, Puppe F, Meining A, Hann A. A video based benchmark data set (ENDOTEST) to evaluate computer-aided polyp detection systems. Scand J Gastroenterol 2022; 57:1397-1403. [PMID: 35701020 DOI: 10.1080/00365521.2022.2085059] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
BACKGROUND AND AIMS Computer-aided polyp detection (CADe) may become a standard for polyp detection during colonoscopy. Several systems are already commercially available. We report on a video-based benchmark technique for the first preclinical assessment of such systems before comparative randomized trials are undertaken. Additionally, we compare a commercially available CADe system with our newly developed one. METHODS ENDOTEST consisted of the combination of two datasets. The validation dataset contained 48 video snippets with 22,856 manually annotated images, of which 53.2% contained polyps. The performance dataset contained 10 full-length screening colonoscopies with 230,898 manually annotated images, of which 15.8% contained a polyp. Assessment parameters were accuracy of polyp detection and the time delay to first polyp detection after polyp appearance (FDT). Two CADe systems were assessed: a commercial CADe system (GI-Genius, Medtronic) and a self-developed new system (ENDOMIND), the latter being a convolutional neural network trained on 194,983 manually labeled images extracted from colonoscopy videos recorded mainly in six different gastroenterology practices. RESULTS On the ENDOTEST, both CADe systems detected all polyps in at least one image. The per-frame sensitivity and specificity in full colonoscopies were 48.1% and 93.7%, respectively, for GI-Genius, and 54% and 92.7%, respectively, for ENDOMIND. The median FDT of ENDOMIND, 217 ms (interquartile range [IQR] 8-1533), was significantly faster than that of GI-Genius, 1050 ms (IQR 358-2767; p = 0.003). CONCLUSIONS Our benchmark ENDOTEST may be helpful for preclinical testing of new CADe devices. There seems to be a correlation between a shorter FDT and a higher sensitivity and lower specificity for polyp detection.
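The per-frame sensitivity and specificity quoted above are standard frame-level confusion-matrix rates. A minimal sketch of the definitions; the frame counts below are hypothetical, chosen only to reproduce rates of the same order as the reported ENDOMIND figures (54% / 92.7%), since the abstract does not report raw frame counts:

```python
def per_frame_metrics(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Per-frame sensitivity (recall on polyp frames) and
    specificity (true-negative rate on polyp-free frames)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical frame-level counts for illustration only.
sens, spec = per_frame_metrics(tp=540, fp=73, tn=927, fn=460)
print(round(sens, 3), round(spec, 3))
```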
Affiliation(s)
- Daniel Fitting
- Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, University Hospital Wuerzburg, Würzburg, Germany
- Adrian Krenzer
- Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, University Hospital Wuerzburg, Würzburg, Germany
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians-Universität, Würzburg, Germany
- Joel Troya
- Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, University Hospital Wuerzburg, Würzburg, Germany
- Michael Banck
- Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, University Hospital Wuerzburg, Würzburg, Germany
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians-Universität, Würzburg, Germany
- Boban Sudarevic
- Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, University Hospital Wuerzburg, Würzburg, Germany
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Stuttgart, Germany
- Markus Brand
- Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, University Hospital Wuerzburg, Würzburg, Germany
- Wolfram G Zoller
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Stuttgart, Germany
- Thomas Rösch
- Department of Interdisciplinary Endoscopy, University Hospital Hamburg-Eppendorf, Hamburg, Germany
- Frank Puppe
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians-Universität, Würzburg, Germany
- Alexander Meining
- Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, University Hospital Wuerzburg, Würzburg, Germany
- Alexander Hann
- Interventional and Experimental Endoscopy (InExEn), Internal Medicine II, University Hospital Wuerzburg, Würzburg, Germany