1
Wang Y, Fan L, Pagnucco M, Song Y. Unsupervised domain adaptation with multi-level distillation boost and adaptive mask for medical image segmentation. Comput Biol Med 2025; 190:110055. PMID: 40158461. DOI: 10.1016/j.compbiomed.2025.110055.
Abstract
The mean-teacher (MT) framework has become a widely used approach in unsupervised domain adaptation (UDA). Existing methods primarily align the outputs of the student and teacher networks, guided by the teacher network's multi-layer features. To build on the potential of the MT framework, we propose Multi-Level Distillation Boost (MLDB), which combines Self-Knowledge Distillation and Dual-Directional Knowledge Distillation to align predictions between the intermediate and high-level features of the student and teacher networks. Additionally, given the complex variability in anatomical structures, foregrounds, and backgrounds across medical imaging domains, we introduce Adaptive Masked Image Consistency (AMIC), which augments source- and target-domain images with a customized masking strategy, varying the mask ratio and size to improve the adaptability and efficacy of data augmentation. Experiments on fundus and polyp datasets show that the proposed methods achieve competitive performance: 95.2%/86.1% and 97.3%/89.0% Dice scores for optic disc/cup on REFUGE→RIM and REFUGE→Drishti-GS, and 78.3% and 86.2% Dice for polyps on Kvasir→ETIS and Kvasir→Endo, respectively. The code is available at https://github.com/Yongze/MLDB_AMIC.
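The patch-wise masking that AMIC applies to source- and target-domain images can be illustrated with a minimal sketch. The function name, the uniform patch grid, and the zero-fill are our assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def adaptive_mask(image, mask_ratio=0.5, patch_size=16, rng=None):
    """Zero out a random subset of square patches in an image.

    A minimal stand-in for masked-image-consistency augmentation:
    `mask_ratio` and `patch_size` are the two knobs the paper adapts
    per domain (the adaptation schedule itself is not sketched here).
    Pixels outside the whole-patch grid are left untouched.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    out = image.copy()
    ph, pw = h // patch_size, w // patch_size
    n_mask = int(round(mask_ratio * ph * pw))
    # choose distinct patches to mask
    idx = rng.choice(ph * pw, size=n_mask, replace=False)
    for i in idx:
        r, c = divmod(i, pw)
        out[r * patch_size:(r + 1) * patch_size,
            c * patch_size:(c + 1) * patch_size] = 0
    return out
```

Calling `adaptive_mask(img, mask_ratio=0.25, patch_size=16)` zeroes a quarter of the patches; AMIC's contribution is choosing these two parameters per dataset rather than fixing them globally.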
Affiliation(s)
- Yongze Wang
- School of Computer Science and Engineering, University of New South Wales, Australia.
- Lei Fan
- School of Computer Science and Engineering, University of New South Wales, Australia.
- Maurice Pagnucco
- School of Computer Science and Engineering, University of New South Wales, Australia.
- Yang Song
- School of Computer Science and Engineering, University of New South Wales, Australia.
2
Kim Y, Keum JS, Kim JH, Chun J, Oh SI, Kim KN, Yoon YH, Park H. Real-World Colonoscopy Video Integration to Improve Artificial Intelligence Polyp Detection Performance and Reduce Manual Annotation Labor. Diagnostics (Basel) 2025; 15:901. PMID: 40218251. PMCID: PMC11988911. DOI: 10.3390/diagnostics15070901.
Abstract
Background/Objectives: Artificial intelligence (AI) integration in colon polyp detection often exhibits high sensitivity but notably low specificity in real-world settings, primarily due to reliance on publicly available datasets alone. To address this limitation, we proposed a semi-automatic annotation method using real colonoscopy videos to enhance AI model performance and reduce manual labeling labor. Methods: An integrated AI model was trained and validated on 86,258 training images and 17,616 validation images. Model 1 utilized only publicly available datasets, while Model 2 additionally incorporated images obtained from real colonoscopy videos of patients through a semi-automatic annotation process, significantly reducing the labeling burden on expert endoscopists. Results: The integrated AI model (Model 2) significantly outperformed the public-dataset-only model (Model 1). At epoch 35, Model 2 achieved a sensitivity of 90.6%, a specificity of 96.0%, an overall accuracy of 94.5%, and an F1 score of 89.9%. All polyps in the test videos were successfully detected, demonstrating considerable enhancement in detection performance compared to the public-dataset-only model. Conclusions: Integrating real-world colonoscopy video data using semi-automatic annotation markedly improved diagnostic accuracy while potentially reducing the need for extensive manual annotation typically performed by expert endoscopists. However, the findings need validation through multicenter external datasets to ensure generalizability.
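The reported sensitivity, specificity, accuracy, and F1 score all follow from standard confusion-matrix definitions; a small helper (hypothetical, for illustration only) reproduces the arithmetic:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion counts.

    tp/fp/tn/fn are true/false positives and negatives, e.g. per-frame
    polyp detections against expert annotation.
    """
    sensitivity = tp / (tp + fn)          # recall on positive frames
    specificity = tn / (tn + fp)          # recall on negative frames
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1
```

For example, 90 true positives, 10 false negatives, 96 true negatives and 4 false positives give sensitivity 0.90 and specificity 0.96, in the same range as the figures reported for Model 2.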
Affiliation(s)
- Yuna Kim
- Department of Internal Medicine, Division of Gastroenterology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Republic of Korea; (Y.K.)
- Ji-Soo Keum
- Waycen Inc., Seoul 06167, Republic of Korea; (J.-S.K.)
- Jie-Hyun Kim
- Department of Internal Medicine, Division of Gastroenterology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Republic of Korea; (Y.K.)
- Jaeyoung Chun
- Department of Internal Medicine, Division of Gastroenterology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Republic of Korea; (Y.K.)
- Sang-Il Oh
- Waycen Inc., Seoul 06167, Republic of Korea; (J.-S.K.)
- Kyung-Nam Kim
- Waycen Inc., Seoul 06167, Republic of Korea; (J.-S.K.)
- Young-Hoon Yoon
- Department of Internal Medicine, Division of Gastroenterology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Republic of Korea; (Y.K.)
- Hyojin Park
- Department of Internal Medicine, Division of Gastroenterology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Republic of Korea; (Y.K.)
3
Ortiz O, Sendino O, Rivadulla S, Garrido A, Neira LM, Sanahuja J, Sesé P, Guardiola M, Fernández-Esparrach G. New Concept of Colonoscopy Assisted by a Microwave-Based Accessory Device: First Clinical Experience. Cancers (Basel) 2025; 17:1073. PMID: 40227570. PMCID: PMC11988026. DOI: 10.3390/cancers17071073.
Abstract
Background/Objectives: Colonoscopy has limitations that result in a polyp miss rate. Microwave imaging has been shown to detect colorectal polyps based on their dielectric properties in synthetic phantoms, ex vivo tissues and in vivo animal models. This study aims to evaluate, for the first time, the feasibility, safety and performance of microwave-based colonoscopy for the diagnosis of polyps in real-time explorations in humans. Methods: This was a single-center, prospective, observational study. Patients referred for diagnostic colonoscopy were explored with a device with microwave antennas attached to the tip of a standard colonoscope. The primary outcomes were cecal intubation rate, adverse events, mural injuries and performance metrics for the detection of polyps. Secondary outcomes were patients' subjective feedback, procedural time and perceived difficulty according to the endoscopist. Results: Fifteen patients were enrolled. The cecal intubation rate was 100%, with a mean time of 12.7 ± 4.9 min (range 4-22). Use of the device did not affect the endoscopic image, and polypectomy was successfully performed in all cases. On a scale from zero (not difficult) to four (very difficult), maneuverability during insertion was rated ≤2 in 86.7% (13/15) of colonoscopies. Sixteen incidents were reported in 14 patients: 11 (67%) superficial hematomas, 2 minor rectal bleedings, 1 anal fissure, 1 rhinorrhea and 1 headache. Most patients (94%) reported no or minimal discomfort before discharge (Gloucester score 1 and 2, respectively). In the six patients with 23 polyps used for the performance analysis, sensitivity and specificity were 86.9% and 72.0%, respectively. Conclusions: Microwave-based colonoscopy is safe and feasible and has the potential to detect polyps in real colonoscopies.
Affiliation(s)
- Oswaldo Ortiz
- Endoscopy Unit, Hospital Clínic, University of Barcelona (UB), 08036 Barcelona, Spain; (O.O.); (O.S.); (S.R.); (P.S.)
- Instituto de Investigaciones Biomédicas August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain
- Oriol Sendino
- Endoscopy Unit, Hospital Clínic, University of Barcelona (UB), 08036 Barcelona, Spain; (O.O.); (O.S.); (S.R.); (P.S.)
- Instituto de Investigaciones Biomédicas August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain
- Silvia Rivadulla
- Endoscopy Unit, Hospital Clínic, University of Barcelona (UB), 08036 Barcelona, Spain; (O.O.); (O.S.); (S.R.); (P.S.)
- Luz María Neira
- MiWEndo Solutions, 08021 Barcelona, Spain; (A.G.); (L.M.N.); (M.G.)
- Josep Sanahuja
- Anesthesiology Department, Hospital Clínic, University of Barcelona (UB), 08036 Barcelona, Spain
- Pilar Sesé
- Endoscopy Unit, Hospital Clínic, University of Barcelona (UB), 08036 Barcelona, Spain; (O.O.); (O.S.); (S.R.); (P.S.)
- Marta Guardiola
- MiWEndo Solutions, 08021 Barcelona, Spain; (A.G.); (L.M.N.); (M.G.)
- Glòria Fernández-Esparrach
- Endoscopy Unit, Hospital Clínic, University of Barcelona (UB), 08036 Barcelona, Spain; (O.O.); (O.S.); (S.R.); (P.S.)
- Instituto de Investigaciones Biomédicas August Pi i Sunyer (IDIBAPS), 08036 Barcelona, Spain
- MiWEndo Solutions, 08021 Barcelona, Spain; (A.G.); (L.M.N.); (M.G.)
- Facultat de Medicina i Ciències de la Salut, University of Barcelona (UB), 08036 Barcelona, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Hepáticas y Digestivas (CIBEREHD), 08036 Barcelona, Spain
4
Wang KN, Wang H, Zhou GQ, Wang Y, Yang L, Chen Y, Li S. TSdetector: Temporal-Spatial self-correction collaborative learning for colonoscopy video detection. Med Image Anal 2025; 100:103384. PMID: 39579624. DOI: 10.1016/j.media.2024.103384.
Abstract
CNN-based object detection models that balance performance and speed have gradually been adopted for polyp detection tasks. Nevertheless, accurately locating polyps in complex colonoscopy video scenes remains challenging because existing methods ignore two key issues: intra-sequence distribution heterogeneity and the precision-confidence discrepancy. To address these challenges, we propose a novel Temporal-Spatial self-correction detector (TSdetector), which integrates temporal-level consistency learning and spatial-level reliability learning to detect objects continuously. Technically, we first propose a global temporal-aware convolution that assembles preceding information to dynamically guide the current convolution kernel toward global features across sequences. In addition, we design a hierarchical queue integration mechanism that combines multi-temporal features through progressive accumulation, fully leveraging contextual consistency information while retaining long-sequence-dependency features. Meanwhile, at the spatial level, we propose position-aware clustering to explore the spatial relationships among candidate boxes and recalibrate prediction confidence adaptively, thus efficiently eliminating redundant bounding boxes. Experimental results on three publicly available polyp video datasets show that TSdetector achieves the highest polyp detection rate and outperforms other state-of-the-art methods. The code is available at https://github.com/soleilssss/TSdetector.
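The spatial-level idea, recalibrating a box's confidence from the overlapping candidates around it instead of greedily suppressing them, can be sketched with a simple greedy IoU clustering. This stand-in is our simplification for illustration, not the paper's position-aware clustering module:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def cluster_recalibrate(boxes, scores, iou_thr=0.5):
    """Group candidate boxes by spatial overlap and recalibrate each
    cluster's confidence as the mean of its members' scores.

    Returns one (representative_box, recalibrated_score) per cluster;
    the representative is the highest-scoring member.
    """
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    clusters = []  # list of (representative box, member scores)
    for i in order:
        for rep, member_scores in clusters:
            if iou(boxes[i], rep) >= iou_thr:
                member_scores.append(scores[i])
                break
        else:  # no overlapping cluster found: start a new one
            clusters.append((boxes[i], [scores[i]]))
    return [(rep, sum(s) / len(s)) for rep, s in clusters]
```

Two heavily overlapping detections with scores 0.9 and 0.5 collapse into one box with confidence 0.7, so an isolated spuriously confident box is pulled down by its disagreeing neighbours.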
Affiliation(s)
- Kai-Ni Wang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Haolin Wang
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Guang-Quan Zhou
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Ling Yang
- Institute of Medical Technology, Peking University Health Science Center, China
- Yang Chen
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration, Southeast University, Nanjing, China
- Shuo Li
- Department of Computer and Data Science and Department of Biomedical Engineering, Case Western Reserve University, USA
5
Faust O, Salvi M, Barua PD, Chakraborty S, Molinari F, Acharya UR. Issues and Limitations on the Road to Fair and Inclusive AI Solutions for Biomedical Challenges. Sensors (Basel) 2025; 25:205. PMID: 39796996. PMCID: PMC11723364. DOI: 10.3390/s25010205.
Abstract
OBJECTIVE In this paper, we explore the relationship between performance reporting and the development of inclusive AI solutions for biomedical problems. Our study examines the critical aspects of bias and noise in the context of medical decision support, aiming to provide actionable solutions. CONTRIBUTIONS A key contribution of our work is the recognition that measurement processes introduce noise and bias arising from human data interpretation and selection. We introduce the concept of the "noise-bias cascade" to explain their interconnected nature. While current AI models handle noise well, bias remains a significant obstacle to achieving practical performance. Our analysis spans the entire AI development lifecycle, from data collection to model deployment. RECOMMENDATIONS To effectively mitigate bias, we assert the need for additional measures such as rigorous study design, appropriate statistical analysis, transparent reporting, and diverse research representation. Furthermore, we strongly recommend integrating uncertainty measures during model deployment to ensure fairness and inclusivity. These recommendations aim to minimize both bias and noise, thereby improving the performance of future medical decision support systems.
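One concrete way to act on the deployment recommendation is to attach an uncertainty score to every prediction and route uncertain cases to human review. The entropy measure below is standard; the gating function and its threshold are illustrative assumptions, not the paper's prescription:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predictive class distribution.

    Low entropy means a confident prediction; high entropy flags inputs
    the model is unsure about, e.g. under-represented patient groups.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

def needs_review(probs, threshold=0.5):
    """Route a prediction to human review when its entropy exceeds a
    (hypothetical) threshold chosen on a validation set."""
    return predictive_entropy(probs) > threshold
```

A 50/50 two-class prediction has entropy ln 2 ≈ 0.693 and would be escalated, while a 99/1 prediction (entropy ≈ 0.056) would pass through automatically.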
Affiliation(s)
- Oliver Faust
- School of Computing and Information Science, Anglia Ruskin University, Cambridge Campus, Cambridge CB1 1PT, UK
- Massimo Salvi
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Turin, Italy; (M.S.); (F.M.)
- Prabal Datta Barua
- Cogninet Australia, Sydney, NSW 2010, Australia
- School of Business (Information Systems), University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Australian International Institute of Higher Education, Sydney, NSW 2000, Australia
- School of Science and Technology, University of New England, Armidale, NSW 2351, Australia
- School of Biosciences, Taylor’s University, Subang Jaya 47500, Malaysia
- School of Computing, SRM Institute of Science and Technology, Kattankulathur 603203, India
- School of Science and Technology, Kumamoto University, Kumamoto 860-8555, Japan
- Sydney School of Education and Social Work, University of Sydney, Camperdown, NSW 2050, Australia
- Subrata Chakraborty
- School of Science and Technology, University of New England, Armidale, NSW 2351, Australia
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Griffith Business School, Griffith University, Brisbane, QLD 4111, Australia
- Filippo Molinari
- PoliToBIOMed Lab, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Turin, Italy; (M.S.); (F.M.)
- U. Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, QLD 4300, Australia
- Centre for Health Research, University of Southern Queensland, Ipswich, QLD 4305, Australia
6
Jha D, Sharma V, Banik D, Bhattacharya D, Roy K, Hicks SA, Tomar NK, Thambawita V, Krenzer A, Ji GP, Poudel S, Batchkala G, Alam S, Ahmed AMA, Trinh QH, Khan Z, Nguyen TP, Shrestha S, Nathan S, Gwak J, Jha RK, Zhang Z, Schlaefer A, Bhattacharjee D, Bhuyan MK, Das PK, Fan DP, Parasa S, Ali S, Riegler MA, Halvorsen P, de Lange T, Bagci U. Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges. Med Image Anal 2025; 99:103307. PMID: 39303447. DOI: 10.1016/j.media.2024.103307.
Abstract
Automatic analysis of colonoscopy images has been an active field of research, motivated by the importance of early detection of precancerous polyps. However, detecting polyps during live examination can be challenging due to factors such as variation in skill and experience among endoscopists, lack of attentiveness, and fatigue, leading to a high polyp miss rate. Therefore, there is a need for an automated system that can flag missed polyps during the examination and improve patient care. Deep learning has emerged as a promising solution to this challenge, as it can assist endoscopists in detecting and classifying overlooked polyps and abnormalities in real time, improving the accuracy of diagnosis and enhancing treatment. In addition to an algorithm's accuracy, transparency and interpretability are crucial for explaining the whys and hows of its predictions. Further, conclusions based on incorrect decisions may be fatal, especially in medicine. Despite these pitfalls, most algorithms are developed on private data, closed-source or proprietary software, and the methods lack reproducibility. Therefore, to promote the development of efficient and transparent methods, we organized the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image Segmentation (MedAI 2021)" competitions. The Medico 2020 challenge received submissions from 17 teams, and the MedAI 2021 challenge gathered submissions from another 17 distinct teams the following year. We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the possibility of translating such methods into the clinic.
Our analysis revealed that participants improved the Dice coefficient from 0.8607 in 2020 to 0.8993 in 2021, despite the addition of diverse and challenging frames (containing irregular, smaller, sessile, or flat polyps) that are frequently missed during routine clinical examination. For the instrument segmentation task, the best team obtained a mean Intersection over Union (IoU) of 0.9364. For the transparency task, a multi-disciplinary team including expert gastroenterologists assessed each submission and evaluated the teams on open-source practices, failure-case analysis, ablation studies, and the usability and understandability of the evaluations, to gain a deeper understanding of the models' credibility for clinical deployment. The best team obtained a final transparency score of 21 out of 25. Through this comprehensive analysis of the challenges, we not only highlight the advancements in polyp and surgical instrument segmentation but also encourage subjective evaluation toward building more transparent and understandable AI-based colonoscopy systems. Moreover, we discuss the need for multi-center and out-of-distribution testing to address the current limitations of the methods, reduce the cancer burden and improve patient care.
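The two headline metrics of these challenges, the Dice coefficient and mean Intersection over Union, can be computed from binary masks as follows (a reference sketch, not the official challenge evaluation code; the empty-mask convention is our assumption):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity between two binary segmentation masks:
    2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

def mean_iou(pred, target):
    """Intersection over Union between two binary masks: |A∩B| / |A∪B|."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, target).sum() / union
```

Note that Dice ≥ IoU for any mask pair, which is why the instrument task's IoU of 0.9364 and the polyp task's Dice of 0.8993 are not directly comparable.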
Affiliation(s)
- Debesh Jha
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA.
- Debayan Bhattacharya
- Institute of Medical Technology and Intelligent Systems, Technische Universität Hamburg, Germany
- Nikhil Kumar Tomar
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Ge-Peng Ji
- College of Engineering, Australian National University, Canberra, Australia
- Sahadev Poudel
- Department of IT Convergence Engineering, Gachon University, Seongnam 13120, South Korea
- George Batchkala
- Department of Engineering Science, University of Oxford, Oxford, UK
- Quoc-Huy Trinh
- Faculty of Information Technology, University of Science, VNU-HCM, Viet Nam
- Zeshan Khan
- National University of Computer and Emerging Sciences, Karachi Campus, Pakistan
- Tien-Phat Nguyen
- Faculty of Information Technology, University of Science, VNU-HCM, Viet Nam
- Shruti Shrestha
- NepAL Applied Mathematics and Informatics Institute for Research (NAAMII), Kathmandu, Nepal
- Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju-si, South Korea
- Ritika K Jha
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Zheyuan Zhang
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
- Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Technische Universität Hamburg, Germany
- M K Bhuyan
- Indian Institute of Technology, Guwahati, India
- Deng-Ping Fan
- Computer Vision Lab (CVL), ETH Zurich, Zurich, Switzerland
- Sharib Ali
- School of Computing, University of Leeds, LS2 9JT, Leeds, United Kingdom
- Michael A Riegler
- SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Pål Halvorsen
- SimulaMet, Oslo, Norway; Oslo Metropolitan University, Oslo, Norway
- Thomas de Lange
- Department of Medicine and Emergencies - Mölndal, Sahlgrenska University Hospital, Region Västra Götaland, Sweden; Department of Molecular and Clinical Medicine, Sahlgrenska Academy, University of Gothenburg, Sweden
- Ulas Bagci
- Machine & Hybrid Intelligence Lab, Department of Radiology, Northwestern University, Chicago, USA
7
Krikid F, Rositi H, Vacavant A. State-of-the-Art Deep Learning Methods for Microscopic Image Segmentation: Applications to Cells, Nuclei, and Tissues. J Imaging 2024; 10:311. PMID: 39728208. DOI: 10.3390/jimaging10120311.
Abstract
Microscopic image segmentation (MIS) is a fundamental task in medical imaging and biological research, essential for precise analysis of cellular structures and tissues. Despite its importance, the segmentation process encounters significant challenges, including variability in imaging conditions, complex biological structures, and artefacts (e.g., noise), which can compromise the accuracy of traditional methods. The emergence of deep learning (DL) has catalyzed substantial advancements in addressing these issues. This systematic literature review (SLR) provides a comprehensive overview of state-of-the-art DL methods developed over the past six years for the segmentation of microscopic images. We critically analyze key contributions, emphasizing how these methods specifically tackle challenges in cell, nucleus, and tissue segmentation. Additionally, we evaluate the datasets and performance metrics employed in these studies. By synthesizing current advancements and identifying gaps in existing approaches, this review not only highlights the transformative potential of DL in enhancing diagnostic accuracy and research efficiency but also suggests directions for future research. The findings of this study have significant implications for improving methodologies in medical and biological applications, ultimately fostering better patient outcomes and advancing scientific understanding.
Affiliation(s)
- Fatma Krikid
- Institut Pascal, CNRS, Clermont Auvergne INP, Université Clermont Auvergne, F-63000 Clermont-Ferrand, France
- Hugo Rositi
- LORIA, CNRS, Université de Lorraine, F-54000 Nancy, France
- Antoine Vacavant
- Institut Pascal, CNRS, Clermont Auvergne INP, Université Clermont Auvergne, F-63000 Clermont-Ferrand, France
8
Sinonquel P, Eelbode T, Pech O, De Wulf D, Dewint P, Neumann H, Antonelli G, Iacopini F, Tate D, Lemmers A, Pilonis ND, Kaminski MF, Roelandt P, Hassan C, Ingrid D, Maes F, Bisschops R. Clinical consequences of computer-aided colorectal polyp detection. Gut 2024; 73:1974-1983. PMID: 38876773. DOI: 10.1136/gutjnl-2024-331943.
Abstract
BACKGROUND AND AIM Randomised trials show improved polyp detection with computer-aided detection (CADe), mostly of small lesions. However, operator and selection bias may affect CADe's true benefit, and the clinical outcomes of increased detection have not yet been fully elucidated. METHODS In this multicentre trial, CADe combining convolutional and recurrent neural networks was used for polyp detection. Blinded endoscopists were monitored in real time by a second observer with CADe access; CADe detections prompted reinspection. Adenoma detection rates (ADR) and polyp detection rates were measured prestudy and poststudy. Histological assessments were done by independent histopathologists. The primary outcome compared polyp detection between endoscopists and CADe. RESULTS In 946 patients (51.9% male, mean age 64), a total of 2141 polyps were identified, including 989 adenomas. CADe was not superior to human polyp detection (sensitivity 94.6% vs 96.0%) but outperformed humans when restricted to adenomas. Unblinding led to an additional yield of 86 true-positive polyp detections (a 1.1% ADR increase per patient; 73.8% were <5 mm). CADe also increased non-neoplastic polyp detection by an absolute 4.9% of cases (a 1.8% increase in the entire polyp load). Procedure time increased by 6.6±6.5 min (+42.6%). In 22/946 patients (2.3%), the additional detection of adenomas changed surveillance intervals, mostly by increasing the number of small adenomas beyond the cut-off. CONCLUSION Even if CADe appears slightly more sensitive than human endoscopists, the additional gain in ADR was minimal and follow-up intervals rarely changed. Inspection of non-neoplastic lesions increased, adding to the inspection and/or polypectomy workload.
Affiliation(s)
- Pieter Sinonquel
- Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
- Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Tom Eelbode
- Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Oliver Pech
- Gastroenterology and Hepatology, Krankenhaus Barmherzige Brüder Regensburg, Regensburg, Germany
- Dominiek De Wulf
- Gastroenterology and Hepatology, AZ Delta vzw, Roeselare, Belgium
- Pieter Dewint
- Gastroenterology and Hepatology, AZ Maria Middelares vzw, Gent, Belgium
- Helmut Neumann
- Gastroenterology and Hepatology, Gastrozentrum Lippe, Bad Salzuflen, Germany
- Giulio Antonelli
- Gastroenterology and Digestive Endoscopy Unit, Ospedale Nuovo Regina Margherita, Roma, Italy
- Federico Iacopini
- Gastroenterology and Digestive Endoscopy, Ospedale dei Castelli, Ariccia, Italy
- David Tate
- Gastroenterology and Hepatology, UZ Gent, Gent, Belgium
- Arnaud Lemmers
- Gastroenterology and Hepatology, ULB Erasme, Bruxelles, Belgium
- Michal Filip Kaminski
- Department of Gastroenterology, Hepatology and Oncology, Medical Centre for Postgraduate Education, Warsaw, Poland
- Department of Gastroenterological Oncology, The Maria Sklodowska-Curie Memorial Cancer Centre, Institute of Oncology, Warsaw, Poland
- Philip Roelandt
- Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
- Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Cesare Hassan
- Department of Biomedical Sciences, Humanitas University, Milan, Italy
- IRCCS Humanitas Research Hospital, Milan, Italy
- Ingrid Demedts
- Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
- Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
- Frederik Maes
- Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Raf Bisschops
- Gastroenterology and Hepatology, UZ Leuven, Leuven, Belgium
- Translational Research in Gastrointestinal Diseases (TARGID), KU Leuven Biomedical Sciences Group, Leuven, Belgium
9
Tan J, Yuan J, Fu X, Bai Y. Colonoscopy polyp classification via enhanced scattering wavelet Convolutional Neural Network. PLoS One 2024; 19:e0302800. PMID: 39392783. PMCID: PMC11469526. DOI: 10.1371/journal.pone.0302800.
Abstract
Among the most common cancers, colorectal cancer (CRC) has a high death rate. The best way to screen for CRC is colonoscopy, which has been shown to lower the risk of the disease; consequently, computer-aided polyp classification techniques are applied to help identify CRC. However, visually categorizing polyps is difficult because they are imaged under varying lighting conditions. In contrast to previous work, this article presents the Enhanced Scattering Wavelet Convolutional Neural Network (ESWCNN), a polyp classification technique that combines a Convolutional Neural Network (CNN) with the Scattering Wavelet Transform (SWT) to improve classification performance. The method concatenates simultaneously learnable image filters and wavelet filters on each input channel. The scattering wavelet filters extract common spectral features at various scales and orientations, while the learnable filters capture spatial image features that wavelet filters may miss. A network architecture for ESWCNN was designed on these principles and trained and tested on colonoscopy datasets (two public datasets and one private dataset). An n-fold cross-validation experiment for three classes (adenoma, hyperplastic, serrated) achieved a classification accuracy of 96.4%, with 94.8% accuracy in two-class polyp classification (positive and negative). In the three-class setting, correct classification rates of 96.2% for adenomas, 98.71% for hyperplastic polyps, and 97.9% for serrated polyps were achieved. In the two-class experiment, the proposed method reached an average sensitivity of 96.7% with 93.1% specificity. Furthermore, we compare the performance of our model with state-of-the-art general classification models and commonly used CNNs; six end-to-end CNN-based models were trained on two datasets of video sequences. The experimental results demonstrate that the proposed ESWCNN method classifies polyps with higher accuracy and efficacy than state-of-the-art CNN models. These findings can guide future research in polyp classification.
Affiliation(s)
- Jun Tan
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Guangdong Province Key Laboratory of Computational Science, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Jiamin Yuan
- Health Construction Administration Center, Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, Guangdong, China
- The Second Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine (TCM), Guangzhou, Guangdong, China
- Xiaoyong Fu
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- Yilin Bai
- School of Mathematics, Sun Yat-Sen University, Guangzhou, Guangdong, China
- China Southern Airlines, Guangzhou, Guangdong, China
10
Xu W, Xu R, Wang C, Li X, Xu S, Guo L. PSTNet: Enhanced Polyp Segmentation With Multi-Scale Alignment and Frequency Domain Integration. IEEE J Biomed Health Inform 2024; 28:6042-6053. [PMID: 38954569 DOI: 10.1109/jbhi.2024.3421550]
Abstract
Accurate segmentation of colorectal polyps in colonoscopy images is crucial for effective diagnosis and management of colorectal cancer (CRC). However, current deep learning-based methods primarily rely on fusing RGB information across multiple scales, leading to limitations in accurately identifying polyps due to restricted RGB domain information and challenges in feature misalignment during multi-scale aggregation. To address these limitations, we propose the Polyp Segmentation Network with Shunted Transformer (PSTNet), a novel approach that integrates both RGB and frequency domain cues present in the images. PSTNet comprises three key modules: the Frequency Characterization Attention Module (FCAM) for extracting frequency cues and capturing polyp characteristics, the Feature Supplementary Alignment Module (FSAM) for aligning semantic information and reducing misalignment noise, and the Cross Perception localization Module (CPM) for synergizing frequency cues with high-level semantics to achieve efficient polyp segmentation. Extensive experiments on challenging datasets demonstrate PSTNet's significant improvement in polyp segmentation accuracy across various metrics, consistently outperforming state-of-the-art methods. The integration of frequency domain cues and the novel architectural design of PSTNet contribute to advancing computer-assisted polyp segmentation, facilitating more accurate diagnosis and management of CRC.
11
Wang R, Zheng G. PFMNet: Prototype-based feature mapping network for few-shot domain adaptation in medical image segmentation. Comput Med Imaging Graph 2024; 116:102406. [PMID: 38824715 DOI: 10.1016/j.compmedimag.2024.102406]
Abstract
Lack of data is one of the biggest hurdles for rare disease research using deep learning. Due to the lack of rare-disease images and annotations, training a robust network for automatic rare-disease image segmentation is very challenging. To address this challenge, few-shot domain adaptation (FSDA) has emerged as a practical research direction, aiming to leverage a limited number of annotated images from a target domain to facilitate adaptation of models trained on other large datasets in a source domain. In this paper, we present a novel prototype-based feature mapping network (PFMNet) designed for FSDA in medical image segmentation. PFMNet adopts an encoder-decoder structure for segmentation, with the prototype-based feature mapping (PFM) module positioned at the bottom of the encoder-decoder structure. The PFM module transforms high-level features from the target domain into source domain-like features that the decoder can more easily interpret. By leveraging these source domain-like features, the decoder can effectively perform few-shot segmentation in the target domain and generate accurate segmentation masks. We evaluate the performance of PFMNet through experiments on three typical yet challenging few-shot medical image segmentation tasks: cross-center optic disc/cup segmentation, cross-center polyp segmentation, and cross-modality cardiac structure segmentation. We consider four different settings: 5-shot, 10-shot, 15-shot, and 20-shot. The experimental results substantiate the efficacy of our proposed approach for few-shot domain adaptation in medical image segmentation.
Affiliation(s)
- Runze Wang
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
- Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
12
Jiang Y, Zhang Z, Hu Y, Li G, Wan X, Wu S, Cui S, Huang S, Li Z. ECC-PolypDet: Enhanced CenterNet With Contrastive Learning for Automatic Polyp Detection. IEEE J Biomed Health Inform 2024; 28:4785-4796. [PMID: 37983159 DOI: 10.1109/jbhi.2023.3334240]
Abstract
Accurate polyp detection is critical for early colorectal cancer diagnosis. Although remarkable progress has been achieved in recent years, the complex colon environment and concealed polyps with unclear boundaries still pose severe challenges in this area. Existing methods either involve computationally expensive context aggregation or lack prior modeling of polyps, resulting in poor performance in challenging cases. In this paper, we propose the Enhanced CenterNet with Contrastive Learning (ECC-PolypDet), a two-stage training & end-to-end inference framework that leverages images and bounding box annotations to train a general model and fine-tune it based on the inference score to obtain a final robust model. Specifically, we conduct Box-assisted Contrastive Learning (BCL) during training to minimize the intra-class difference and maximize the inter-class difference between foreground polyps and backgrounds, enabling our model to capture concealed polyps. Moreover, to enhance the recognition of small polyps, we design the Semantic Flow-guided Feature Pyramid Network (SFFPN) to aggregate multi-scale features and the Heatmap Propagation (HP) module to boost the model's attention on polyp targets. In the fine-tuning stage, we introduce the IoU-guided Sample Re-weighting (ISR) mechanism to prioritize hard samples by adaptively adjusting the loss weight for each sample during fine-tuning. Extensive experiments on six large-scale colonoscopy datasets demonstrate the superiority of our model compared with previous state-of-the-art detectors.
13
Lin Q, Tan W, Cai S, Yan B, Li J, Zhong Y. Lesion-Decoupling-Based Segmentation With Large-Scale Colon and Esophageal Datasets for Early Cancer Diagnosis. IEEE Trans Neural Netw Learn Syst 2024; 35:11142-11156. [PMID: 37028330 DOI: 10.1109/tnnls.2023.3248804]
Abstract
Lesions of early cancers often appear flat, small, and isochromatic in medical endoscopy images, making them difficult to capture. By analyzing the differences between the internal and external features of the lesion area, we propose a lesion-decoupling-based segmentation (LDS) network for assisting early cancer diagnosis. We introduce a plug-and-play module called the self-sampling similar feature disentangling module (FDM) to obtain accurate lesion boundaries. We then propose a feature separation loss (FSL) function to separate pathological features from normal ones. Moreover, since physicians make diagnoses from multimodal data, we propose a multimodal cooperative segmentation network that takes two different modal images as input: white-light images (WLIs) and narrowband images (NBIs). Our FDM and FSL perform well in both single-modal and multimodal segmentation. Extensive experiments on five backbones show that FDM and FSL can easily be applied to different backbones for significant improvements in lesion segmentation accuracy, with a maximum increase in mean Intersection over Union (mIoU) of 4.58. For colonoscopy, we achieve an mIoU of up to 91.49 on our Dataset A and 84.41 on the three public datasets. For esophagoscopy, the best mIoU is 64.32 on the WLI dataset and 66.31 on the NBI dataset.
14
Huang X, Wang L, Jiang S, Xu L. DHAFormer: Dual-channel hybrid attention network with transformer for polyp segmentation. PLoS One 2024; 19:e0306596. [PMID: 38985710 PMCID: PMC11236112 DOI: 10.1371/journal.pone.0306596]
Abstract
The accurate early diagnosis of colorectal cancer significantly relies on the precise segmentation of polyps in medical images. Current convolution-based and transformer-based segmentation methods show promise but still struggle with the varied sizes and shapes of polyps and the often low contrast between polyps and their background. This research introduces an innovative approach to confronting the aforementioned challenges by proposing a Dual-Channel Hybrid Attention Network with Transformer (DHAFormer). Our proposed framework features a multi-scale channel fusion module, which excels at recognizing polyps across a spectrum of sizes and shapes. Additionally, the framework's dual-channel hybrid attention mechanism is innovatively conceived to reduce background interference and improve the foreground representation of polyp features by integrating local and global information. The DHAFormer demonstrates significant improvements in the task of polyp segmentation compared to currently established methodologies.
Affiliation(s)
- Xuejie Huang
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- Liejun Wang
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- Shaochen Jiang
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- Lianghui Xu
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
15
Wan JJ, Zhu PC, Chen BL, Yu YT. A semantic feature enhanced YOLOv5-based network for polyp detection from colonoscopy images. Sci Rep 2024; 14:15478. [PMID: 38969765 PMCID: PMC11226707 DOI: 10.1038/s41598-024-66642-5]
Abstract
Colorectal cancer (CRC) is a common digestive system tumor with high morbidity and mortality worldwide. At present, computer-assisted colonoscopy technology for detecting polyps is relatively mature, but it still faces challenges such as missed or false detection of polyps; improving detection accuracy is therefore key to effective colonoscopy. To solve this problem, this paper proposes an improved YOLOv5-based polyp detection method for colorectal cancer. A new structure called P-C3 is incorporated into the backbone and neck of the model to enhance the expression of features. In addition, a contextual feature augmentation module is introduced at the bottom of the backbone network to enlarge the receptive field for multi-scale feature information and to focus on polyp features through a coordinate attention mechanism. The experimental results show that, compared with some traditional target detection algorithms, the proposed model offers significant advantages in polyp detection accuracy, especially in recall, largely alleviating the problem of missed polyps. This study should help improve endoscopists' polyp/adenoma detection rate during colonoscopy and is also of practical significance for clinical work.
Affiliation(s)
- Jing-Jing Wan
- Department of Gastroenterology, The Second People's Hospital of Huai'an, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huaian, 223023, Jiangsu, China
- Peng-Cheng Zhu
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Bo-Lun Chen
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Yong-Tao Yu
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
16
Li Z, Yi M, Uneri A, Niu S, Jones C. RTA-Former: Reverse Transformer Attention for Polyp Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-5. [PMID: 40031481 DOI: 10.1109/embc53108.2024.10782181]
Abstract
Polyp segmentation is a key aspect of colorectal cancer prevention, enabling early detection and guiding subsequent treatments. Intelligent diagnostic tools, including deep learning solutions, are widely explored to streamline and potentially automate this process. However, even with many powerful network architectures, producing accurate edge segmentation remains a problem. In this paper, we introduce a novel network, RTA-Former, that employs a transformer model as the encoder backbone and innovatively adapts Reverse Attention (RA) with a transformer stage in the decoder for enhanced edge segmentation. Experimental results show that RTA-Former achieves state-of-the-art (SOTA) performance on five polyp segmentation datasets. The strong capability of RTA-Former holds promise for improving the accuracy of transformer-based polyp segmentation, potentially leading to better clinical decisions and patient outcomes. Our code is publicly available on GitHub.
17
Biffi C, Antonelli G, Bernhofer S, Hassan C, Hirata D, Iwatate M, Maieron A, Salvagnini P, Cherubini A. REAL-Colon: A dataset for developing real-world AI applications in colonoscopy. Sci Data 2024; 11:539. [PMID: 38796533 PMCID: PMC11127922 DOI: 10.1038/s41597-024-03359-0]
Abstract
Detection and diagnosis of colon polyps are key to preventing colorectal cancer. Recent evidence suggests that AI-based computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems can enhance endoscopists' performance and boost colonoscopy effectiveness. However, most available public datasets primarily consist of still images or video clips, often at a down-sampled resolution, and do not accurately represent real-world colonoscopy procedures. We introduce the REAL-Colon (Real-world multi-center Endoscopy Annotated video Library) dataset: a compilation of 2.7 M native video frames from sixty full-resolution, real-world colonoscopy recordings across multiple centers. The dataset contains 350k bounding-box annotations, each created under the supervision of expert gastroenterologists. Comprehensive patient clinical data, colonoscopy acquisition information, and polyp histopathological information are also included in each video. With its unprecedented size, quality, and heterogeneity, the REAL-Colon dataset is a unique resource for researchers and developers aiming to advance AI research in colonoscopy. Its openness and transparency facilitate rigorous and reproducible research, fostering the development and benchmarking of more accurate and reliable colonoscopy-related algorithms and models.
Affiliation(s)
- Carlo Biffi
- Cosmo Intelligent Medical Devices, Dublin, Ireland
- Giulio Antonelli
- Gastroenterology and Digestive Endoscopy Unit, Ospedale dei Castelli (N.O.C.), Rome, Italy
- Sebastian Bernhofer
- Karl Landsteiner University of Health Sciences, Krems, Austria
- Department of Internal Medicine 2, University Hospital St. Pölten, St. Pölten, Austria
- Cesare Hassan
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, Italy
- Endoscopy Unit, Humanitas Clinical and Research Center IRCCS, Rozzano, Italy
- Daizen Hirata
- Gastrointestinal Center, Sano Hospital, Hyogo, Japan
- Mineo Iwatate
- Gastrointestinal Center, Sano Hospital, Hyogo, Japan
- Andreas Maieron
- Karl Landsteiner University of Health Sciences, Krems, Austria
- Department of Internal Medicine 2, University Hospital St. Pölten, St. Pölten, Austria
- Andrea Cherubini
- Cosmo Intelligent Medical Devices, Dublin, Ireland
- Milan Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
18
Hsu CM, Chen TH, Hsu CC, Wu CH, Lin CJ, Le PH, Lin CY, Kuo T. Two-stage deep-learning-based colonoscopy polyp detection incorporating fisheye and reflection correction. J Gastroenterol Hepatol 2024; 39:733-739. [PMID: 38225761 DOI: 10.1111/jgh.16470]
Abstract
BACKGROUND AND AIM Colonoscopy is a useful method for the diagnosis and management of colorectal diseases. Many computer-aided systems have been developed to assist clinicians in detecting colorectal lesions by analyzing colonoscopy images. However, fisheye-lens distortion and light reflection in colonoscopy images can substantially affect the clarity of these images and their utility in detecting polyps. This study proposed a two-stage deep-learning model to correct distortion and reflections in colonoscopy images and thus facilitate polyp detection. METHODS Images were collected from the PolypSet dataset, the Kvasir-SEG dataset, and one medical center's patient archiving and communication system. The training, validation, and testing datasets comprised 808, 202, and 1100 images, respectively. The first stage involved the correction of fisheye-related distortion in colonoscopy images and polyp detection, which was performed using a convolutional neural network. The second stage involved the use of generative and adversarial networks for correcting reflective colonoscopy images before the convolutional neural network was used for polyp detection. RESULTS The model had higher accuracy when it was validated using corrected images than when it was validated using uncorrected images (96.8% vs 90.8%, P < 0.001). The model's accuracy in detecting polyps in the Kvasir-SEG dataset reached 96%, and the area under the receiver operating characteristic curve was 0.94. CONCLUSION The proposed model can facilitate the clinical diagnosis of colorectal polyps and improve the quality of colonoscopy.
Affiliation(s)
- Chen-Ming Hsu
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital Taoyuan Branch, Taoyuan, Taiwan
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital Linkou Main Branch, Taoyuan, Taiwan
- Chang Gung University College of Medicine, Taoyuan, Taiwan
- Tsung-Hsing Chen
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital Linkou Main Branch, Taoyuan, Taiwan
- Chang Gung University College of Medicine, Taoyuan, Taiwan
- Chien-Chang Hsu
- Department of Computer Science and Information Engineering, Fu Jen Catholic University, Taipei, Taiwan
- Che-Hao Wu
- Department of Computer Science and Information Engineering, Fu Jen Catholic University, Taipei, Taiwan
- Chun-Jung Lin
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital Linkou Main Branch, Taoyuan, Taiwan
- Chang Gung University College of Medicine, Taoyuan, Taiwan
- Puo-Hsien Le
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital Linkou Main Branch, Taoyuan, Taiwan
- Chang Gung University College of Medicine, Taoyuan, Taiwan
- Cheng-Yu Lin
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital Linkou Main Branch, Taoyuan, Taiwan
- Tony Kuo
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital Linkou Main Branch, Taoyuan, Taiwan
19
Zhang Y, Yang G, Gong C, Zhang J, Wang S, Wang Y. Polyp segmentation with interference filtering and dynamic uncertainty mining. Phys Med Biol 2024; 69:075016. [PMID: 38382099 DOI: 10.1088/1361-6560/ad2b94]
Abstract
Objective. Accurate polyp segmentation from colonoscopy images plays a crucial role in the early diagnosis and treatment of colorectal cancer. However, existing polyp segmentation methods are inevitably affected by various image noises, such as reflections, motion blur, and feces, which significantly affect the performance and generalization of the model. Coupled with ambiguous boundaries between polyps and surrounding tissue, i.e. small inter-class differences, accurate polyp segmentation remains a challenging problem. Approach. To address these issues, we propose a novel two-stage polyp segmentation method that leverages a preprocessing sub-network (Pre-Net) and a dynamic uncertainty mining network (DUMNet) to improve the accuracy of polyp segmentation. Pre-Net identifies and filters out interference regions before feeding the colonoscopy images to the polyp segmentation network DUMNet. Considering the confusing polyp boundaries, DUMNet employs the uncertainty mining module (UMM) to dynamically focus on foreground, background, and uncertain regions based on different pixel confidences. UMM helps to mine and enhance more detailed context, leading to coarse-to-fine polyp segmentation and precise localization of polyp regions. Main results. We conduct experiments on five popular polyp segmentation benchmarks: ETIS, CVC-ClinicDB, CVC-ColonDB, EndoScene, and Kvasir. Our method achieves state-of-the-art performance. Furthermore, the proposed Pre-Net is highly portable and can improve the accuracy of existing polyp segmentation models. Significance. The proposed method improves polyp segmentation performance by eliminating interference and mining uncertain regions, which aids doctors in making precise diagnoses and reduces the risk of colorectal cancer. Our code will be released at https://github.com/zyh5119232/DUMNet.
Affiliation(s)
- Yunhua Zhang
- Northeastern University, Shenyang 110819, People's Republic of China
- DUT Artificial Intelligence Institute, Dalian 116024, People's Republic of China
- Gang Yang
- Northeastern University, Shenyang 110819, People's Republic of China
- Congjin Gong
- Northeastern University, Shenyang 110819, People's Republic of China
- Jianhao Zhang
- Northeastern University, Shenyang 110819, People's Republic of China
- Shuo Wang
- Northeastern University, Shenyang 110819, People's Republic of China
- Yutao Wang
- Northeastern University, Shenyang 110819, People's Republic of China
20
Xu C, Fan K, Mo W, Cao X, Jiao K. Dual ensemble system for polyp segmentation with submodels adaptive selection ensemble. Sci Rep 2024; 14:6152. [PMID: 38485963 PMCID: PMC10940608 DOI: 10.1038/s41598-024-56264-2]
Abstract
Colonoscopy is one of the main methods of detecting colon polyps, and it is widely used to prevent and diagnose colon cancer. With the rapid development of computer vision, deep learning-based semantic segmentation methods for colon polyps have been widely researched. However, the accuracy and stability of some methods in colon polyp segmentation tasks still leave room for improvement. In addition, the question of how to select appropriate sub-models in ensemble learning for the colon polyp segmentation task remains open. To solve these problems, we first exploit multiple complementary high-level semantic features through the Multi-Head Control Ensemble. Then, to solve the sub-model selection problem during training, we propose the SDBH-PSO Ensemble for sub-model selection and for optimizing ensemble weights on different datasets. Experiments were conducted on the public datasets CVC-ClinicDB, Kvasir, CVC-ColonDB, ETIS-LaribPolypDB and PolypGen. The results show that the DET-Former, constructed from the Multi-Head Control Ensemble and the SDBH-PSO Ensemble, consistently improves accuracy across different datasets. The Multi-Head Control Ensemble demonstrated superior feature-fusion capability in the experiments, and the SDBH-PSO Ensemble demonstrated excellent sub-model selection capability, which will retain significant reference value and practical utility as deep learning networks evolve.
Affiliation(s)
- Cun Xu
- Guilin University of Electronic Technology, Guilin, 541000, China
- Kefeng Fan
- China Electronics Standardization Institute, Beijing, 100007, China
- Wei Mo
- Guilin University of Electronic Technology, Guilin, 541000, China
- Xuguang Cao
- Guilin University of Electronic Technology, Guilin, 541000, China
- Kaijie Jiao
- Guilin University of Electronic Technology, Guilin, 541000, China
21
Zheng J, Yan Y, Zhao L, Pan X. CGMA-Net: Cross-Level Guidance and Multi-Scale Aggregation Network for Polyp Segmentation. IEEE J Biomed Health Inform 2024; 28:1424-1435. [PMID: 38127598 DOI: 10.1109/jbhi.2023.3345479]
Abstract
Colonoscopy is considered the best method for preventing and controlling colorectal cancer, a disease with extremely high rates of mortality and morbidity. Automated polyp segmentation of colonoscopy images is of great importance, since manual polyp segmentation requires considerable time from experienced specialists. However, due to the high similarity between polyps and mucosa, together with the complex morphological features of colonic polyps, the performance of automatic polyp segmentation is still unsatisfactory. Accordingly, we propose the Cross-level Guidance and Multi-scale Aggregation network (CGMA-Net) to improve performance. Specifically, three modules, Cross-level Feature Guidance (CFG), Multi-scale Aggregation Decoder (MAD), and Details Refinement (DR), are individually proposed and synergistically assembled. With CFG, we generate spatial attention maps from the higher-level features and multiply them with the lower-level features, highlighting the region of interest and suppressing background information. In MAD, we use multiple dilated convolutions of different sizes in parallel to capture long-range dependencies between features. For DR, an asynchronous convolution is used along with an attention mechanism to enhance both local details and global information. The proposed CGMA-Net is evaluated on two benchmark datasets, CVC-ClinicDB and Kvasir-SEG; the results demonstrate that our method not only delivers state-of-the-art performance but also has relatively few parameters. Concretely, we achieve Dice Similarity Coefficients (DSC) of 91.85% on Kvasir-SEG and 95.73% on CVC-ClinicDB. An assessment of model generalization yields DSC scores of 86.25% and 86.97% on the two datasets, respectively.
22
Zhu Y, Lyu X, Tao X, Wu L, Yin A, Liao F, Hu S, Wang Y, Zhang M, Huang L, Wang J, Zhang C, Gong D, Jiang X, Zhao L, Yu H. A newly developed deep learning-based system for automatic detection and classification of small bowel lesions during double-balloon enteroscopy examination. BMC Gastroenterol 2024; 24:10. [PMID: 38166722 PMCID: PMC10759410 DOI: 10.1186/s12876-023-03067-w]
Abstract
BACKGROUND Double-balloon enteroscopy (DBE) is a standard method for diagnosing and treating small bowel disease. However, DBE may yield false-negative results due to oversight or inexperience. We aim to develop a computer-aided diagnostic (CAD) system for the automatic detection and classification of small bowel abnormalities in DBE. DESIGN AND METHODS A total of 5201 images were collected from Renmin Hospital of Wuhan University to construct a detection model for localizing lesions during DBE, and 3021 images were collected to construct a classification model for classifying lesions into four classes: protruding lesion, diverticulum, erosion & ulcer, and angioectasia. The performance of the two models was evaluated using 1318 normal images, 915 abnormal images, and 65 videos from independent patients and then compared with that of 8 endoscopists; expert consensus served as the reference standard. RESULTS For the image test set, the detection model achieved a sensitivity of 92% (843/915) and an area under the curve (AUC) of 0.947, and the classification model achieved an accuracy of 86%. For the video test set, the accuracy of the system was significantly better than that of the endoscopists (85% vs. 77 ± 6%, p < 0.01); the proposed system was superior to novices and comparable to experts. CONCLUSIONS We established a real-time CAD system for detecting and classifying small bowel lesions in DBE with favourable performance. ENDOANGEL-DBE has the potential to help endoscopists, especially novices, in clinical practice and may reduce the miss rate of small bowel lesions.
Affiliation(s)
- Yijie Zhu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xiaoguang Lyu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xiao Tao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Lianlian Wu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Anning Yin
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Fei Liao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Shan Hu
- School of Computer Science, Wuhan University, Wuhan, China
| | - Yang Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Mengjiao Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Li Huang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Junxiao Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Chenxia Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Dexin Gong
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Xiaoda Jiang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
| | - Liang Zhao
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China.
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China.
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China.
| | - Honggang Yu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China.
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China.
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China.
| |
|
23
|
Wu L, Gao X, Hu Z, Zhang S. Pattern-Aware Transformer: Hierarchical Pattern Propagation in Sequential Medical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:405-415. [PMID: 37594875 DOI: 10.1109/tmi.2023.3306468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/20/2023]
Abstract
This paper investigates how to effectively mine contextual information among sequential images and jointly model them in medical imaging tasks. Different from state-of-the-art methods that model sequential correlations via point-wise token encoding, this paper develops a novel hierarchical pattern-aware tokenization strategy. It handles distinct visual patterns independently and hierarchically, which not only ensures the full flexibility of attention aggregation under different pattern representations but also preserves both local and global information simultaneously. Based on this strategy, we propose a Pattern-Aware Transformer (PATrans) featuring a global-local dual-path pattern-aware cross-attention mechanism to achieve hierarchical pattern matching and propagation among sequential images. Furthermore, PATrans is plug-and-play and can be seamlessly integrated into various backbone networks for diverse downstream sequence modeling tasks. We demonstrate its general application paradigm across four domains and five benchmarks in video object detection and 3D volumetric semantic segmentation. Impressively, PATrans sets a new state of the art on all of these benchmarks, i.e., CVC-Video (92.3% detection F1), ASU-Mayo (99.1% localization F1), Lung Tumor (78.59% DSC), Nasopharynx Tumor (75.50% DSC), and Kidney Tumor (87.53% DSC). Codes and models are available at https://github.com/GGaoxiang/PATrans.
|
24
|
Dao HV, Nguyen BP, Nguyen TT, Lam HN, Nguyen TTH, Dang TT, Hoang LB, Le HQ, Dao LV. Application of artificial intelligence in gastrointestinal endoscopy in Vietnam: a narrative review. Ther Adv Gastrointest Endosc 2024; 17:26317745241306562. [PMID: 39734422 PMCID: PMC11672465 DOI: 10.1177/26317745241306562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/16/2024] [Accepted: 11/25/2024] [Indexed: 12/31/2024] Open
Abstract
The utilization of artificial intelligence (AI) in gastrointestinal (GI) endoscopy has witnessed significant progress and promising results in recent years worldwide. From 2019 to 2023, the European Society of Gastrointestinal Endoscopy released multiple guidelines/consensus statements with recommendations on integrating AI for detecting and classifying lesions in practical endoscopy. In Vietnam, since 2019, several preliminary studies have been conducted to develop AI algorithms for GI endoscopy, focusing on lesion detection. These studies have yielded high accuracy, ranging from 86% to 92%. For upper GI endoscopy, ongoing research directions comprise image quality assessment, detection of anatomical landmarks, simulation of image-enhanced endoscopy, and semi-automated tools supporting the delineation of GI lesions on endoscopic images. For lower GI endoscopy, most studies focus on developing AI algorithms for the detection of colorectal polyps and their classification based on the risk of malignancy. In conclusion, the application of AI in this field represents a promising research direction, presenting both challenges and opportunities for real-world implementation within the Vietnamese healthcare context.
Affiliation(s)
- Hang Viet Dao
- Research and Education Department, Institute of Gastroenterology and Hepatology, 09 Dao Duy Anh Street, Dong Da District, Hanoi City, Vietnam
- Department of Internal Medicine, Hanoi Medical University, Hanoi, Vietnam
- Endoscopy Center, Hanoi Medical University Hospital, Hanoi, Vietnam
| | | | | | - Hoa Ngoc Lam
- Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
| | | | - Thao Thi Dang
- Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
| | - Long Bao Hoang
- Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
| | - Hung Quang Le
- Endoscopy Center, Hanoi Medical University Hospital, Hanoi, Vietnam
| | - Long Van Dao
- Department of Internal Medicine, Hanoi Medical University, Hanoi, Vietnam
- Endoscopy Center, Hanoi Medical University Hospital, Hanoi, Vietnam
- Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
| |
|
25
|
Sui D, Liu W, Zhang Y, Li Y, Luo G, Wang K, Guo M. ColonNet: A novel polyp segmentation framework based on LK-RFB and GPPD. Comput Biol Med 2023; 166:107541. [PMID: 37804779 DOI: 10.1016/j.compbiomed.2023.107541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Revised: 09/11/2023] [Accepted: 09/28/2023] [Indexed: 10/09/2023]
Abstract
Colorectal cancer (CRC) holds the distinction of being the most prevalent malignant tumor affecting the digestive system. It is a formidable global health challenge, ranking as the fourth leading cause of cancer-related fatalities around the world. Despite considerable advancements in comprehending and addressing CRC, recurring tumors and metastasis remain a major cause of high morbidity and mortality during treatment. Currently, colonoscopy is the predominant method for CRC screening. Artificial intelligence has emerged as a promising tool for aiding the diagnosis of polyps and has demonstrated significant potential. Unfortunately, most segmentation methods face challenges in terms of limited accuracy and generalization to different datasets, and slow processing and analysis speed in particular has become a major obstacle. In this study, we propose a fast and efficient polyp segmentation framework based on the Large-Kernel Receptive Field Block (LK-RFB) and Global Parallel Partial Decoder (GPPD). Our proposed ColonNet has been extensively tested and proven effective, achieving a Dice coefficient of over 0.910 and an FPS of over 102 on the CVC-300 dataset. Compared to state-of-the-art (SOTA) methods, ColonNet achieves the highest FPS (over 102) while maintaining excellent segmentation results, attaining the best or comparable performance on five publicly available datasets and establishing a new SOTA. The code will be released at: https://github.com/SPECTRELWF/ColonNet.
Affiliation(s)
- Dong Sui
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China.
| | - Weifeng Liu
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China
| | - Yue Zhang
- College of Computer Science and Technology, Harbin Engineering University, Harbin, China.
| | - Yang Li
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China
| | - Gongning Luo
- Perceptual Computing Research Center, Harbin Institute of Technology, Harbin, China
| | - Kuanquan Wang
- Perceptual Computing Research Center, Harbin Institute of Technology, Harbin, China
| | - Maozu Guo
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China
| |
|
26
|
Sánchez-Peralta LF, Glover B, Saratxaga CL, Ortega-Morán JF, Nazarian S, Picón A, Pagador JB, Sánchez-Margallo FM. Clinical Validation Benchmark Dataset and Expert Performance Baseline for Colorectal Polyp Localization Methods. J Imaging 2023; 9:167. [PMID: 37754931 PMCID: PMC10532435 DOI: 10.3390/jimaging9090167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Revised: 08/18/2023] [Accepted: 08/18/2023] [Indexed: 09/28/2023] Open
Abstract
Colorectal cancer is one of the leading causes of death worldwide, but, fortunately, early detection greatly increases survival rates, with the adenoma detection rate serving as one surrogate marker for colonoscopy quality. Artificial intelligence and deep learning methods have been applied with great success to improve polyp detection and localization and, therefore, the adenoma detection rate. In this regard, a comparison with clinical experts is required to prove the added value of such systems. Nevertheless, there is no standardized comparison in a laboratory setting before their clinical validation. The ClinExpPICCOLO dataset comprises 65 unedited endoscopic images that represent the clinical setting. They include white light imaging and narrow band imaging, with one third of the images containing a lesion; unlike other public datasets, however, the lesion does not appear well centered in the image. Together with the dataset, an expert clinical performance baseline has been established from the performance of 146 gastroenterologists, who were asked to locate the lesions in the selected images. Results show statistically significant differences between experience groups. Expert gastroenterologists' accuracy was 77.74%, while sensitivity and specificity were 86.47% and 74.33%, respectively. These values can be established as minimum targets for a DL method before performing a clinical trial in the hospital setting.
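The accuracy, sensitivity, and specificity figures quoted for the expert baseline follow directly from confusion-matrix counts. A minimal sketch, assuming hypothetical per-reader counts (the abstract does not report them):

```python
def screening_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # fraction of lesion images correctly flagged
    specificity = tn / (tn + fp)   # fraction of lesion-free images correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for a single reader over a 65-image set (illustration only).
acc, sens, spec = screening_metrics(tp=19, fp=12, tn=32, fn=2)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
```

With such counts in hand, the per-experience-group comparison reported in the study reduces to comparing these three ratios across reader cohorts.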
Affiliation(s)
- Luisa F. Sánchez-Peralta
- Jesús Usón Minimally Invasive Surgery Centre, E-10071 Cáceres, Spain; (L.F.S.-P.); (J.F.O.-M.); (F.M.S.-M.)
- AI4polypNET Thematic Network, E-08193 Barcelona, Spain
| | - Ben Glover
- Imperial College London, London SW7 2BU, UK; (B.G.); (S.N.)
| | - Cristina L. Saratxaga
- TECNALIA, Basque Research and Technology Alliance (BRTA), E-48160 Derio, Spain; (C.L.S.); (A.P.)
| | - Juan Francisco Ortega-Morán
- Jesús Usón Minimally Invasive Surgery Centre, E-10071 Cáceres, Spain; (L.F.S.-P.); (J.F.O.-M.); (F.M.S.-M.)
- AI4polypNET Thematic Network, E-08193 Barcelona, Spain
| | | | - Artzai Picón
- TECNALIA, Basque Research and Technology Alliance (BRTA), E-48160 Derio, Spain; (C.L.S.); (A.P.)
- Department of Automatic Control and Systems Engineering, University of the Basque Country, E-48013 Bilbao, Spain
| | - J. Blas Pagador
- Jesús Usón Minimally Invasive Surgery Centre, E-10071 Cáceres, Spain; (L.F.S.-P.); (J.F.O.-M.); (F.M.S.-M.)
- AI4polypNET Thematic Network, E-08193 Barcelona, Spain
| | - Francisco M. Sánchez-Margallo
- Jesús Usón Minimally Invasive Surgery Centre, E-10071 Cáceres, Spain; (L.F.S.-P.); (J.F.O.-M.); (F.M.S.-M.)
- AI4polypNET Thematic Network, E-08193 Barcelona, Spain
- RICORS-TERAV Network, ISCIII, E-28029 Madrid, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Cardiovasculares (CIBERCV), Instituto de Salud Carlos III, E-28029 Madrid, Spain
| |
|
27
|
Kim GH, Hwang YJ, Lee H, Sung ES, Nam KW. Convolutional neural network-based vocal cord tumor classification technique for home-based self-prescreening purpose. Biomed Eng Online 2023; 22:81. [PMID: 37596652 PMCID: PMC10439563 DOI: 10.1186/s12938-023-01139-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 07/20/2023] [Indexed: 08/20/2023] Open
Abstract
BACKGROUND In this study, we proposed a deep learning technique that can simultaneously detect suspicious positions of benign vocal cord tumors in laryngoscopic images and classify the tumors into cysts, granulomas, leukoplakia, nodules and polyps. This technique is useful for simplified home-based self-prescreening to detect the development of tumors around the vocal cord early, in the benign stage. RESULTS We implemented four convolutional neural network (CNN) models (two Mask R-CNNs, Yolo V4, and a single-shot detector) that were trained, validated and tested using 2183 laryngoscopic images. The experimental results demonstrated that among the four applied models, Yolo V4 showed the highest F1-score for all tumor types (0.7664, cyst; 0.9875, granuloma; 0.8214, leukoplakia; 0.8119, nodule; and 0.8271, polyp). The model with the lowest false-negative rate differed by tumor type (Yolo V4 for cysts/granulomas and Mask R-CNN for leukoplakia/nodules/polyps). In addition, the embedded Yolo V4 model showed an approximately equivalent F1-score (0.8529) to that of the computer-operated Yolo V4 model (0.8683). CONCLUSIONS Based on these results, we conclude that the proposed deep-learning-based home screening techniques have the potential to aid in the early detection of tumors around the vocal cord and can improve the long-term survival of patients with vocal cord tumors.
Affiliation(s)
- Gun Ho Kim
- Medical Research Institute, Pusan National University, Yangsan, Korea
- Department of Biomedical Engineering, Pusan National University Yangsan Hospital, Yangsan, Korea
| | - Young Jun Hwang
- Department of Biomedical Engineering, School of Medicine, Pusan National University, 49, Busandaehak-Ro, Mulgeum-Eup, Yangsan, 50629, Korea
| | - Hongje Lee
- Department of Nuclear Medicine, Dongnam Institute of Radiological & Medical Sciences, Busan, Korea
| | - Eui-Suk Sung
- Department of Otolaryngology-Head and Neck Surgery, Pusan National University Yangsan Hospital, Yangsan, Korea.
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Pusan National University, Yangsan, Korea.
- Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan, Korea.
| | - Kyoung Won Nam
- Department of Biomedical Engineering, Pusan National University Yangsan Hospital, Yangsan, Korea.
- Department of Biomedical Engineering, School of Medicine, Pusan National University, 49, Busandaehak-Ro, Mulgeum-Eup, Yangsan, 50629, Korea.
- Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Yangsan, Korea.
| |
|
28
|
Jia X, Shkolyar E, Laurie MA, Eminaga O, Liao JC, Xing L. Tumor detection under cystoscopy with transformer-augmented deep learning algorithm. Phys Med Biol 2023; 68:10.1088/1361-6560/ace499. [PMID: 37548023 PMCID: PMC10697018 DOI: 10.1088/1361-6560/ace499] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Accepted: 07/05/2023] [Indexed: 08/08/2023]
Abstract
Objective. Accurate tumor detection is critical in cystoscopy to improve bladder cancer resection and decrease recurrence. Advanced deep learning algorithms hold the potential to improve the performance of standard white-light cystoscopy (WLC) in a noninvasive and cost-effective fashion. The purpose of this work is to develop a cost-effective, transformer-augmented deep learning algorithm for accurate detection of bladder tumors in WLC and to assess its performance on archived patient data. Approach. 'CystoNet-T', a deep learning-based bladder tumor detector, was developed with a transformer-augmented pyramidal CNN architecture to improve automated tumor detection in WLC. CystoNet-T incorporates the self-attention mechanism by attaching transformer encoder modules to the pyramidal layers of the feature pyramid network (FPN) and obtains multi-scale activation maps with global feature aggregation. Features resulting from context augmentation serve as the input to a region-based detector to produce tumor detection predictions. The training set was constructed from 510 WLC frames obtained from cystoscopy video sequences of 54 patients. The test set was constructed from 101 images obtained from WLC sequences of 13 patients. Main results. CystoNet-T was evaluated on the test set with 96.4 F1 and 91.4 AP (Average Precision). This result improved on the Faster R-CNN and YOLO benchmarks by 7.3 points in F1 and 3.8 points in AP. The improvement is attributed to the strong global attention of CystoNet-T and the better feature learning of the pyramid architecture throughout training. The model was found to be particularly effective in highlighting foreground information for precise localization of true positives while favorably avoiding false alarms. Significance. We have developed a deep learning algorithm that accurately detects bladder tumors in WLC. The transformer-augmented AI framework promises to aid in clinical decision-making for improved bladder cancer diagnosis and therapeutic guidance.
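The detection F1 quoted in this and several neighboring abstracts is conventionally computed by matching predicted boxes to ground-truth boxes at an IoU threshold and counting true/false positives. A minimal sketch, not the authors' evaluation code; the greedy matching, 0.5 threshold, and (x1, y1, x2, y2) box format are assumptions:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_f1(preds, gts, thr=0.5):
    """F1 from greedy one-to-one matching of predictions to ground truth at IoU >= thr."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: one of two predictions overlaps a ground-truth box at IoU 0.81.
preds = [(0, 0, 10, 10), (20, 20, 30, 30)]
gts = [(1, 1, 10, 10), (50, 50, 60, 60)]
f1 = detection_f1(preds, gts)  # 1 TP, 1 FP, 1 FN -> F1 = 0.5
```

AP additionally sweeps a confidence threshold over ranked predictions; this sketch only covers the fixed-threshold F1 case.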
Affiliation(s)
- Xiao Jia
- School of Control Science and Engineering, Shandong University, Jinan, People’s Republic of China
- Department of Radiation Oncology, Stanford University, Stanford, CA, United States of America
- Equal contribution
| | - Eugene Shkolyar
- Department of Urology, Stanford University, Stanford, CA, United States of America
- VA Palo Alto Health Care System, Palo Alto, CA, United States of America
- Equal contribution
| | - Mark A Laurie
- Department of Radiation Oncology, Stanford University, Stanford, CA, United States of America
- Department of Urology, Stanford University, Stanford, CA, United States of America
| | - Okyaz Eminaga
- Department of Urology, Stanford University, Stanford, CA, United States of America
- VA Palo Alto Health Care System, Palo Alto, CA, United States of America
| | - Joseph C Liao
- Department of Urology, Stanford University, Stanford, CA, United States of America
- VA Palo Alto Health Care System, Palo Alto, CA, United States of America
| | - Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, United States of America
| |
|
29
|
Bian H, Jiang M, Qian J. The investigation of constraints in implementing robust AI colorectal polyp detection for sustainable healthcare system. PLoS One 2023; 18:e0288376. [PMID: 37437026 DOI: 10.1371/journal.pone.0288376] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 06/24/2023] [Indexed: 07/14/2023] Open
Abstract
Colorectal cancer (CRC) is one of the significant threats to public health and to a sustainable healthcare system during urbanization. As the primary screening method, colonoscopy can effectively detect polyps before they evolve into cancerous growths. However, visual inspection by endoscopists is insufficient to provide consistently reliable polyp detection in colonoscopy videos and images for CRC screening. Artificial intelligence (AI)-based object detection is considered a potent solution for overcoming the limitations of visual inspection and mitigating human errors in colonoscopy. This study implemented a YOLOv5 object detection model to investigate the performance of mainstream one-stage approaches in colorectal polyp detection. A variety of training datasets and model structure configurations were employed to identify the determinative factors in practical applications. The designed experiments show that the model yields acceptable results when assisted by transfer learning, and highlight that the primary constraint in implementing deep learning polyp detection is the scarcity of training data. Model performance improved by 15.6% in terms of average precision (AP) when the original training dataset was expanded. Furthermore, the experimental results were analysed from a clinical perspective to identify potential causes of false positives. Finally, a quality management framework is proposed for future dataset preparation and model development in AI-driven polyp detection for smart healthcare solutions.
Affiliation(s)
- Haitao Bian
- College of Safety Science and Engineering, Nanjing Tech University, Nanjing, Jiangsu, China
| | - Min Jiang
- KLA Corporation, Milpitas, California, United States of America
| | - Jingjing Qian
- Department of Gastroenterology, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, Jiangsu, China
| |
|
30
|
Dumitru RG, Peteleaza D, Craciun C. Using DUCK-Net for polyp image segmentation. Sci Rep 2023; 13:9803. [PMID: 37328572 PMCID: PMC10276013 DOI: 10.1038/s41598-023-36940-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Accepted: 06/13/2023] [Indexed: 06/18/2023] Open
Abstract
This paper presents a novel supervised convolutional neural network architecture, "DUCK-Net", capable of effectively learning and generalizing from small amounts of medical images to perform accurate segmentation tasks. Our model utilizes an encoder-decoder structure with a residual downsampling mechanism and a custom convolutional block to capture and process image information at multiple resolutions in the encoder segment. We employ data augmentation techniques to enrich the training set, thus increasing our model's performance. While our architecture is versatile and applicable to various segmentation tasks, in this study we demonstrate its capabilities specifically for polyp segmentation in colonoscopy images. We evaluate the performance of our method on several popular benchmark datasets for polyp segmentation (Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, and ETIS-LARIBPOLYPDB), showing that it achieves state-of-the-art results in terms of mean Dice coefficient, Jaccard index, Precision, Recall, and Accuracy. Our approach demonstrates strong generalization capabilities, achieving excellent performance even with limited training data.
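The mean Dice coefficient and Jaccard index used as headline metrics in this and other segmentation abstracts can be sketched for binary masks as follows; this is an illustrative implementation on flat 0/1 sequences, not the authors' evaluation code:

```python
def dice_jaccard(pred, target):
    """Dice coefficient and Jaccard index for flat binary masks (0/1 sequences)."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    # Convention: two empty masks count as a perfect match.
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
d, j = dice_jaccard(pred, target)  # dice = 2*2/(3+3) ~ 0.667, jaccard = 2/4 = 0.5
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), which is why benchmark rankings by Dice and Jaccard usually agree.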
|
31
|
Tran TN, Adler TJ, Yamlahi A, Christodoulou E, Godau P, Reinke A, Tizabi MD, Sauer P, Persicke T, Albert JG, Maier-Hein L. Sources of performance variability in deep learning-based polyp detection. Int J Comput Assist Radiol Surg 2023:10.1007/s11548-023-02936-9. [PMID: 37266886 PMCID: PMC10329574 DOI: 10.1007/s11548-023-02936-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 04/24/2023] [Indexed: 06/03/2023]
Abstract
PURPOSE Validation metrics are a key prerequisite for the reliable tracking of scientific progress and for deciding on the potential clinical translation of methods. While recent initiatives aim to develop comprehensive theoretical frameworks for understanding metric-related pitfalls in image analysis problems, there is a lack of experimental evidence on the concrete effects of common and rare pitfalls on specific applications. We address this gap in the literature in the context of colon cancer screening. METHODS Our contribution is twofold. Firstly, we present the winning solution of the Endoscopy Computer Vision Challenge on colon cancer detection, conducted in conjunction with the IEEE International Symposium on Biomedical Imaging 2022. Secondly, we demonstrate the sensitivity of commonly used metrics to a range of hyperparameters as well as the consequences of poor metric choices. RESULTS Based on comprehensive validation studies performed with patient data from six clinical centers, we found all commonly applied object detection metrics to be subject to high inter-center variability. Furthermore, our results clearly demonstrate that the adaptation of standard hyperparameters used in the computer vision community does not generally lead to the clinically most plausible results. Finally, we present localization criteria that correspond well to clinical relevance. CONCLUSION We conclude from our study that (1) performance results in polyp detection are highly sensitive to various design choices, (2) common metric configurations do not reflect the clinical need and rely on suboptimal hyperparameters and (3) comparison of performance across datasets can be largely misleading. Our work could be a first step towards reconsidering common validation strategies in deep learning-based colonoscopy and beyond.
Affiliation(s)
- T N Tran
- Division of Intelligent Medical Systems, DKFZ, Heidelberg, Germany.
| | - T J Adler
- Division of Intelligent Medical Systems, DKFZ, Heidelberg, Germany
| | - A Yamlahi
- Division of Intelligent Medical Systems, DKFZ, Heidelberg, Germany
| | - E Christodoulou
- Division of Intelligent Medical Systems, DKFZ, Heidelberg, Germany
| | - P Godau
- Division of Intelligent Medical Systems, DKFZ, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, University of Heidelberg, Heidelberg, Germany
| | - A Reinke
- Division of Intelligent Medical Systems, DKFZ, Heidelberg, Germany
| | - M D Tizabi
- Division of Intelligent Medical Systems, DKFZ, Heidelberg, Germany
| | - P Sauer
- Interdisciplinary Endoscopy Center (IEZ), University Hospital Heidelberg, Heidelberg, Germany
| | - T Persicke
- Department of Gastroenterology, Hepatology and Endocrinology, Robert-Bosch Hospital (RBK), Stuttgart, Germany
| | - J G Albert
- Department of Gastroenterology, Hepatology and Endocrinology, Robert-Bosch Hospital (RBK), Stuttgart, Germany
- Clinic for General Internal Medicine, Gastroenterology, Hepatology and Infectiology, Pneumology, Klinikum Stuttgart, Stuttgart, Germany
| | - L Maier-Hein
- Division of Intelligent Medical Systems, DKFZ, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, University of Heidelberg, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a Partnership Between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
| |
|
32
|
Ghaleb Al-Mekhlafi Z, Mohammed Senan E, Sulaiman Alshudukhi J, Abdulkarem Mohammed B. Hybrid Techniques for Diagnosing Endoscopy Images for Early Detection of Gastrointestinal Disease Based on Fusion Features. INT J INTELL SYST 2023. [DOI: 10.1155/2023/8616939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
Gastrointestinal (GI) diseases, particularly tumours, are considered one of the most widespread and dangerous diseases and thus need timely health care for early detection to reduce deaths. Endoscopy technology is an effective technique for diagnosing GI diseases, thus producing a video containing thousands of frames. However, it is difficult for a gastroenterologist to analyse all the images, and it takes a long time to keep track of all the frames. Thus, artificial intelligence systems provide solutions to this challenge by analysing thousands of images with high speed and effective accuracy. Hence, systems with different methodologies are developed in this work. The first methodology for diagnosing endoscopy images of GI diseases is by using VGG-16 + SVM and DenseNet-121 + SVM. The second methodology for diagnosing endoscopy images of gastrointestinal diseases by artificial neural network (ANN) is based on fused features between VGG-16 and DenseNet-121 before and after high-dimensionality reduction by principal component analysis (PCA). The third methodology is by ANN and is based on the features fused between VGG-16 and handcrafted features and the features fused between DenseNet-121 and the handcrafted features. Herein, handcrafted features combine the features of the gray level cooccurrence matrix (GLCM), discrete wavelet transform (DWT), fuzzy colour histogram (FCH), and local binary pattern (LBP) methods. All systems achieved promising results for diagnosing endoscopy images of the gastroenterology data set. The ANN network reached an accuracy, sensitivity, precision, specificity, and AUC of 98.9%, 98.70%, 98.94%, 99.69%, and 99.51%, respectively, based on the fused features of VGG-16 and the handcrafted features.
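As one illustration of the handcrafted descriptors named in this abstract, a gray level co-occurrence matrix (GLCM) counts how often pairs of gray levels occur at a fixed pixel offset; texture features such as contrast are then derived from the normalized matrix. The sketch below is a minimal, illustrative implementation (not the authors' code), with an invented toy image:

```python
def glcm(image, levels, offset=(0, 1)):
    """Count co-occurrences of gray-level pairs at the given (row, col) offset."""
    dr, dc = offset
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def glcm_contrast(m):
    """Contrast feature: squared gray-level difference weighted by co-occurrence frequency."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

image = [[0, 0, 1],
         [1, 1, 0]]
m = glcm(image, levels=2)  # horizontal neighbour pairs: (0,0), (0,1), (1,1), (1,0)
```

In a pipeline like the one described, such scalar texture features would be concatenated with the CNN features before classification.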
Collapse
Affiliation(s)
- Zeyad Ghaleb Al-Mekhlafi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
| | - Jalawi Sulaiman Alshudukhi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
| | - Badiea Abdulkarem Mohammed
- Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
| |
Collapse
|
33
|
Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. EVOLVING SYSTEMS 2023; 15:1-46. [PMID: 38625364 PMCID: PMC9987406 DOI: 10.1007/s12530-023-09491-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 02/13/2023] [Indexed: 03/08/2023]
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting a nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate and distinguish independent nuclei. Deep learning is swiftly paving its way into nucleus segmentation, with numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning over the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Collapse
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal India
| | - Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal India
| | - Mainak Deb
- Wipro Technologies, Pune, Maharashtra India
| | - Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
| | - Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal India
| |
Collapse
|
34
|
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158 DOI: 10.1038/s41575-022-00701-y] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/03/2022] [Indexed: 11/10/2022]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Collapse
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK.
| |
Collapse
|
35
|
Katta MR, Kalluru PKR, Bavishi DA, Hameed M, Valisekka SS. Artificial intelligence in pancreatic cancer: diagnosis, limitations, and the future prospects-a narrative review. J Cancer Res Clin Oncol 2023:10.1007/s00432-023-04625-1. [PMID: 36739356 DOI: 10.1007/s00432-023-04625-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Accepted: 01/27/2023] [Indexed: 02/06/2023]
Abstract
PURPOSE This review aims to explore the role of AI in the application of pancreatic cancer management and make recommendations to minimize the impact of the limitations to provide further benefits from AI use in the future. METHODS A comprehensive review of the literature was conducted using a combination of MeSH keywords, including "Artificial intelligence", "Pancreatic cancer", "Diagnosis", and "Limitations". RESULTS The beneficial implications of AI in the detection of biomarkers, diagnosis, and prognosis of pancreatic cancer have been explored. In addition, current drawbacks of AI use have been divided into subcategories encompassing statistical, training, and knowledge limitations; data handling, ethical and medicolegal aspects; and clinical integration and implementation. CONCLUSION Artificial intelligence (AI) refers to computational machine systems that accomplish a set of given tasks by imitating human intelligence in an exponential learning pattern. AI in gastrointestinal oncology has continued to provide significant advancements in the clinical, molecular, and radiological diagnosis and intervention techniques required to improve the prognosis of many gastrointestinal cancer types, particularly pancreatic cancer.
Collapse
Affiliation(s)
| | | | | | - Maha Hameed
- Clinical Research Department, King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia.
| | | |
Collapse
|
36
|
Colon cancer stage detection in colonoscopy images using YOLOv3 MSF deep learning architecture. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104283] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
37
|
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. [PMID: 36084720 DOI: 10.1016/j.gie.2022.08.043] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 08/24/2022] [Accepted: 08/30/2022] [Indexed: 01/28/2023]
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the available number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI system. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases.
CONCLUSIONS This review provides greater insight on public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Collapse
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
| | - Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
| | - Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
| | - Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
| | - Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
| | - Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
| |
Collapse
|
38
|
González-Bueno Puyal J, Brandao P, Ahmad OF, Bhatia KK, Toth D, Kader R, Lovat L, Mountney P, Stoyanov D. Spatio-temporal classification for polyp diagnosis. BIOMEDICAL OPTICS EXPRESS 2023; 14:593-607. [PMID: 36874484 PMCID: PMC9979670 DOI: 10.1364/boe.473446] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 11/25/2022] [Accepted: 12/06/2022] [Indexed: 06/18/2023]
Abstract
Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance during a procedure can vary, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness in extensive experiments on both internal and openly available benchmark datasets.
Collapse
Affiliation(s)
- Juana González-Bueno Puyal
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Odin Vision, London W1W 7TY, UK
| | | | - Omer F. Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
| | | | | | - Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
| | - Laurence Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
| | | | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
| |
Collapse
|
39
|
A Deep-Learning Approach for Identifying and Classifying Digestive Diseases. Symmetry (Basel) 2023. [DOI: 10.3390/sym15020379] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
Abstract
The digestive tract, often known as the gastrointestinal (GI) tract or the gastrointestinal system, is affected by digestive ailments. The stomach, large and small intestines, liver, pancreas and gallbladder are all components of the digestive tract. A digestive disease is any illness that affects the digestive system. Conditions range from moderate to serious. Heartburn, cancer, irritable bowel syndrome (IBS) and lactose intolerance are only a few of the frequent issues. The digestive system may be treated with many different surgical treatments. Laparoscopy, open surgery and endoscopy are a few examples of these techniques. This paper proposes transfer-learning models with different pre-trained models to identify and classify digestive diseases. The proposed systems showed an increase in metrics, such as the accuracy, precision and recall, when compared with other state-of-the-art methods, and EfficientNetB0 achieved the best performance results of 98.01% accuracy, 98% precision and 98% recall.
Collapse
|
40
|
Mansur A, Saleem Z, Elhakim T, Daye D. Role of artificial intelligence in risk prediction, prognostication, and therapy response assessment in colorectal cancer: current state and future directions. Front Oncol 2023; 13:1065402. [PMID: 36761957 PMCID: PMC9905815 DOI: 10.3389/fonc.2023.1065402] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 01/09/2023] [Indexed: 01/26/2023] Open
Abstract
Artificial Intelligence (AI) is a branch of computer science that utilizes optimization, probabilistic and statistical approaches to analyze and make predictions based on a vast amount of data. In recent years, AI has revolutionized the field of oncology and spearheaded novel approaches in the management of various cancers, including colorectal cancer (CRC). Notably, the applications of AI to diagnose, prognosticate, and predict response to therapy in CRC are gaining traction and proving to be promising. There have also been several advancements in AI technologies to help predict metastases in CRC and in Computer-Aided Detection (CAD) Systems to improve miss rates for colorectal neoplasia. This article provides a comprehensive review of the role of AI in predicting risk, prognosis, and response to therapies among patients with CRC.
Collapse
Affiliation(s)
- Arian Mansur
- Harvard Medical School, Boston, MA, United States
| | | | - Tarig Elhakim
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | - Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| |
Collapse
|
41
|
Sadagopan R, Ravi S, Adithya SV, Vivekanandhan S. PolyEffNetV1: A CNN based colorectal polyp detection in colonoscopy images. Proc Inst Mech Eng H 2023; 237:406-418. [PMID: 36683465 DOI: 10.1177/09544119221149233] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
Presence of polyps is the root cause of colorectal cancer; hence, identification of such polyps at an early stage can enable advance treatment and help avoid complications for the patient. Since there are variations in the size and shape of polyps, the task of detecting them in colonoscopy images becomes challenging. Hence, our work leverages deep learning algorithms for segmentation and classification of polyps in colonoscopy images. In this work, we propose PolypEffNetV1, a U-Net to segment the different pathologies present in the colonoscopy frame and EfficientNetB5 to classify the detected pathologies. The colonoscopy images for the segmentation process are taken from the open-source dataset KVASIR; it consists of 1000 images with "ground truth" labeling. For classification, a combination of the KVASIR and CVC datasets is incorporated, which consists of 1612 images with 1696 polyp regions and 760 non-polyp inflamed regions. The proposed PolypEffNetV1 produced testing accuracy of 97.1%, Jaccard index of 0.84, dice coefficient of 0.91, and F1-score of 0.89. Subsequently, for classification to verify whether the segmented region is polyp or non-polyp inflammation, the developed classifier produced validation accuracy of 99%, specificity of 98%, and sensitivity of 99%. Hence the proposed system could be used by gastroenterologists to identify the presence of polyps in colonoscopy images/videos, which will in turn increase healthcare quality. The developed models can either be deployed on the edge of the device to enable real-time guidance or be integrated with an existing software application for offline review and treatment planning.
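The segmentation metrics this abstract reports, the Dice coefficient and the Jaccard index, are both overlap measures between a predicted and a ground-truth binary mask. A minimal sketch on flattened toy masks (illustrative only, not the authors' implementation):

```python
def dice_jaccard(pred, target):
    """Overlap metrics for two flattened binary masks of equal length."""
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    dice = 2 * inter / total
    jaccard = inter / (total - inter)  # union = |pred| + |target| - intersection
    return dice, jaccard

# Toy example: intersection 2 pixels, union 3 pixels.
d, j = dice_jaccard([1, 1, 0, 1], [1, 0, 0, 1])  # d = 0.8, j = 2/3
```

The two are monotonically related (Dice = 2J / (1 + J)), which is why papers often report either or both.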
Collapse
Affiliation(s)
- Rajkumar Sadagopan
- Department of Biomedical Engineering, Rajalakshmi Engineering College, Chennai, India
- Centre of Excellence in Medical Imaging, Rajalakshmi Engineering College, Chennai, India
| | - Saravanan Ravi
- Department of Biomedical Engineering, Rajalakshmi Engineering College, Chennai, India
- Centre of Excellence in Medical Imaging, Rajalakshmi Engineering College, Chennai, India
| | - Sairam Vuppala Adithya
- Department of Biomedical Engineering, Rajalakshmi Engineering College, Chennai, India
- Centre of Excellence in Medical Imaging, Rajalakshmi Engineering College, Chennai, India
| | - Sapthagirivasan Vivekanandhan
- Department of Biomedical Engineering, Rajalakshmi Engineering College, Chennai, India
- Medical and Life Sciences Department, Engineering R&D Division, IT Services Company, Bengaluru, India
| |
Collapse
|
42
|
Ali S. Where do we stand in AI for endoscopic image analysis? Deciphering gaps and future directions. NPJ Digit Med 2022; 5:184. [PMID: 36539473 PMCID: PMC9767933 DOI: 10.1038/s41746-022-00733-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Accepted: 11/29/2022] [Indexed: 12/24/2022] Open
Abstract
Recent developments in deep learning have enabled data-driven algorithms that can reach human-level performance and beyond. The development and deployment of medical image analysis methods have several challenges, including data heterogeneity due to population diversity and different device manufacturers. In addition, more input from experts is required for a reliable method development process. While the exponential growth in clinical imaging data has enabled deep learning to flourish, data heterogeneity, multi-modality, and rare or inconspicuous disease cases still need to be explored. Endoscopy being highly operator-dependent with grim clinical outcomes in some disease cases, reliable and accurate automated system guidance can improve patient care. Most existing methods need to be more generalisable to unseen target data, patient population variability, and variable disease appearances. The paper reviews recent works on endoscopic image analysis with artificial intelligence (AI) and emphasises the current unmatched needs in this field. Finally, it outlines the future directions for clinically relevant complex AI solutions to improve patient outcomes.
Collapse
Affiliation(s)
- Sharib Ali
- School of Computing, University of Leeds, LS2 9JT, Leeds, UK.
| |
Collapse
|
43
|
Li MD, Huang ZR, Shan QY, Chen SL, Zhang N, Hu HT, Wang W. Performance and comparison of artificial intelligence and human experts in the detection and classification of colonic polyps. BMC Gastroenterol 2022; 22:517. [PMID: 36513975 PMCID: PMC9749329 DOI: 10.1186/s12876-022-02605-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 12/05/2022] [Indexed: 12/15/2022] Open
Abstract
OBJECTIVE The main aim of this study was to analyze the performance of different artificial intelligence (AI) models in endoscopic colonic polyp detection and classification and compare them with doctors with different levels of experience. METHODS We searched the studies on Colonoscopy, Colonic Polyps, Artificial Intelligence, Machine Learning, and Deep Learning published before May 2020 in PubMed, EMBASE, Cochrane, and the citation index of the conference proceedings. The quality of studies was assessed using the QUADAS-2 table of diagnostic test quality evaluation criteria. The random-effects model was calculated using Meta-DISC 1.4 and RevMan 5.3. RESULTS A total of 16 studies were included for meta-analysis. Only one study (1/16) presented externally validated results. The area under the curve (AUC) of the AI group, expert group, and non-expert group for detection and classification of colonic polyps was 0.940, 0.918, and 0.871, respectively. The AI group had slightly lower pooled specificity than the expert group (79% vs. 86%, P < 0.05), but higher pooled sensitivity (88% vs. 80%, P < 0.05). The non-experts had lower pooled specificity in polyp recognition than the experts (81% vs. 86%, P < 0.05) and higher pooled sensitivity (85% vs. 80%, P < 0.05). CONCLUSION The performance of AI in polyp detection and classification is similar to that of human experts, with high sensitivity and moderate specificity. Different tasks may have an impact on the performance of deep learning models and human experts, especially in terms of sensitivity and specificity.
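The pooled sensitivity and specificity figures in this abstract come from a random-effects meta-analysis (Meta-DISC/RevMan). As a simpler illustration of what these quantities measure, the sketch below pools raw 2x2 confusion counts across studies by summation; the study counts are invented for the example, and this naive pooling is not the random-effects method the paper used:

```python
def pooled_sens_spec(studies):
    """Pool 2x2 confusion counts across studies (simple summation, not random-effects)."""
    tp = sum(s["tp"] for s in studies)
    fn = sum(s["fn"] for s in studies)
    tn = sum(s["tn"] for s in studies)
    fp = sum(s["fp"] for s in studies)
    sensitivity = tp / (tp + fn)  # fraction of true polyps detected
    specificity = tn / (tn + fp)  # fraction of non-polyps correctly rejected
    return sensitivity, specificity

studies = [
    {"tp": 80, "fn": 20, "tn": 90, "fp": 10},  # hypothetical study 1
    {"tp": 40, "fn": 10, "tn": 45, "fp": 5},   # hypothetical study 2
]
sens, spec = pooled_sens_spec(studies)  # 120/150 = 0.8, 135/150 = 0.9
```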
Collapse
Affiliation(s)
- Ming-De Li
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou, 510080 People’s Republic of China
| | - Ze-Rong Huang
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou, 510080 People’s Republic of China
| | - Quan-Yuan Shan
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou, 510080 People’s Republic of China
| | - Shu-Ling Chen
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou, 510080 People’s Republic of China
| | - Ning Zhang
- Department of Gastroenterology, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
| | - Hang-Tong Hu
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou, 510080 People’s Republic of China
| | - Wei Wang
- Department of Medical Ultrasonics, Ultrasomics Artificial Intelligence X-Lab, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital of Sun Yat-Sen University, 58 Zhongshan Road 2, Guangzhou, 510080 People’s Republic of China
| |
Collapse
|
44
|
Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. SENSORS (BASEL, SWITZERLAND) 2022; 22:9250. [PMID: 36501951 PMCID: PMC9739266 DOI: 10.3390/s22239250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/24/2022] [Indexed: 06/17/2023]
Abstract
The treatment and diagnosis of colon cancer are considered to be social and economic challenges due to the high mortality rates. Every year, around the world, almost half a million people contract cancer, including colon cancer. Determining the grade of colon cancer mainly depends on analyzing the gland's structure by tissue region, which has led to the existence of various tests for screening that can be utilized to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. This covers many aspects related to colon cancer, such as its symptoms and grades as well as the available imaging modalities (particularly, histopathology images used for analysis) in addition to common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of the current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and we identify their main strengths and limitations. These techniques provide extensive support for identifying the early stages of cancer that lead to early treatment of the disease and produce a lower mortality rate compared with the rate produced after symptoms develop. In addition, these methods can help to prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests to make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Collapse
Affiliation(s)
- Mai Tharwat
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| | - Nehal A. Sakr
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| | - Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
| | - Hassan Soliman
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| | - Kyung-Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
| | - Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
| |
Collapse
|
45
|
Nisha JS, Gopi VP. Colorectal polyp detection in colonoscopy videos using image enhancement and discrete orthonormal Stockwell transform. SĀDHANĀ 2022; 47:234. [DOI: 10.1007/s12046-022-01970-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 08/01/2022] [Accepted: 08/09/2022] [Indexed: 04/01/2025]
|
46
|
Turan M, Durmus F. UC-NfNet: Deep learning-enabled assessment of ulcerative colitis from colonoscopy images. Med Image Anal 2022; 82:102587. [DOI: 10.1016/j.media.2022.102587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 07/12/2022] [Accepted: 08/17/2022] [Indexed: 10/31/2022]
|
47
|
González-Bueno Puyal J, Brandao P, Ahmad OF, Bhatia KK, Toth D, Kader R, Lovat L, Mountney P, Stoyanov D. Polyp detection on video colonoscopy using a hybrid 2D/3D CNN. Med Image Anal 2022; 82:102625. [PMID: 36209637 DOI: 10.1016/j.media.2022.102625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Revised: 08/22/2022] [Accepted: 09/10/2022] [Indexed: 12/15/2022]
Abstract
Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst in clinical practice the treatment is performed on a real-time video feed. Non-curated video data remains a challenge, as it contains low-quality frames when compared to still, selected images often obtained from diagnostic records. Nevertheless, it also embeds temporal information that can be exploited to increase prediction stability. A hybrid 2D/3D convolutional neural network architecture for polyp segmentation is presented in this paper. The network is used to improve polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detections. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients and on the publicly available SUN polyp database. A higher performance and increased generalisability indicate that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm and the inclusion of temporal information.
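A hybrid 2D/3D network is beyond a short snippet, but the core idea this abstract describes, exploiting temporal correlation to stabilise per-frame predictions, can be illustrated with a simple moving-average filter over frame-level polyp probabilities. This is an illustrative stand-in, not the paper's architecture; the probability values are invented:

```python
def smooth_frame_probs(probs, window=3):
    """Average each frame's polyp probability with its temporal neighbours."""
    half = window // 2
    out = []
    for i in range(len(probs)):
        lo, hi = max(0, i - half), min(len(probs), i + half + 1)
        out.append(sum(probs[lo:hi]) / (hi - lo))
    return out

# A single spurious detection at frame 2 is damped below a 0.5 threshold
# once its temporally adjacent frames disagree with it:
smoothed = smooth_frame_probs([0.1, 0.2, 0.9, 0.1, 0.2], window=3)
```

A learned 3D convolution plays an analogous role, but with weights fitted to the data rather than a fixed uniform kernel.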
Affiliation(s)
- Juana González-Bueno Puyal
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK; Odin Vision, London, W1W 7TY, UK.
- Omer F Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK
- Rawen Kader
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK
- Laurence Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, W1W 7TY, UK
48
Nisha JS, Gopi VP, Palanisamy P. Colorectal polyp detection using image enhancement and scaled YOLOv4 algorithm. Biomed Eng Appl Basis Commun 2022; 34. [DOI: 10.4015/s1016237222500260] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/01/2025]
Abstract
Colorectal cancer (CRC) is a common cause of cancer-related death globally and is now the third leading cause of cancer-related mortality worldwide. As the number of colorectal polyp cases rises, it is more important than ever to identify and diagnose them early. Object detection models have recently become popular for extracting highly representative features. Colonoscopy is a useful diagnostic procedure for examining anomalies in the lower half of the digestive tract. This research presents a novel image-enhancement approach followed by a Scaled YOLOv4 network for the early diagnosis of polyps, lowering the high risk of CRC. The proposed network is trained on the CVC-ClinicDB database, while the CVC-ColonDB and ETIS-Larib databases are used for testing. On the CVC-ColonDB database, the performance metrics are precision (95.13%), recall (74.92%), F1-score (83.19%), and F2-score (89.89%). On the ETIS-Larib database, the performance metrics are precision (94.30%), recall (77.30%), F1-score (84.90%), and F2-score (80.20%). On both databases, the proposed methodology outperforms existing methods in terms of F1-score, F2-score, and precision. The proposed YOLO object detection model provides an accurate polyp detection strategy for real-time applications.
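For reference, the F1- and F2-scores quoted above are instances of the F-beta score, where beta > 1 weights recall more heavily than precision. A minimal sketch of the standard formula follows; the precision/recall values used are illustrative round numbers, not a recomputation of the paper's reported (likely per-image-averaged) metrics.

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta score: the weighted harmonic mean of precision and recall.

    beta > 1 weights recall more heavily (F2); beta < 1 weights
    precision more heavily.
    """
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative values: when precision exceeds recall, F2 falls below F1
# because F2 is dominated by the weaker recall.
p, r = 0.95, 0.75
print(f"F1 = {f_beta(p, r, beta=1):.4f}")
print(f"F2 = {f_beta(p, r, beta=2):.4f}")
```

This also explains why precision-heavy detectors can show a noticeable gap between their F1 and F2 numbers on the same test set.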
Affiliation(s)
- J. S. Nisha
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
- Varun P. Gopi
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
- P. Palanisamy
- Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
49
Soons E, Rath T, Hazewinkel Y, van Dop WA, Esposito D, Testoni PA, Siersema PD. Real-time colorectal polyp detection using a novel computer-aided detection system (CADe): a feasibility study. Int J Colorectal Dis 2022; 37:2219-2228. [PMID: 36163514 PMCID: PMC9560918 DOI: 10.1007/s00384-022-04258-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/18/2022] [Indexed: 02/04/2023]
Abstract
BACKGROUND AND AIMS Colonoscopy aims at early detection and removal of precancerous colorectal polyps, thereby preventing the development of colorectal cancer (CRC). Recently, computer-aided detection (CADe) systems have been developed to assist endoscopists in polyp detection during colonoscopy. The aim of this study was to investigate the feasibility and safety of a novel CADe system during real-time colonoscopy in three European tertiary referral centers. METHODS Ninety patients undergoing colonoscopy assisted by a real-time CADe system (DISCOVERY; Pentax Medical, Tokyo, Japan) were prospectively included. The CADe system was turned on only during withdrawal, and its output was displayed on a secondary monitor. To study feasibility, inspection time, polyp detection rate (PDR), adenoma detection rate (ADR), sessile serrated lesion (SSL) detection rate (SDR), and the number of false positives were recorded. To study safety, (severe) adverse events ((S)AEs) were collected. Additionally, user friendliness was rated from 1 (worst) to 10 (best) by the endoscopists. RESULTS Mean inspection time was 10.8 ± 4.3 min, while PDR was 55.6%, ADR 28.9%, and SDR 11.1%. The CADe system users estimated that < 20 false positives occurred in 81 of the colonoscopy procedures (90%). No (S)AEs related to the CADe system were observed during the 30-day follow-up period. User friendliness was rated as good, with a median score of 8/10. CONCLUSION Colonoscopy with this novel CADe system in a real-time setting was feasible and safe. Although PDR and SDR were high compared to previous studies with other CADe systems, future randomized controlled trials are needed to confirm these detection rates. The high SDR is of particular interest, since interval CRC has been suggested to frequently develop through the serrated neoplasia pathway. CLINICAL TRIAL REGISTRATION The study was registered in the Dutch Trial Register (reference number: NL8788).
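The detection rates quoted above are simple proportions over the 90 procedures: a PDR of 55.6%, for example, corresponds to polyps detected in 50 of 90 patients. A minimal sketch of the arithmetic follows; the per-outcome counts (50, 26, 10) are back-calculated from the reported percentages, not taken from the paper itself.

```python
def detection_rate(positive_procedures, total_procedures):
    """Detection rate: percentage of procedures in which at least one
    lesion of the given type was found."""
    return 100.0 * positive_procedures / total_procedures

total = 90  # procedures in the study
# Counts back-calculated from the reported PDR/ADR/SDR percentages.
for label, count in [("PDR", 50), ("ADR", 26), ("SDR", 10)]:
    print(f"{label}: {detection_rate(count, total):.1f}%")
```

The same one-line computation underlies ADR benchmarking across CADe trials, which is why raw procedure counts matter when comparing studies of different sizes.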
Affiliation(s)
- E Soons
- Department of Gastroenterology and Hepatology, Radboud Institute for Health Sciences, Radboud University Medical Center, 9101, 6500 HB, Nijmegen, the Netherlands.
- T Rath
- Department of Internal Medicine 1, Division of Gastroenterology, Friedrich-Alexander-University, Ludwig Demling Endoscopy Center of Excellence, Erlangen Nuernberg, Germany
- Y Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud Institute for Health Sciences, Radboud University Medical Center, 9101, 6500 HB, Nijmegen, the Netherlands
- W A van Dop
- Department of Gastroenterology and Hepatology, Radboud Institute for Health Sciences, Radboud University Medical Center, 9101, 6500 HB, Nijmegen, the Netherlands
- D Esposito
- Gastroenterology and Gastrointestinal Endoscopy Unit, Vita-Salute San Raffaele University, Scientific Institute San Raffaele, Milan, Italy
- P A Testoni
- Gastroenterology and Gastrointestinal Endoscopy Unit, Vita-Salute San Raffaele University, Scientific Institute San Raffaele, Milan, Italy
- P D Siersema
- Department of Gastroenterology and Hepatology, Radboud Institute for Health Sciences, Radboud University Medical Center, 9101, 6500 HB, Nijmegen, the Netherlands
50
Rao HB, Sastry NB, Venu RP, Pattanayak P. The role of artificial intelligence based systems for cost optimization in colorectal cancer prevention programs. Front Artif Intell 2022; 5:955399. [PMID: 36248620 PMCID: PMC9563712 DOI: 10.3389/frai.2022.955399] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2022] [Accepted: 08/16/2022] [Indexed: 11/13/2022] Open
Abstract
Colorectal cancer (CRC) has seen a dramatic increase in incidence globally. In 2019, colorectal cancer accounted for 1.15 million deaths and 24.28 million disability-adjusted life-years (DALYs) worldwide. In India, the annual incidence rate (AAR) for colon cancer was 4.4 per 100,000. There has been a steady rise in the prevalence of CRC in India, which may be attributed to urbanization, mass migration of populations, westernization of diet and lifestyle practices, and a rise in obesity and metabolic risk factors that place the population at a higher risk of CRC. Moreover, CRC in India differs from that described in Western countries, with a higher proportion of young patients and more patients presenting at an advanced stage. This may be due to poor access to specialized healthcare and socio-economic factors. Early identification of adenomatous colonic polyps, which are well-recognized pre-cancerous lesions, at the time of screening colonoscopy has been shown to be the most effective measure for CRC prevention. However, colonic polyps are frequently missed during colonoscopy; moreover, screening programs require manpower, time, and resources for processing resected polyps, which may hamper penetration and efficacy in middle- to low-income countries. In the last decade, significant progress has been made in the automatic detection of colonic polyps by multiple AI-based systems. With the advent of better AI methodology, the focus has shifted from mere detection to accurate discrimination and diagnosis of colonic polyps. These systems, once validated, could usher in a new era in CRC prevention programs centered around "leave in-situ" and "resect and discard" strategies. These strategies hinge on the specificity and accuracy of AI-based systems in correctly identifying the pathological diagnosis of a polyp, thereby providing the endoscopist with real-time information to make a clinical decision: either leaving the lesion in-situ (mucosal polyps) or resecting and discarding the polyp (hyperplastic polyps). The major advantage of employing these strategies would be cost optimization of CRC prevention programs while ensuring good clinical outcomes. Adopting these AI-based systems in India's national cancer prevention program, in accordance with the mandate to increase technology integration, could prove cost-effective and enable implementation of CRC prevention programs at the population level. This level of penetration could potentially reduce the incidence of CRC and improve patient survival by enabling early diagnosis and treatment. In this review, we highlight key advancements in the field of AI for the identification of polyps during colonoscopy and explore the role of AI-based systems in cost optimization during universal implementation of CRC prevention programs in the context of middle-income countries like India.
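The "leave in-situ" / "resect and discard" logic described in the abstract can be sketched as a simple triage rule. The class labels, confidence threshold, and fallback to formal histopathology below are illustrative assumptions for exposition, not a validated clinical protocol or the review's own algorithm.

```python
def triage_polyp(predicted_class, confidence, threshold=0.9):
    """Map an AI polyp characterization to a management strategy.

    Below the confidence threshold, defer to standard practice:
    resect and send for histopathology.
    """
    if confidence < threshold:
        return "resect and send for histopathology"
    if predicted_class == "mucosal":        # "leave in-situ" strategy
        return "leave in-situ"
    if predicted_class == "hyperplastic":   # "resect and discard" strategy
        return "resect and discard"
    return "resect and send for histopathology"

print(triage_polyp("mucosal", 0.95))
print(triage_polyp("hyperplastic", 0.95))
print(triage_polyp("hyperplastic", 0.60))  # low confidence -> defer
```

The cost savings discussed in the review come from the first two branches: every confident non-neoplastic call avoids a histopathology workup, which is why the rule's specificity at the chosen threshold is the critical parameter.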
Affiliation(s)
- Harshavardhan B. Rao
- Department of Gastroenterology, M.S. Ramaiah Medical College, Ramaiah University of Applied Sciences, Bangalore, Karnataka, India
- Nandakumar Bidare Sastry
- Department of Gastroenterology, M.S. Ramaiah Medical College, Ramaiah University of Applied Sciences, Bangalore, Karnataka, India
- Rama P. Venu
- Department of Gastroenterology, Amrita Institute of Medical Sciences and Research Centre, Kochi, Kerala, India
- Preetiparna Pattanayak
- Department of Gastroenterology, M.S. Ramaiah Medical College, Ramaiah University of Applied Sciences, Bangalore, Karnataka, India