1
Huang Y, Li S, Rubab SS, Bao J, Hu C, Hong J, Ren X, Liu X, Zhang L, Huang J, Gan H, Zhou X, Cao J, Fang D, Shi Z, Wang H, Mei Q. Artificial intelligence alert system based on intraluminal view for colonoscopy intubation. Sci Rep 2025; 15:14927. [PMID: 40295756] [PMCID: PMC12037750] [DOI: 10.1038/s41598-025-99725-y]
Abstract
Mucosal contact of the colonoscope tip causes red-out views, and excessive pressure may result in perforation. Quantitative methods for analysing red-out views are still lacking. We aimed to develop an artificial intelligence (AI)-based system to assess red-out views during colonoscopy intubation. Altogether, 479 colonoscopies performed by 34 colonoscopists were analysed using the proposed semi-supervised AI-based system. We compared the AI-based red-out avoiding scores among novice, intermediate, and experienced colonoscopists, and the mean scores were also compared among groups stratified by expert-rated direct observation of procedure or skill (DOPS)-based tip control assessment results. Both the percentage of actual red-out views (p < 0.001) and the AI-based red-out avoiding scores (p < 0.001) differed significantly among the novice, intermediate, and experienced groups. Colonoscopists who scored better on the DOPS-based tip control assessment also performed better on the AI-based red-out avoiding skill assessment. The AI-based red-out avoiding score was negatively correlated with actual caecal intubation time and actual red-out percentage. Feedback of the red-out avoiding score may help remind endoscopists to perform colonoscopy in an effective and safe manner, and the system can serve as an auxiliary tool for colonoscopy training.
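For readers who want to see the shape of this analysis, the following is a minimal, hypothetical sketch: it turns per-frame red-out predictions into a red-out percentage and checks the reported negative correlation against intubation time. The abstract does not disclose the scoring formula, so the `100 - percentage` proxy score and all toy data below are assumptions.

```python
# Minimal sketch (assumptions: a per-frame red-out classifier exists; the
# "avoiding score" is proxied as 100 minus the red-out percentage, which
# is NOT the paper's formula). Illustrates the reported negative correlation.
import numpy as np
from scipy.stats import spearmanr

def red_out_percentage(frame_is_redout: np.ndarray) -> float:
    """frame_is_redout: (T,) booleans from a per-frame red-out classifier."""
    return 100.0 * frame_is_redout.mean()

# Toy per-procedure data: red-out percentage and caecal intubation time (s).
rng = np.random.default_rng(0)
redout_pct = rng.uniform(0, 40, 50)
intubation_time = 120 + 6 * redout_pct + rng.normal(0, 30, 50)

avoiding_score = 100 - redout_pct            # hypothetical proxy score
rho, p = spearmanr(avoiding_score, intubation_time)
print(f"rho={rho:.2f}, p={p:.3g}")           # rho < 0: higher score, faster intubation
```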
Affiliation(s)
- Yigeng Huang
  - State Key Laboratory of Transducer Technology, Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
  - University of Science and Technology of China, Hefei, 230026, China
- Suwen Li
  - Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
  - Key Laboratory of Gastroenterology of Anhui Province, Hefei, 230022, China
- Syeda Sadia Rubab
  - State Key Laboratory of Transducer Technology, Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
  - University of Science and Technology of China, Hefei, 230026, China
- Junjun Bao
  - Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
  - Key Laboratory of Gastroenterology of Anhui Province, Hefei, 230022, China
- Cui Hu
  - Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
  - Key Laboratory of Gastroenterology of Anhui Province, Hefei, 230022, China
- Jianglong Hong
  - Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
  - Key Laboratory of Gastroenterology of Anhui Province, Hefei, 230022, China
- Xiaofei Ren
  - Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
  - Key Laboratory of Gastroenterology of Anhui Province, Hefei, 230022, China
- Xiaochang Liu
  - Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
  - Key Laboratory of Gastroenterology of Anhui Province, Hefei, 230022, China
- Lixiang Zhang
  - Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
  - Key Laboratory of Gastroenterology of Anhui Province, Hefei, 230022, China
- Jian Huang
  - Department of Gastroenterology, First People's Hospital of Hefei, Hefei, 230061, China
- Huizhong Gan
  - Department of Gastroenterology, First People's Hospital of Hefei, Hefei, 230061, China
- Xiaolan Zhou
  - Department of Gastroenterology, The Suzhou Affiliated Hospital of Anhui Medical University, Suzhou, 234099, China
- Jie Cao
  - Department of Gastroenterology, The Suzhou Affiliated Hospital of Anhui Medical University, Suzhou, 234099, China
- Dong Fang
  - Department of Gastroenterology, Second People's Hospital of Hefei, Hefei, 230012, China
- Zhenwang Shi
  - Department of Gastroenterology, Second People's Hospital of Hefei, Hefei, 230012, China
- Huanqin Wang
  - State Key Laboratory of Transducer Technology, Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
  - University of Science and Technology of China, Hefei, 230026, China
- Qiao Mei
  - Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, 230022, China
  - Key Laboratory of Gastroenterology of Anhui Province, Hefei, 230022, China
2
Aggarwal S, Gupta I, Kumar A, Kautish S, Almazyad AS, Mohamed AW, Werner F, Shokouhifar M. GastroFuse-Net: an ensemble deep learning framework designed for gastrointestinal abnormality detection in endoscopic images. Math Biosci Eng 2024; 21:6847-6869. [PMID: 39483096] [DOI: 10.3934/mbe.2024300]
Abstract
Convolutional Neural Networks (CNNs) have received substantial attention as a highly effective tool for analyzing medical images, notably endoscopic images, due to their capacity to provide results equivalent to or exceeding those of medical specialists. This capability is particularly crucial for gastrointestinal disorders, where even experienced gastroenterologists find automatic diagnosis from endoscopic images challenging. Currently, gastrointestinal findings are primarily determined by manual inspection by competent gastrointestinal endoscopists; this evaluation procedure is labor-intensive, time-consuming, and frequently yields high variability between laboratories. To address these challenges, we introduced a specialized CNN-based architecture called GastroFuse-Net, designed to recognize human gastrointestinal diseases from endoscopic images. GastroFuse-Net combines features extracted from two CNN models with different numbers of layers, integrating shallow and deep representations to capture diverse aspects of the abnormalities. The Kvasir dataset was used to thoroughly test the proposed deep learning model; it contains images classified by anatomical structure (cecum, z-line, pylorus), disease (ulcerative colitis, esophagitis, polyps), or surgical operation (dyed resection margins, dyed lifted polyps). The model was evaluated using specificity, recall, precision, F1-score, Matthews Correlation Coefficient (MCC), and accuracy, and exhibited exceptional performance: a precision of 0.985, recall of 0.985, specificity of 0.984, F1-score of 0.997, MCC of 0.982, and an accuracy of 98.5%.
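The fusion idea described above, concatenating shallow and deep CNN representations before a classifier, can be sketched as follows. The choice of backbones (MobileNetV2 and ResNet-50), the linear head, and the eight-class output are illustrative assumptions, not the GastroFuse-Net architecture.

```python
# Hedged sketch of shallow/deep feature fusion; backbone choices are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        # Shallow backbone: MobileNetV2 feature extractor (assumption).
        self.shallow = models.mobilenet_v2(weights=None).features
        # Deep backbone: ResNet-50 up to its global pooling layer (assumption).
        resnet = models.resnet50(weights=None)
        self.deep = nn.Sequential(*list(resnet.children())[:-1])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # 1280 (MobileNetV2) + 2048 (ResNet-50) fused feature channels.
        self.head = nn.Linear(1280 + 2048, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.pool(self.shallow(x)).flatten(1)   # shallow representation
        d = self.deep(x).flatten(1)                 # deep representation
        return self.head(torch.cat([s, d], dim=1))  # concatenate, then classify

logits = FusionClassifier()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 8])
```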
Affiliation(s)
- Sonam Aggarwal
  - Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Isha Gupta
  - Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Ashok Kumar
  - Model Institute of Engineering and Technology, Jammu, J&K, India
- Abdulaziz S Almazyad
  - Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Ali Wagdy Mohamed
  - Operations Research Department, Faculty of Graduate Studies for Statistical Research, Cairo University, Giza 12613, Egypt
  - Applied Science Research Center, Applied Science Private University, Amman 11931, Jordan
- Frank Werner
  - Faculty of Mathematics, Otto-von-Guericke University, Magdeburg 39016, Germany
- Mohammad Shokouhifar
  - Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
3
Ayana G, Barki H, Choe SW. Pathological Insights: Enhanced Vision Transformers for the Early Detection of Colorectal Cancer. Cancers (Basel) 2024; 16:1441. [PMID: 38611117] [PMCID: PMC11010958] [DOI: 10.3390/cancers16071441]
Abstract
Endoscopic pathological findings of the gastrointestinal tract are crucial for the early diagnosis of colorectal cancer (CRC). Previous deep learning works aimed at improving CRC detection performance and reducing subjective analysis errors are limited to polyp segmentation: pathological findings were not considered, and only convolutional neural networks (CNNs), which cannot capture global image feature information, were utilized. This work introduces a novel vision transformer (ViT)-based approach for early CRC detection. The core components of the proposed approach are ViTCol, a boosted vision transformer for classifying endoscopic pathological findings, and PUTS, a vision transformer-based model for polyp segmentation. Results demonstrate the superiority of this vision transformer-based CRC detection method over existing CNN and vision transformer models. ViTCol exhibited outstanding performance in classifying pathological findings, with an area under the receiver operating characteristic curve (AUC) of 0.9999 ± 0.001 on the Kvasir dataset. PUTS provided outstanding results in segmenting polyp images, with mean intersection over union (mIoU) of 0.8673 and 0.9092 on the Kvasir-SEG and CVC-Clinic datasets, respectively. This work underscores the value of spatial transformers in localizing input images, which can seamlessly integrate into the main vision transformer network, enhancing the automated identification of critical image features for early CRC detection.
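The segmentation results above are reported as mean intersection over union; a minimal sketch of that metric for binary polyp masks, assuming per-image averaging, follows.

```python
# Minimal sketch: mean intersection-over-union (mIoU) for binary
# segmentation masks, averaged per image (averaging scheme assumed).
import numpy as np

def miou(preds: np.ndarray, targets: np.ndarray, eps: float = 1e-7) -> float:
    """preds, targets: (N, H, W) boolean masks; returns the mean per-image IoU."""
    inter = np.logical_and(preds, targets).sum(axis=(1, 2))
    union = np.logical_or(preds, targets).sum(axis=(1, 2))
    return float(((inter + eps) / (union + eps)).mean())

rng = np.random.default_rng(0)
p = rng.integers(0, 2, (4, 64, 64)).astype(bool)
print(miou(p, p))  # identical masks -> 1.0
```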
Affiliation(s)
- Gelan Ayana
  - Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
  - School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Hika Barki
  - Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Republic of Korea
- Se-woon Choe
  - Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
  - Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
  - Emerging Pathogens Institute, University of Florida, Gainesville, FL 32608, USA
4
Khan Z, Tahir MA. Real time anatomical landmarks and abnormalities detection in gastrointestinal tract. PeerJ Comput Sci 2023; 9:e1685. [PMID: 38192480] [PMCID: PMC10773696] [DOI: 10.7717/peerj-cs.1685]
Abstract
Gastrointestinal (GI) endoscopy is an active research field due to the lethal cancers of the GI tract. Cancer treatment outcomes are better when the disease is diagnosed early, which increases survival chances. There is a high miss rate for abnormalities in the GI tract during endoscopy or colonoscopy, owing to lapses in attentiveness, tiring procedures, or a lack of required training. Detection can be automated to reduce these risks by identifying and flagging suspicious frames, where a suspicious frame may contain an abnormality or an anatomical landmark; such frames can then be analysed for anatomical landmarks and abnormalities to detect disease. In this research, a real-time endoscopic detection system is presented that detects both abnormalities and landmarks. The proposed system is based on a combination of handcrafted and deep features, with deep features extracted from the lightweight MobileNet convolutional neural network (CNN) architecture. Some classes have small inter-class differences and high intra-class differences, and a single detection threshold cannot distinguish them, so class-specific thresholds are learned from the training data using a genetic algorithm. The system was evaluated on various benchmark datasets and achieved an accuracy of 0.99, an F1-score of 0.91, and a Matthews correlation coefficient (MCC) of 0.91 on the Kvasir datasets, and an F1-score of 0.93 on the DowPK dataset. The system detects abnormalities in real time at 41 frames per second.
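A compact illustration of the class-specific threshold idea follows: an evolutionary search (selection, uniform crossover, Gaussian mutation) over per-class thresholds that maximises macro-F1 on validation scores. The operators, population size, and fitness function here are assumptions; the paper's GA settings are not given in the abstract.

```python
# Hedged sketch: learning per-class decision thresholds with a simple GA.
import numpy as np

rng = np.random.default_rng(0)

def macro_f1(thresholds, scores, labels):
    """Macro-F1 of per-class thresholded decisions (scores: (N, C), labels: (N,))."""
    f1s = []
    for c in range(scores.shape[1]):
        pred = scores[:, c] >= thresholds[c]
        true = labels == c
        tp = np.sum(pred & true)
        prec = tp / max(pred.sum(), 1)
        rec = tp / max(true.sum(), 1)
        f1s.append(0.0 if tp == 0 else 2 * prec * rec / (prec + rec))
    return float(np.mean(f1s))

def evolve_thresholds(scores, labels, pop=30, gens=50, mut=0.05):
    n_cls = scores.shape[1]
    population = rng.uniform(0.1, 0.9, (pop, n_cls))
    for _ in range(gens):
        fits = np.array([macro_f1(t, scores, labels) for t in population])
        parents = population[np.argsort(fits)[-(pop // 2):]]       # keep fittest half
        idx = rng.integers(0, len(parents), (pop // 2, 2))         # pick parent pairs
        mask = rng.random((pop // 2, n_cls)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])        # crossover
        children = np.clip(children + rng.normal(0, mut, children.shape), 0, 1)  # mutation
        population = np.vstack([parents, children])
    fits = [macro_f1(t, scores, labels) for t in population]
    return population[int(np.argmax(fits))]

# Toy usage: random "validation" scores for 5 classes.
scores = rng.uniform(size=(200, 5))
labels = rng.integers(0, 5, 200)
print(evolve_thresholds(scores, labels))
```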
Affiliation(s)
- Zeshan Khan
  - FAST School of Computing, National University of Computer and Emerging Sciences, Islamabad, Karachi, Sindh, Pakistan
- Muhammad Atif Tahir
  - FAST School of Computing, National University of Computer and Emerging Sciences, Islamabad, Karachi, Sindh, Pakistan
5
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158] [DOI: 10.1038/s41575-022-00701-y]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs of the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially supervised deep learning, can utilize these data to develop computer-assisted interventions that enable better navigation during procedures, automated image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
6
A deep ensemble learning method for colorectal polyp classification with optimized network parameters. Appl Intell 2022. [DOI: 10.1007/s10489-022-03689-9]
Abstract
Colorectal Cancer (CRC), a leading cause of cancer-related deaths, can be abated by timely polypectomy. Computer-aided classification of polyps helps endoscopists resect in time without submitting the sample for histology. Deep learning-based algorithms are promoted for computer-aided colorectal polyp classification; however, existing methods do not report the hyperparameter settings essential for model optimisation. Furthermore, unlike the polyp types hyperplastic and adenomatous, the third type, serrated adenoma, is difficult to classify due to its hybrid nature. Moreover, automated assessment of polyps is challenging due to the similarities in their patterns; therefore, the strengths of individual weak learners are combined into a weighted ensemble model with optimised hyperparameters for accurate classification. In contrast to existing studies on binary classification, multiclass classification requires evaluation through advanced measures. This study compared six existing Convolutional Neural Networks, in addition to transfer learning, and selected only the best-performing architectures for the ensemble models. The performance on the UCI and PICCOLO datasets in terms of accuracy (96.3%, 81.2%), precision (95.5%, 82.4%), recall (97.2%, 81.1%), F1-score (96.3%, 81.3%) and model reliability using Cohen's Kappa Coefficient (0.94, 0.62) shows the superiority of the proposed method over existing models. Experiments by other studies on the same dataset yielded 82.5% accuracy with 72.7% recall using SVM, and 85.9% accuracy with 87.6% recall using other deep learning methods. The proposed method demonstrates that a weighted ensemble of optimised networks along with data augmentation significantly boosts the performance of deep learning-based CAD.
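The weighted-ensemble step can be sketched as soft voting over per-model class probabilities; the validation-accuracy weighting below is an assumed scheme, not necessarily the paper's.

```python
# Hedged sketch: weighted soft-voting ensemble over per-model softmax outputs.
import numpy as np

def weighted_ensemble(probs: list[np.ndarray], weights: np.ndarray) -> np.ndarray:
    """probs: list of (N, C) softmax outputs, one per model; returns (N,) labels."""
    weights = weights / weights.sum()                 # normalise the weights
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(axis=1)

rng = np.random.default_rng(1)
probs = [rng.dirichlet(np.ones(3), size=10) for _ in range(3)]  # 3 models, 3 classes
val_acc = np.array([0.94, 0.91, 0.89])                # assumed validation accuracies
print(weighted_ensemble(probs, val_acc))
```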
7
Yen HH, Wu PY, Wu TL, Huang SP, Chen YY, Chen MF, Lin WC, Tsai CL, Lin KP. Forrest Classification for Bleeding Peptic Ulcer: A New Look at the Old Endoscopic Classification. Diagnostics (Basel) 2022; 12:1066. [PMID: 35626222] [PMCID: PMC9139956] [DOI: 10.3390/diagnostics12051066]
Abstract
The management of peptic ulcer bleeding is clinically challenging. For decades, the Forrest classification has been used for risk stratification in nonvariceal ulcer bleeding, yet its perception and interpretation vary among endoscopists, and the relationship between bleeder and ulcer images and the different Forrest stages has not been studied. Endoscopic still images of 276 patients with peptic ulcer bleeding over the past 3 years were retrieved and reviewed, and intra-rater and inter-rater agreement were compared. The obtained endoscopic images were manually annotated to delineate the extent of the ulcer and bleeding area, and the regions of interest were compared across the Forrest stages. The 276 images were first classified by two experienced tutor endoscopists and then reviewed by six other endoscopists. Good intra-rater correlation was observed (0.92–0.98), as was good inter-rater correlation across different levels of experience (0.639–0.859); the correlation was higher among tutor and junior endoscopists than among experienced endoscopists. Low-risk Forrest IIC and III lesions show distinct patterns compared with high-risk Forrest I, IIA, or IIB lesions. We found good agreement on the Forrest classification among different endoscopists in a single institution. This is the first study to quantitatively analyse the obtained endoscopic images and explain the distinct patterns of bleeding ulcers.
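As a worked illustration of inter-rater analysis on categorical Forrest grades, a standard agreement statistic such as Cohen's kappa can be computed as below; the abstract reports correlations, so this particular statistic and the toy grades are assumptions.

```python
# Hedged sketch: agreement between two raters' Forrest grades via Cohen's
# kappa (one standard statistic; not necessarily the one used in the paper).
from sklearn.metrics import cohen_kappa_score

grades = ["Ia", "Ib", "IIa", "IIb", "IIc", "III"]
rater_a = ["Ia", "IIb", "III", "IIc", "Ib", "III", "IIa", "IIb"]
rater_b = ["Ia", "IIb", "III", "IIc", "Ib", "IIc", "IIa", "IIa"]
print(cohen_kappa_score(rater_a, rater_b, labels=grades))
```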
Affiliation(s)
- Hsu-Heng Yen
  - Department of Internal Medicine, Division of Gastroenterology, Changhua Christian Hospital, Changhua 500209, Taiwan
  - General Education Center, Chienkuo Technology University, Changhua 500020, Taiwan
  - Department of Electrical Engineering, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung 400, Taiwan
- Ping-Yu Wu
  - Department of Electrical Engineering, Chung Yuan Christian University, Taoyuan 320314, Taiwan
- Tung-Lung Wu
  - Department of Internal Medicine, Division of Gastroenterology, Changhua Christian Hospital, Changhua 500209, Taiwan
- Siou-Ping Huang
  - Department of Internal Medicine, Division of Gastroenterology, Changhua Christian Hospital, Changhua 500209, Taiwan
- Yang-Yuan Chen
  - Department of Internal Medicine, Division of Gastroenterology, Changhua Christian Hospital, Changhua 500209, Taiwan
- Mei-Fen Chen
  - Department of Electrical Engineering, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Technology Translation Center for Medical Device, Chung Yuan Christian University, Taoyuan 320314, Taiwan
- Wen-Chen Lin
  - Technology Translation Center for Medical Device, Chung Yuan Christian University, Taoyuan 320314, Taiwan
- Cheng-Lun Tsai
  - Technology Translation Center for Medical Device, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Department of Biomedical Engineering, Chung Yuan Christian University, Taoyuan 320314, Taiwan
- Kang-Ping Lin (corresponding author)
  - Department of Electrical Engineering, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Technology Translation Center for Medical Device, Chung Yuan Christian University, Taoyuan 320314, Taiwan
8
Alam MJ, Rashid RB, Fattah SA, Saquib M. RAt-CapsNet: A Deep Learning Network Utilizing Attention and Regional Information for Abnormality Detection in Wireless Capsule Endoscopy. IEEE J Transl Eng Health Med 2022; 10:3300108. [PMID: 36032311] [PMCID: PMC9401095] [DOI: 10.1109/jtehm.2022.3198819]
Affiliation(s)
- Md. Jahin Alam
  - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
- Rifat Bin Rashid
  - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
- Shaikh Anowarul Fattah
  - Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
- Mohammad Saquib
  - Department of Electrical Engineering, The University of Texas at Dallas, Richardson, TX, USA