1
Mc Entee PD, Boland PA, Cahill RA. AUGUR-AIM: Clinical validation of an artificial intelligence indocyanine green fluorescence angiography expert representer. Colorectal Dis 2025;27:e70097. PMID: 40230324; PMCID: PMC11997639; DOI: 10.1111/codi.70097.
Abstract
AIM Recent randomized controlled trials and meta-analyses have demonstrated a reduction in the anastomotic leak rate when indocyanine green fluorescence angiography (ICGFA) is used in colorectal resections versus when it is not. We have previously demonstrated that an artificial intelligence (AI) model, AUGUR-AI, can digitally represent in real time where experienced ICGFA users would place their surgical stapler based on their interpretation of the fluorescence imagery. The aim of this study, called AUGUR-AIM, is to validate this method across multiple clinical sites with regard to generalizability, usability and accuracy, while generating new algorithms for testing and determining the optimal mode of deployment for the software device. METHOD This is a prospective, observational, multicentre European study of patients undergoing resectional colorectal surgery with ICGFA as part of their standard clinical care, enrolled over a 1-year period. Video recordings of the ICGFA imagery will be computationally analysed both in real time and post hoc by AUGUR-AI, with the operating surgeon blinded to the results, and the developed algorithms will be tested iteratively against the actual surgeon's ICGFA interpretation. AI-based interpretation of the fluorescence signal will be compared with the actual transection site selected by the operating surgeon, and usability will be optimized. CONCLUSION AUGUR-AIM will validate the use of AUGUR-AI to interpret ICGFA imagery in real time to the level of an expert ICGFA user, building on our previous work to include a larger, more diverse patient and surgeon population. This could allow future progression to develop the AI model into a usable clinical tool providing decision support, including for new/infrequent ICGFA users, and documentary support of the decisions made by experienced users.
Affiliation(s)
- Philip D. Mc Entee: UCD Centre for Precision Surgery, UCD, Dublin, Ireland; Department of Surgery, Mater Misericordiae University Hospital, Dublin, Ireland
- Patrick A. Boland: UCD Centre for Precision Surgery, UCD, Dublin, Ireland; Department of Surgery, Mater Misericordiae University Hospital, Dublin, Ireland
- Ronan A. Cahill: UCD Centre for Precision Surgery, UCD, Dublin, Ireland; Department of Surgery, Mater Misericordiae University Hospital, Dublin, Ireland
2
Old O, Jankowski J, Attwood S, Stokes C, Kendall C, Rasdell C, Zimmermann A, Massa MS, Love S, Sanders S, Deidda M, Briggs A, Hapeshi J, Foy C, Moayyedi P, Barr H. Barrett's Oesophagus Surveillance Versus Endoscopy at Need Study (BOSS): A Randomized Controlled Trial. Gastroenterology 2025:S0016-5085(25)00587-6. PMID: 40180292; DOI: 10.1053/j.gastro.2025.03.021.
Abstract
BACKGROUND & AIMS Barrett's esophagus (BE) is a precursor lesion for esophageal adenocarcinoma (EAC). Surveillance endoscopy aims to detect early malignant progression; although widely practiced, it has not previously been tested in a randomized trial. METHODS BOSS (Barrett's Oesophagus Surveillance Versus Endoscopy at Need Study) was a randomized controlled trial at 109 centers in the United Kingdom. Patients with BE were randomized to 2-yearly surveillance endoscopy or "at-need" endoscopy, offered for symptoms only. Follow-up was a minimum of 10 years. The primary outcome was overall survival in the intention-to-treat population. Secondary outcomes included cancer-specific survival, time to diagnosis of EAC, stage of EAC at diagnosis, frequency of endoscopy, and serious adverse events related to interventions. RESULTS There were 3453 patients recruited; 1733 patients were randomized to surveillance and 1719 to at-need endoscopy. Median follow-up time was 12.8 years for the primary outcome. There was no evidence of a difference in overall survival between the surveillance arm (333 deaths among 1733 patients) and the at-need arm (356 deaths among 1719 patients; hazard ratio, 0.95; 95% CI, 0.82-1.10; stratified log-rank P = .503). There was no evidence of a difference for surveillance vs at-need endoscopy in cancer-specific survival (108 vs 106 deaths from any cancer; hazard ratio, 1.01; 95% CI, 0.77-1.33; P = .926), time to diagnosis of EAC (40 vs 31 patients had a diagnosis of EAC; hazard ratio, 1.32; 95% CI, 0.82-2.11; P = .254), or cancer stage at diagnosis. Eight surveillance patients (0.46%) and 7 at-need patients (0.41%) reported serious adverse events. CONCLUSIONS Surveillance did not improve overall survival or cancer-specific survival. At-need endoscopy may be a safe alternative for low-risk patients. (ClinicalTrials.gov number: NCT00987857).
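As a quick arithmetic cross-check of the survival figures in this abstract (a sketch only: the published hazard ratio of 0.95 comes from a stratified time-to-event analysis, which crude death proportions cannot reproduce, so agreement is only approximate):

```python
# Crude death proportions per arm, taken from the abstract's counts.
deaths_surv, n_surv = 333, 1733      # surveillance arm
deaths_need, n_need = 356, 1719      # at-need arm

risk_surv = deaths_surv / n_surv     # ~0.192
risk_need = deaths_need / n_need     # ~0.207
risk_ratio = risk_surv / risk_need   # ~0.93, same direction as the reported HR of 0.95

print(f"crude risks: {risk_surv:.3f} vs {risk_need:.3f}, ratio {risk_ratio:.2f}")
```

The crude ratio falling inside the reported 95% CI (0.82-1.10) is consistent with the trial's null result.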
Affiliation(s)
- Oliver Old: Gloucestershire Hospitals NHS Foundation Trust, Gloucester, United Kingdom
- Janusz Jankowski: Institute of Clinical Trials & Methodology, University College London, London, United Kingdom
- Stephen Attwood: Department of Health Services Research, Durham University, Durham, United Kingdom
- Clive Stokes: Gloucestershire Hospitals NHS Foundation Trust, Gloucester, United Kingdom
- Catherine Kendall: Gloucestershire Hospitals NHS Foundation Trust, Gloucester, United Kingdom
- Cathryn Rasdell: Gloucestershire Hospitals NHS Foundation Trust, Gloucester, United Kingdom
- Alex Zimmermann: Centre for Statistics in Medicine, University of Oxford, Botnar Research Centre, Oxford, United Kingdom
- M Sofia Massa: Centre for Statistics in Medicine, University of Oxford, Botnar Research Centre, Oxford, United Kingdom
- Sharon Love: Centre for Statistics in Medicine, University of Oxford, Botnar Research Centre, Oxford, United Kingdom
- Scott Sanders: University Hospitals Coventry and Warwickshire, University of Warwick, Warwickshire, United Kingdom
- Manuela Deidda: Health Economics and Health Technology Assessment, Institute of Health & Wellbeing, University of Glasgow, Glasgow, United Kingdom
- Andrew Briggs: London School of Hygiene and Tropical Medicine, London, United Kingdom
- Julie Hapeshi: Gloucestershire Hospitals NHS Foundation Trust, Gloucester, United Kingdom
- Chris Foy: Gloucestershire Hospitals NHS Foundation Trust, Gloucester, United Kingdom
- Paul Moayyedi: Division of Gastroenterology, McMaster University Medical Centre, Hamilton, Ontario, Canada
- Hugh Barr: Gloucestershire Hospitals NHS Foundation Trust, Gloucester, United Kingdom
3
Bereuter JP, Geissler ME, Klimova A, Steiner RP, Pfeiffer K, Kolbinger FR, Wiest IC, Muti HS, Kather JN. Benchmarking Vision Capabilities of Large Language Models in Surgical Examination Questions. J Surg Educ 2025;82:103442. PMID: 39923296; DOI: 10.1016/j.jsurg.2025.103442.
Abstract
OBJECTIVE Recent studies investigated the potential of large language models (LLMs) for clinical decision making and for answering exam questions based on text input. Recent developments have extended these models with vision capabilities; such image-processing LLMs are called vision-language models (VLMs). However, the applicability of VLMs and their capability to answer exam questions with image content have received limited investigation. Therefore, the aim of this study was to examine the performance of publicly accessible LLMs on 2 different surgical question sets consisting of text and image questions. DESIGN Original text and image exam questions from 2 different surgical question subsets from the German Medical Licensing Examination (GMLE) and the United States Medical Licensing Examination (USMLE) were collected and answered by publicly available LLMs (GPT-4, Claude-3 Sonnet, Gemini-1.5). LLM outputs were benchmarked for their accuracy in answering text and image questions. Additionally, the LLMs' performance was compared to students' performance based on their average historical performance (AHP) in these exams. Moreover, variations in LLM performance were analyzed in relation to question difficulty and image type. RESULTS Overall, all LLMs achieved scores equivalent to passing grades (≥60%) on surgical text questions across both datasets. On image-based questions, only GPT-4 exceeded the score required to pass, significantly outperforming Claude-3 and Gemini-1.5 (GPT-4: 78% vs. Claude-3: 58% vs. Gemini-1.5: 57.3%; p < 0.001). Additionally, GPT-4 outperformed students on both text (GPT-4: 83.7% vs. AHP students: 67.8%; p < 0.001) and image questions (GPT-4: 78% vs. AHP students: 67.4%; p < 0.001). CONCLUSION GPT-4 demonstrated substantial capabilities in answering surgical text and image exam questions and therefore holds considerable potential for use in surgical decision making and in the education of students and trainee surgeons.
Affiliation(s)
- Jean-Paul Bereuter: Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Mark Enrik Geissler: Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Anna Klimova: Institute for Medical Informatics and Biometry, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Robert-Patrick Steiner: Institute of Pharmacology and Toxicology, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Kevin Pfeiffer: Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Fiona R Kolbinger: Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana
- Isabella C Wiest: Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Department of Medicine II, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hannah Sophie Muti: Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany
- Jakob Nikolas Kather: Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany; Department of Medicine I, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
4
Li B, Du YY, Tan WM, He DL, Qi ZP, Yu HH, Shi Q, Ren Z, Cai MY, Yan B, Cai SL, Zhong YS. Effect of computer aided detection system on esophageal neoplasm diagnosis in varied levels of endoscopists. NPJ Digit Med 2025;8:160. PMID: 40082585; PMCID: PMC11906877; DOI: 10.1038/s41746-025-01532-2.
Abstract
A computer-aided detection (CAD) system for early esophageal carcinoma identification during endoscopy with narrow-band imaging (NBI) was evaluated in a large-scale, prospective, tandem, randomized controlled trial to assess its effectiveness. The study was registered at the Chinese Clinical Trial Registry (ChiCTR2100050654, 2021/09/01). A total of 3400 patients were randomly assigned to either routine (routine-first) or CAD-assisted (CAD-first) NBI endoscopy, followed by the other procedure, with targeted biopsies taken at the end of the second examination. The primary outcome was the diagnosis of one or more neoplastic lesions of the esophagus during the first examination. The CAD-first group demonstrated a significantly higher neoplastic lesion detection rate (3.12%) than the routine-first group (1.59%), with a relative detection ratio of 1.96 (P = 0.0047). Subgroup analysis revealed a higher detection rate for junior endoscopists using CAD first, while no significant difference was observed for senior endoscopists. The CAD system significantly improved esophageal neoplasm detection, particularly benefiting junior endoscopists.
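The relative detection ratio of 1.96 follows directly from the two detection rates. A minimal cross-check, assuming a 1:1 split of the 3400 patients (the abstract does not give exact per-arm counts, so the implied lesion counts below are estimates):

```python
# Detection rates per the abstract; per-arm size is an assumption (3400 / 2).
n_per_arm = 1700
rate_cad_first = 0.0312       # CAD-first neoplastic lesion detection rate
rate_routine_first = 0.0159   # routine-first neoplastic lesion detection rate

relative_detection_ratio = rate_cad_first / rate_routine_first
print(f"relative detection ratio: {relative_detection_ratio:.2f}")  # 1.96

# Implied lesion-positive patients under the assumed equal split:
print(round(n_per_arm * rate_cad_first), round(n_per_arm * rate_routine_first))  # 53 27
```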
Affiliation(s)
- Bing Li: Endoscopy Center, Zhongshan Hospital of Fudan University, Shanghai, China
- Yan-Yun Du: Endoscopy Center, Zhongshan Hospital of Fudan University, Shanghai, China
- Wei-Min Tan: School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Dong-Li He: Endoscopy Center, Xuhui Hospital, Zhongshan Hospital of Fudan University, Shanghai, China
- Zhi-Peng Qi: Endoscopy Center, Zhongshan Hospital of Fudan University, Shanghai, China
- Hon-Ho Yu: Department of Gastroenterology, Kiang Wu Hospital, Macau SAR, China
- Qiang Shi: Endoscopy Center, Zhongshan Hospital of Fudan University, Shanghai, China
- Zhong Ren: Endoscopy Center, Zhongshan Hospital of Fudan University, Shanghai, China
- Ming-Yan Cai: Endoscopy Center, Zhongshan Hospital of Fudan University, Shanghai, China
- Bo Yan: School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China
- Shi-Lun Cai: Endoscopy Center, Zhongshan Hospital of Fudan University, Shanghai, China; Endoscopy Center, Xuhui Hospital, Zhongshan Hospital of Fudan University, Shanghai, China
- Yun-Shi Zhong: Endoscopy Center, Zhongshan Hospital of Fudan University, Shanghai, China; Endoscopy Center, Xuhui Hospital, Zhongshan Hospital of Fudan University, Shanghai, China; Endoscopy Center, Shanghai Geriatric Medical Center, Shanghai, China
5
Kafetzis I, Sodmann P, Herghelegiu B, Brand M, Zoller WG, Seyfried F, Fuchs K, Meining A, Hann A. Prospective Evaluation of Real-Time Artificial Intelligence for the Hill Classification of the Gastroesophageal Junction. United European Gastroenterol J 2025;13:240-246. PMID: 39668544; PMCID: PMC11975621; DOI: 10.1002/ueg2.12721.
Abstract
BACKGROUND Assessment of the gastroesophageal junction (GEJ) is an integral part of gastroscopy; however, the absence of standardized reporting hinders consistent examination documentation. The Hill classification offers a standardized approach for evaluating the GEJ. This study aimed to compare the accuracy of an artificial intelligence (AI) system with that of physicians in classifying the GEJ according to Hill in a prospective, blinded superiority trial. METHODS Consecutive patients scheduled for gastroscopy with an intact GEJ were recruited during clinical routine from October 2023 to December 2023. Nine physicians (six experienced, three inexperienced) assessed the Hill grade, and the AI system operated in the background in real time. The gold standard was determined by a majority vote of independent assessments by three expert endoscopists who did not participate in the study. The primary outcome was accuracy. Secondary outcomes were a per-Hill-grade analysis and separate comparisons for experienced and inexperienced endoscopists. RESULTS In 131 analysed examinations, the AI's accuracy of 84.7% (95% CI: 78.6-90.8) was significantly higher than the physicians' 62.5% (95% CI: 54.2-71.0) (p < 0.01). The AI outperformed physicians in all but one case in the per-Hill-grade analysis. The AI was significantly more accurate than inexperienced physicians (85% vs. 56%, p < 0.01) and showed a trend towards higher accuracy than experienced physicians (84% vs. 69.6%, p = 0.07). CONCLUSIONS The AI was significantly more accurate than the examiners in assessing the Hill classification. This superior model performance could benefit endoscopists, especially those with limited experience. TRIAL REGISTRATION ClinicalTrials.gov identifier: NCT06040723.
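The reported confidence interval can be approximately reconstructed from the accuracy and sample size. The sketch below assumes 111/131 correct assessments (84.7% of 131) and a normal-approximation (Wald) interval; the authors' exact count and interval method are not stated, so small discrepancies are expected:

```python
import math

# Assumed correct count: 84.7% of 131 examinations rounds to 111.
correct, n = 111, 131
p = correct / n                                  # ~0.847
se = math.sqrt(p * (1 - p) / n)                  # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se            # 95% Wald interval
print(f"{p:.1%} (95% CI: {lo:.1%}-{hi:.1%})")    # close to the reported 78.6-90.8
```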
Affiliation(s)
- Ioannis Kafetzis: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine 2, University Hospital Würzburg, Würzburg, Germany
- Philipp Sodmann: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine 2, University Hospital Würzburg, Würzburg, Germany
- Bianca-Elena Herghelegiu: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine 2, University Hospital Würzburg, Würzburg, Germany
- Markus Brand: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine 2, University Hospital Würzburg, Würzburg, Germany
- Wolfram G. Zoller: Department of Internal Medicine and Gastroenterology, Katharinenhospital, Stuttgart, Germany
- Florian Seyfried: Department of General, Visceral, Transplantation, Vascular, and Pediatric Surgery, Center of Operative Medicine (ZOM), University Hospital Würzburg, Würzburg, Germany
- Karl-Hermann Fuchs: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine 2, University Hospital Würzburg, Würzburg, Germany
- Alexander Meining: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine 2, University Hospital Würzburg, Würzburg, Germany
- Alexander Hann: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine 2, University Hospital Würzburg, Würzburg, Germany
6
Zhang J, Liu R, Hao D, Tian G, Zhang S, Zhang S, Zang Y, Pang K, Hu X, Ren K, Cui M, Liu S, Wu J, Wang Q, Feng B, Tong W, Yang Y, Wang G, Lu Y. ResNet-Vision Transformer based MRI-endoscopy fusion model for predicting treatment response to neoadjuvant chemoradiotherapy in locally advanced rectal cancer: A multicenter study. Chin Med J (Engl) 2024. DOI: 10.1097/cm9.0000000000003391.
Abstract
Background:
Neoadjuvant chemoradiotherapy followed by radical surgery has been a common practice for patients with locally advanced rectal cancer, but the response rate varies among patients. This study aimed to develop a ResNet-Vision Transformer based magnetic resonance imaging (MRI)-endoscopy fusion model to precisely predict treatment response and provide personalized treatment.
Methods:
In this multicenter study, 366 eligible patients who had undergone neoadjuvant chemoradiotherapy followed by radical surgery at eight Chinese tertiary hospitals between January 2017 and June 2024 were recruited, with 2928 pretreatment colonic endoscopic images and 366 pelvic MRI images. An MRI-endoscopy fusion model was constructed based on the ResNet backbone and Transformer network using pretreatment MRI and endoscopic images. Treatment response was defined as good response or non-good response based on the tumor regression grade. The Delong test and the Hanley–McNeil test were utilized to compare prediction performance among different models and different subgroups, respectively. The predictive performance of the MRI-endoscopy fusion model was comprehensively validated in the test sets and was further compared to that of the single-modal MRI model and single-modal endoscopy model.
Results:
The MRI-endoscopy fusion model demonstrated favorable prediction performance. In the internal validation set, the area under the curve (AUC) and accuracy were 0.852 (95% confidence interval [CI]: 0.744–0.940) and 0.737 (95% CI: 0.712–0.844), respectively. Moreover, the AUC and accuracy reached 0.769 (95% CI: 0.678–0.861) and 0.729 (95% CI: 0.628–0.821), respectively, in the external test set. In addition, the MRI-endoscopy fusion model outperformed the single-modal MRI model (AUC: 0.692 [95% CI: 0.609–0.783], accuracy: 0.659 [95% CI: 0.565–0.775]) and the single-modal endoscopy model (AUC: 0.720 [95% CI: 0.617–0.823], accuracy: 0.713 [95% CI: 0.612–0.809]) in the external test set.
Conclusion:
The MRI-endoscopy fusion model based on ResNet-Vision Transformer achieved favorable performance in predicting treatment response to neoadjuvant chemoradiotherapy and holds tremendous potential for enabling personalized treatment regimens for locally advanced rectal cancer patients.
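The AUC values compared here (via the Delong test) are Mann-Whitney statistics: the probability that a randomly chosen good responder receives a higher model score than a randomly chosen non-good responder. A minimal rank-based illustration on made-up scores (not study data):

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of AUC, with 0.5 credit for tied scores."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for illustration only.
good_response = [0.9, 0.8, 0.7, 0.6]   # scores for good responders
non_good = [0.5, 0.4, 0.7, 0.2]        # scores for non-good responders
print(auc(good_response, non_good))    # 0.90625
```

An AUC of 0.852, as reported for the fusion model's internal validation, means this pairwise-ranking probability is about 85%.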
Affiliation(s)
- Junhao Zhang: Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China
- Ruiqing Liu: Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China
- Di Hao: School of Control Science and Engineering, Shandong University, Jinan, Shandong 250061, China
- Guangye Tian: School of Control Science and Engineering, Shandong University, Jinan, Shandong 250061, China
- Shiwei Zhang: Department of Gastric and Colorectal Surgery, General Surgery Center, The First Hospital of Jilin University, Changchun, Jilin 130021, China
- Sen Zhang: Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Yitong Zang: Department of General Surgery, Colorectal Division, Army Medical Center, Army Medical University, Chongqing 400038, China
- Kai Pang: Department of General Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Xuhua Hu: The Second Department of General Surgery, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei 050001, China
- Keyu Ren: Department of Gastroenterology, The Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China
- Mingjuan Cui: Department of Gastroenterology, The Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China
- Shuhao Liu: Department of Colorectal and Anal Surgery, The First Affiliated Hospital of Shandong Second Medical University, Weifang, Shandong 261000, China
- Jinhui Wu: Department of Gastrointestinal Surgery Ward II, Yantai Yuhuangding Hospital, Yantai, Shandong 264009, China
- Quan Wang: Department of Gastric and Colorectal Surgery, General Surgery Center, The First Hospital of Jilin University, Changchun, Jilin 130021, China
- Bo Feng: Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Weidong Tong: Department of General Surgery, Colorectal Division, Army Medical Center, Army Medical University, Chongqing 400038, China
- Yingchi Yang: Department of General Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Guiying Wang: The Second Department of General Surgery, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei 050001, China; Department of General Surgery, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei 050000, China
- Yun Lu: Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China
7
Wang YK, Karmakar R, Mukundan A, Men TC, Tsao YM, Lu SC, Wu IC, Wang HC. Computer-aided endoscopic diagnostic system modified with hyperspectral imaging for the classification of esophageal neoplasms. Front Oncol 2024;14:1423405. PMID: 39687890; PMCID: PMC11646837; DOI: 10.3389/fonc.2024.1423405.
Abstract
INTRODUCTION The early detection of esophageal cancer is crucial to enhancing patient survival rates, and endoscopy remains the gold standard for identifying esophageal neoplasms. Despite this fact, accurately diagnosing superficial esophageal neoplasms poses a challenge, even for seasoned endoscopists. Recent advancements in computer-aided diagnostic systems, empowered by artificial intelligence (AI), have shown promising results in elevating the diagnostic precision for early-stage esophageal cancer. METHODS In this study, we expanded upon traditional red-green-blue (RGB) imaging by integrating the YOLO neural network algorithm with hyperspectral imaging (HSI) to evaluate the diagnostic efficacy of this innovative AI system for superficial esophageal neoplasms. A total of 1836 endoscopic images were utilized for model training, which included 858 white-light imaging (WLI) and 978 narrow-band imaging (NBI) samples. These images were categorized into three groups, namely, normal esophagus, esophageal squamous dysplasia, and esophageal squamous cell carcinoma (SCC). RESULTS An additional set comprising 257 WLI and 267 NBI images served as the validation dataset to assess diagnostic accuracy. Within the RGB dataset, the diagnostic accuracies of the WLI and NBI systems for classifying images into normal, dysplasia, and SCC categories were 0.83 and 0.82, respectively. Conversely, the HSI dataset yielded higher diagnostic accuracies for the WLI and NBI systems, with scores of 0.90 and 0.89, respectively. CONCLUSION The HSI dataset outperformed the RGB dataset, demonstrating an overall diagnostic accuracy improvement of 8%. Our findings underscored the advantageous impact of incorporating the HSI dataset in model training. Furthermore, the application of HSI in AI-driven image recognition algorithms significantly enhanced the diagnostic accuracy for early esophageal cancer.
Affiliation(s)
- Yao-Kuang Wang: Graduate Institute of Clinical Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan; Division of Gastroenterology, Department of Internal Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Medicine, Faculty of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Riya Karmakar: Department of Mechanical Engineering, National Chung Cheng University, Chiayi, Taiwan
- Arvind Mukundan: Department of Mechanical Engineering, National Chung Cheng University, Chiayi, Taiwan
- Ting-Chun Men: Department of Mechanical Engineering, National Chung Cheng University, Chiayi, Taiwan
- Yu-Ming Tsao: Department of Mechanical Engineering, National Chung Cheng University, Chiayi, Taiwan
- Song-Cun Lu: Department of Mechanical Engineering, National Chung Cheng University, Chiayi, Taiwan
- I-Chen Wu: Division of Gastroenterology, Department of Internal Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Medicine, Faculty of Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Hsiang-Chen Wang: Department of Mechanical Engineering, National Chung Cheng University, Chiayi, Taiwan; Department of Medical Research, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi, Taiwan; Technology Development, Hitspectra Intelligent Technology Co., Ltd., Kaohsiung, Taiwan
8
Huang L, Xu M, Li Y, Dong Z, Lin J, Wang W, Wu L, Yu H. Gastric neoplasm detection of computer-aided detection-assisted esophagogastroduodenoscopy changes with implement scenarios: a real-world study. J Gastroenterol Hepatol 2024;39:2787-2795. PMID: 39469909; DOI: 10.1111/jgh.16784.
Abstract
BACKGROUND AND AIM In prospective trials, computer-aided detection (CAD) devices for esophagogastroduodenoscopy (EGD) have autonomously identified gastric precancerous lesions and neoplasms and reduced the miss rate of gastric neoplasms. However, there is still insufficient evidence of their use in real-life clinical practice. METHODS A real-world, two-center study was conducted at Wenzhou Central Hospital (WCH) and Renmin Hospital of Wuhan University (RHWU). The two centers followed high and low biopsy rate strategies, respectively, and CAD devices were introduced in 2019 at WCH and in 2021 at RHWU. We compared gastric precancerous lesion and neoplasm detection on EGD before and after the introduction of CAD devices in the first half of each year. RESULTS A total of 33 885 patients were included and 32 886 patients were ultimately analyzed. At WCH, where the biopsy rate was >95%, the proportion of early gastric cancers among all gastric neoplasms (EGC/GN) increased with the implementation of CAD (0.35% vs 0.59%, P = 0.028, OR [95% CI] = 1.65 [1.0-2.60]), while the gastric neoplasm detection rate remained stable (1.39% vs 1.36%, P = 0.897, OR [95% CI] = 0.98 [0.76-1.26]). At RHWU, where the biopsy rate was <20%, the gastric neoplasm detection rate nearly doubled after the implementation of CAD (1.78% vs 3.23%, P < 0.001, OR [95% CI] = 1.84 [1.33-2.54]), while there was no significant change in EGC/GN. CONCLUSION The application of CAD devices led to distinct increases in gastric neoplasm detection depending on the biopsy strategy, implying that CAD devices assist gastric neoplasm detection but with effectiveness that varies across implementation scenarios.
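The RHWU odds ratio can be cross-checked directly from the two detection rates quoted in the abstract (exact patient counts per period are not given, so no confidence interval is attempted here):

```python
# Gastric neoplasm detection rates at RHWU before and after CAD introduction.
p_before, p_after = 0.0178, 0.0323

def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

odds_ratio = odds(p_after) / odds(p_before)
print(f"OR: {odds_ratio:.2f}")  # 1.84, matching the reported point estimate
```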
Collapse
Affiliation(s)
- Li Huang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Renmin Hospital of Wuhan University, Wuhan, China
- Ming Xu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Renmin Hospital of Wuhan University, Wuhan, China
- Yanxia Li
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Renmin Hospital of Wuhan University, Wuhan, China
- Zehua Dong
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Renmin Hospital of Wuhan University, Wuhan, China
- Jiejun Lin
- Department of Gastroenterology, Wenzhou Sixth People's Hospital, Wenzhou Central Hospital Medical Group, Wenzhou, China
- Wen Wang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Renmin Hospital of Wuhan University, Wuhan, China
- Lianlian Wu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Renmin Hospital of Wuhan University, Wuhan, China
- Honggang Yu
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Engineering Research Center for Artificial Intelligence Endoscopy Interventional Treatment of Hubei Province, Renmin Hospital of Wuhan University, Wuhan, China
9
Li W, Shao M, Hu S, Xie S, He B. The diagnostic value of endoscopic ultrasound for esophageal subepithelial lesions: A review. Medicine (Baltimore) 2024; 103:e40419. [PMID: 39560558 PMCID: PMC11576025 DOI: 10.1097/md.0000000000040419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/06/2024] [Accepted: 10/18/2024] [Indexed: 11/20/2024] Open
Abstract
Esophageal subepithelial lesions (ESELs) encompass a variety of diseases, including leiomyoma, granular cell tumors, hemangioma, lipoma, stromal tumors, leiomyosarcoma, schwannoma, neuroendocrine tumors, and more. These lesions often present asymptomatically, leading to a generally low clinical diagnosis rate. Common imaging techniques for diagnosing ESELs include conventional endoscopy, spiral computed tomography, and endoscopic ultrasound (EUS). Among these, EUS is currently regarded as one of the most accurate methods for diagnosing ESELs. In recent years, EUS has increasingly been combined with advanced technologies such as artificial intelligence, submucosal saline injection, high-frequency impedance measurement, and enhanced imaging to improve diagnostic accuracy and reduce missed diagnoses. This article reviews the application and recent advancements of EUS in diagnosing ESELs.
Affiliation(s)
- Wanwen Li
- Department of Thoracic Surgery, Sichuan Provincial People’s Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Mengqi Shao
- Department of Thoracic Surgery, The Second Xiangya Hospital of Central South University, Changsha, China
- Shichen Hu
- Department of Thoracic Surgery, Sichuan Provincial People’s Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Shenglong Xie
- Department of Thoracic Surgery, Sichuan Provincial People’s Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Bin He
- Department of Thoracic Surgery, Sichuan Provincial People’s Hospital, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
10
Tao Y, Luo Y, Hu H, Wang W, Zhao Y, Wang S, Zheng Q, Zhang T, Zhang G, Li J, Ni M. Clinically applicable optimized periprosthetic joint infection diagnosis via AI based pathology. NPJ Digit Med 2024; 7:303. [PMID: 39462052 PMCID: PMC11513062 DOI: 10.1038/s41746-024-01301-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/19/2024] [Accepted: 10/16/2024] [Indexed: 10/28/2024] Open
Abstract
Periprosthetic joint infection (PJI) is a severe complication after joint replacement surgery that demands precise diagnosis for effective treatment. We enhanced PJI diagnostic accuracy through three steps: (1) developing a self-supervised PJI model with DINO v2 to create a large dataset; (2) comparing multiple intelligent models to identify the best one; and (3) using the optimal model for visual analysis to refine diagnostic practices. The self-supervised model generated 27,724 training samples and achieved a perfect AUC of 1, indicating flawless case differentiation. EfficientNet v2-S outperformed CAMEL2 at the image level, while CAMEL2 was superior at the patient level. By using the weakly supervised PJI model to adjust diagnostic criteria, we reduced the required high-power field diagnoses per slide from five to three. These findings demonstrate AI's potential to improve the accuracy and standardization of PJI pathology and have significant implications for infectious disease diagnostics.
Affiliation(s)
- Ye Tao
- Orthopedics Department, Fourth Medical Center, Chinese PLA General Hospital, Beijing, China
- Yazhi Luo
- Department of computation, information and technology, Technical University of Munich, Munich, Germany
- Hanwen Hu
- Orthopedics Department, Fourth Medical Center, Chinese PLA General Hospital, Beijing, China
- Wei Wang
- Thorough Lab, Thorough Future, Beijing, China
- Ying Zhao
- Thorough Lab, Thorough Future, Beijing, China
- Shuhao Wang
- Thorough Lab, Thorough Future, Beijing, China
- Qingyuan Zheng
- Orthopedics Department, Fourth Medical Center, Chinese PLA General Hospital, Beijing, China
- Tianwei Zhang
- Orthopedics Department, Fourth Medical Center, Chinese PLA General Hospital, Beijing, China
- Guoqiang Zhang
- Orthopedics Department, Fourth Medical Center, Chinese PLA General Hospital, Beijing, China
- Jie Li
- Department of Pathology, First Medical Center, Chinese PLA General Hospital, Beijing, China
- Ming Ni
- Orthopedics Department, Fourth Medical Center, Chinese PLA General Hospital, Beijing, China
11
Theocharopoulos C, Davakis S, Ziogas DC, Theocharopoulos A, Foteinou D, Mylonakis A, Katsaros I, Gogas H, Charalabopoulos A. Deep Learning for Image Analysis in the Diagnosis and Management of Esophageal Cancer. Cancers (Basel) 2024; 16:3285. [PMID: 39409906 PMCID: PMC11475041 DOI: 10.3390/cancers16193285] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 08/25/2024] [Revised: 09/21/2024] [Accepted: 09/24/2024] [Indexed: 10/20/2024] Open
Abstract
Esophageal cancer has a dismal prognosis and necessitates a multimodal and multidisciplinary approach from diagnosis to treatment. High-definition white-light endoscopy and histopathological confirmation remain the gold standard for the definitive diagnosis of premalignant and malignant lesions. Artificial intelligence using deep learning (DL) methods for image analysis constitutes a promising adjunct for the clinical endoscopist that could effectively decrease Barrett's esophagus (BE) overdiagnosis and unnecessary surveillance, while also assisting in the timely detection of dysplastic BE and esophageal cancer. A plethora of studies published during the last five years have consistently reported highly accurate DL algorithms with performance comparable or superior to that of endoscopists. Recent efforts aim to expand DL utilization into further aspects of esophageal neoplasia management, including histologic diagnosis, segmentation of gross tumor volume, pretreatment prediction and post-treatment evaluation of patient response to systemic therapy, and operative guidance during minimally invasive esophagectomy. Our manuscript serves as an introduction to the growing literature of DL applications for image analysis in the management of esophageal neoplasia, concisely presenting all currently published studies. We also aim to guide the clinician through basic functional principles, evaluation metrics and limitations of DL for image recognition to facilitate the comprehension and critical evaluation of the presented studies.
Affiliation(s)
- Spyridon Davakis
- First Department of Surgery, School of Medicine, Laiko General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece; (S.D.); (A.M.); (I.K.); (A.C.)
- Dimitrios C. Ziogas
- First Department of Medicine, School of Medicine, Laiko General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece; (D.C.Z.); (D.F.); (H.G.)
- Achilleas Theocharopoulos
- Department of Electrical and Computer Engineering, National Technical University of Athens, 10682 Athens, Greece
- Dimitra Foteinou
- First Department of Medicine, School of Medicine, Laiko General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece; (D.C.Z.); (D.F.); (H.G.)
- Adam Mylonakis
- First Department of Surgery, School of Medicine, Laiko General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece; (S.D.); (A.M.); (I.K.); (A.C.)
- Ioannis Katsaros
- First Department of Surgery, School of Medicine, Laiko General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece; (S.D.); (A.M.); (I.K.); (A.C.)
- Helen Gogas
- First Department of Medicine, School of Medicine, Laiko General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece; (D.C.Z.); (D.F.); (H.G.)
- Alexandros Charalabopoulos
- First Department of Surgery, School of Medicine, Laiko General Hospital, National and Kapodistrian University of Athens, 11527 Athens, Greece; (S.D.); (A.M.); (I.K.); (A.C.)
12
Mubarak M, Rashid R, Sapna F, Shakeel S. Expanding role and scope of artificial intelligence in the field of gastrointestinal pathology. Artif Intell Gastroenterol 2024; 5:91550. [DOI: 10.35712/aig.v5.i2.91550] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/30/2024] [Revised: 07/06/2024] [Accepted: 07/29/2024] [Indexed: 08/08/2024] Open
Abstract
Digital pathology (DP) and its subsidiaries, including artificial intelligence (AI), are rapidly making inroads into diagnostic anatomic pathology (AP), including gastrointestinal (GI) pathology, and are poised to revolutionize the field. Historically, AP has been slow to adopt digital technology, but this is changing rapidly, with many centers worldwide transitioning to DP. Coupled with advanced AI techniques such as deep learning and machine learning, DP is likely to transform histopathology from a subjective field into an objective, efficient, and transparent discipline. In GI pathology specifically, AI enhances diagnostic accuracy, streamlines workflows, provides predictive insights, integrates multimodal data, supports research, and aids in education and training, ultimately improving patient care and outcomes. This review summarizes the latest developments in the role and scope of AI in AP, with a focus on GI pathology. The main aim is to provide updates and create awareness among the pathology community.
Affiliation(s)
- Muhammed Mubarak
- Department of Histopathology, Sindh Institute of Urology and Transplantation, Karachi 74200, Sindh, Pakistan
- Rahma Rashid
- Department of Histopathology, Sindh Institute of Urology and Transplantation, Karachi 74200, Sindh, Pakistan
- Fnu Sapna
- Department of Pathology, Montefiore Medical Center, The University Hospital for Albert Einstein School of Medicine, Bronx, NY 10461, United States
- Shaheera Shakeel
- Department of Histopathology, Sindh Institute of Urology and Transplantation, Karachi 74200, Sindh, Pakistan
13
Iwai T, Kida M, Okuwaki K, Watanabe M, Adachi K, Ishizaki J, Hanaoka T, Tamaki A, Tadehara M, Imaizumi H, Kusano C. Deep learning analysis for differential diagnosis and risk classification of gastrointestinal tumors. Scand J Gastroenterol 2024; 59:925-932. [PMID: 38950889 DOI: 10.1080/00365521.2024.2368241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/19/2024] [Revised: 05/24/2024] [Accepted: 06/08/2024] [Indexed: 07/03/2024]
Abstract
OBJECTIVES Recently, artificial intelligence (AI) has been applied to clinical diagnosis. Although AI has already been developed for gastrointestinal (GI) tract endoscopy, few studies have applied AI to endoscopic ultrasound (EUS) images. In this study, we used a computer-assisted diagnosis (CAD) system with deep learning analysis of EUS images (EUS-CAD) and assessed its ability to differentiate GI stromal tumors (GISTs) from other mesenchymal tumors, as well as its risk classification performance. MATERIALS AND METHODS A total of 101 pathologically confirmed cases of subepithelial lesions (SELs) arising from the muscularis propria layer, including 69 GISTs, 17 leiomyomas and 15 schwannomas, were examined. A total of 3,283 EUS images were used for training and five-fold cross-validation, and 827 images were independently tested for diagnosing GISTs. For the risk classification of the 69 GISTs, including very-low-, low-, intermediate- and high-risk GISTs, 2,784 EUS images were used for training and three-fold cross-validation. RESULTS For the differential diagnosis of GIST among all SELs, the accuracy, sensitivity, specificity and area under the receiver operating characteristic (ROC) curve were 80.4%, 82.9%, 75.3% and 0.865, respectively, whereas those for intermediate- and high-risk GISTs were 71.8%, 70.2%, 72.0% and 0.771, respectively. CONCLUSIONS The EUS-CAD system showed a good diagnostic yield in differentiating GISTs from other mesenchymal tumors and successfully demonstrated the feasibility of GIST risk classification. This system can determine whether treatment is necessary based on EUS imaging alone, without the need for additional invasive examinations.
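The accuracy, sensitivity and specificity figures quoted above follow from a standard confusion-matrix calculation. A minimal sketch is below; the counts are made up for illustration and are not the EUS-CAD data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction correct
    return sensitivity, specificity, accuracy

# Made-up example: 80 of 100 GISTs and 90 of 100 non-GISTs called correctly.
print(diagnostic_metrics(tp=80, fp=10, tn=90, fn=20))  # → (0.8, 0.9, 0.85)
```

Note that accuracy alone can mask class imbalance (69 GISTs vs 32 non-GISTs here), which is why the abstract reports sensitivity and specificity separately alongside the ROC AUC.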
Affiliation(s)
- Tomohisa Iwai
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Mitsuhiro Kida
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Kosuke Okuwaki
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Masafumi Watanabe
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Kai Adachi
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Junro Ishizaki
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Taro Hanaoka
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Akihiro Tamaki
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Masayoshi Tadehara
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Hiroshi Imaizumi
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
- Chika Kusano
- Department of Gastroenterology, Kitasato University School of Medicine, Sagamihara, Japan
14
Tao X, Zhu Y, Dong Z, Huang L, Shang R, Du H, Wang J, Zeng X, Wang W, Wang J, Li Y, Deng Y, Wu L, Yu H. An artificial intelligence system for chronic atrophic gastritis diagnosis and risk stratification under white light endoscopy. Dig Liver Dis 2024; 56:1319-1326. [PMID: 38246825 DOI: 10.1016/j.dld.2024.01.177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/12/2023] [Revised: 11/06/2023] [Accepted: 01/05/2024] [Indexed: 01/23/2024]
Abstract
BACKGROUND AND AIMS The diagnosis and stratification of gastric atrophy (GA) predict patients' gastric cancer progression risk and determine the endoscopy surveillance interval. We aimed to construct an artificial intelligence (AI) system for GA endoscopic identification and risk stratification based on the Kimura-Takemoto classification. METHODS We constructed the system using two trained models and verified its performance. First, we retrospectively collected 869 images and 119 videos to compare its performance with that of endoscopists in identifying GA. Then, we included original image cases of 102 patients to validate the system for stratifying GA and to compare it with endoscopists of different experience levels. RESULTS The sensitivity of model 1 was higher than that of endoscopists (92.72% vs. 76.85%) at the image level and also higher than that of experts (94.87% vs. 85.90%) at the video level. The system outperformed experts in stratifying GA (overall accuracy: 81.37% vs. 73.04%, p = 0.045). The accuracy of this system in classifying non-GA, mild GA, moderate GA, and severe GA was 80.00%, 77.42%, 83.33%, and 85.71%, respectively, comparable to that of experts and better than that of seniors and novices. CONCLUSIONS We established an expert-level system for GA endoscopic identification and risk stratification. It has great potential for endoscopic assessment and surveillance determinations.
Affiliation(s)
- Xiao Tao
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Yijie Zhu
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China; Department of Gastroenterology, Yunnan Digestive Endoscopy Clinical Medical Center, The First People's Hospital of Yunnan Province, Kunming, 650032, PR China
- Zehua Dong
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Li Huang
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Renduo Shang
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Hongliu Du
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Junxiao Wang
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Xiaoquan Zeng
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Wen Wang
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Jiamin Wang
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Yanxia Li
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Yunchao Deng
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Lianlian Wu
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
- Honggang Yu
- Renmin Hospital of Wuhan University, Wuhan, PR China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, PR China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, PR China
15
Matsubayashi CO, Cheng S, Hulchafo I, Zhang Y, Tada T, Buxbaum JL, Ochiai K. Artificial intelligence for gastric cancer in endoscopy: From diagnostic reasoning to market. Dig Liver Dis 2024; 56:1156-1163. [PMID: 38763796 DOI: 10.1016/j.dld.2024.04.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/19/2023] [Revised: 04/15/2024] [Accepted: 04/16/2024] [Indexed: 05/21/2024]
Abstract
Recognition of gastric conditions during endoscopy exams, including gastric cancer, usually requires specialized training and a long learning curve. Moreover, interobserver variability is frequently high owing to the varied morphological characteristics of the lesions and grades of mucosal inflammation. In this context, artificial intelligence tools based on deep learning models have been developed to support physicians in detecting, classifying, and predicting gastric lesions more efficiently. Although a growing number of studies exist in the literature, multiple challenges remain in bringing a model into practice in this field, such as the need for more robust validation studies and regulatory hurdles. Therefore, the aim of this review is to provide a comprehensive assessment of the current use of artificial intelligence applied to endoscopic imaging to evaluate gastric precancerous and cancerous lesions, and of the barriers to widespread implementation of this technology in clinical routine.
Affiliation(s)
- Carolina Ogawa Matsubayashi
- Endoscopy Unit, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, University of São Paulo, São Paulo, Brazil; AI Medical Service Inc., Tokyo, Japan
- Shuyan Cheng
- Department of Population Health Science, Weill Cornell Medical College, New York, NY 10065, USA
- Ismael Hulchafo
- Columbia University School of Nursing, New York, NY 10032, USA
- Yifan Zhang
- Department of Population Health Science, Weill Cornell Medical College, New York, NY 10065, USA
- Tomohiro Tada
- AI Medical Service Inc., Tokyo, Japan; Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
- James L Buxbaum
- Division of Gastrointestinal and Liver Diseases, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Kentaro Ochiai
- Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan; Department of Colon and Rectal Surgery, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
16
Chang YH, Shin CM, Lee HD, Park J, Jeon J, Cho SJ, Kang SJ, Chung JY, Jun YK, Choi Y, Yoon H, Park YS, Kim N, Lee DH. Real-World Application of Artificial Intelligence for Detecting Pathologic Gastric Atypia and Neoplastic Lesions. J Gastric Cancer 2024; 24:327-340. [PMID: 38960891 PMCID: PMC11224715 DOI: 10.5230/jgc.2024.24.e28] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/23/2024] [Revised: 06/11/2024] [Accepted: 06/17/2024] [Indexed: 07/05/2024] Open
Abstract
PURPOSE Results of initial endoscopic biopsy of gastric lesions often differ from those of the final pathological diagnosis. We evaluated whether an artificial intelligence-based gastric lesion detection and diagnostic system, ENdoscopy as AI-powered Device Computer Aided Diagnosis for Gastroscopy (ENAD CAD-G), could reduce this discrepancy. MATERIALS AND METHODS We retrospectively collected 24,948 endoscopic images of early gastric cancers (EGCs), dysplasia, and benign lesions from 9,892 patients who underwent esophagogastroduodenoscopy between 2011 and 2021. The diagnostic performance of ENAD CAD-G was evaluated using the following real-world datasets: patients referred from community clinics with initial biopsy results of atypia (n=154), participants who underwent endoscopic resection for neoplasms (Internal video set, n=140), and participants who underwent endoscopy for screening or suspicion of gastric neoplasm referred from community clinics (External video set, n=296). RESULTS ENAD CAD-G classified the referred gastric lesions of atypia into EGC (accuracy, 82.47%; 95% confidence interval [CI], 76.46%-88.47%), dysplasia (88.31%; 83.24%-93.39%), and benign lesions (83.12%; 77.20%-89.03%). In the Internal video set, ENAD CAD-G identified dysplasia and EGC with diagnostic accuracies of 88.57% (95% CI, 83.30%-93.84%) and 91.43% (86.79%-96.07%), respectively, compared with an accuracy of 60.71% (52.62%-68.80%) for the initial biopsy results (P<0.001). In the External video set, ENAD CAD-G classified EGC, dysplasia, and benign lesions with diagnostic accuracies of 87.50% (83.73%-91.27%), 90.54% (87.21%-93.87%), and 88.85% (85.27%-92.44%), respectively. CONCLUSIONS ENAD CAD-G is superior to initial biopsy for the detection and diagnosis of gastric lesions that require endoscopic resection. ENAD CAD-G can assist community endoscopists in identifying gastric lesions that require endoscopic resection.
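The accuracies above are quoted with 95% confidence intervals consistent with a normal-approximation (Wald) interval for a proportion: for instance, 127 correct calls out of the 154 atypia referrals reproduces the quoted 82.47% (76.46%-88.47%). A minimal sketch follows; the 127 count is inferred for illustration and is not stated in the paper:

```python
import math

def accuracy_ci(k, n, z=1.96):
    """Point estimate and Wald 95% CI for k successes out of n trials."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)  # normal-approximation standard error
    return p, p - z * se, p + z * se

p, lo, hi = accuracy_ci(127, 154)
print(round(100 * p, 2), round(100 * lo, 2), round(100 * hi, 2))  # 82.47 76.46 88.47
```

The Wald interval is the simplest choice and works well at these sample sizes; for small n or proportions near 0 or 1, a Wilson or exact interval would be preferable.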
Affiliation(s)
- Young Hoon Chang
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Cheol Min Shin
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Hae Dong Lee
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Soo-Jeong Cho
- Department of Internal Medicine and Liver Research Institute, Seoul National University College of Medicine, Seoul, Korea
- Seung Joo Kang
- Department of Internal Medicine and Healthcare Research Institute, Healthcare System Gangnam Center, Seoul National University Hospital, Seoul, Korea
- Jae-Yong Chung
- Department of Clinical Pharmacology and Therapeutics, Seoul National University Bundang Hospital, Seongnam, Korea
- Yu Kyung Jun
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Yonghoon Choi
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Hyuk Yoon
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Young Soo Park
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Nayoung Kim
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
- Dong Ho Lee
- Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam, Korea
17
Xu C, Liu X, Bao B, Liu C, Li R, Yang T, Wu Y, Zhang Y, Tang J. Two-Stage Deep Learning Model for Diagnosis of Lumbar Spondylolisthesis Based on Lateral X-Ray Images. World Neurosurg 2024; 186:e652-e661. [PMID: 38608811 DOI: 10.1016/j.wneu.2024.04.025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/03/2024] [Accepted: 04/04/2024] [Indexed: 04/14/2024]
Abstract
BACKGROUND Diagnosing early lumbar spondylolisthesis is challenging for many doctors because of the lack of obvious symptoms. Using deep learning (DL) models to improve the accuracy of X-ray diagnoses can effectively reduce missed diagnoses and misdiagnoses in clinical practice. This study aimed to use a two-stage deep learning model, the Res-SE-Net model with the YOLOv8 algorithm, to facilitate efficient and reliable diagnosis of early lumbar spondylolisthesis based on lateral X-ray image identification. METHODS A total of 2424 lumbar lateral radiographs of patients treated at Beijing Tongren Hospital between January 2021 and September 2023 were obtained. The data were labeled and mutually verified by 3 orthopedic surgeons, shuffled into random order, and divided into a training set, validation set, and test set in a 7:2:1 ratio. We trained 2 models for automatic detection of spondylolisthesis: the YOLOv8 model was used to detect the position of lumbar spondylolisthesis, and the Res-SE-Net classification method was designed to classify the clipped area and determine whether it showed lumbar spondylolisthesis. Model performance was evaluated using the test set and an external dataset from Beijing Haidian Hospital. Finally, we compared the model's validation results with professional clinicians' evaluations. RESULTS The model achieved promising results, with a high diagnostic accuracy of 92.3%, precision of 93.5%, and recall of 93.1% for spondylolisthesis detection on the test set; the area under the curve (AUC) was 0.934. CONCLUSIONS Our two-stage deep learning model provides doctors with a reference basis for better diagnosis and treatment of early lumbar spondylolisthesis.
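The detect-then-classify flow described in this abstract can be illustrated with a minimal sketch. The two stub functions below are hypothetical stand-ins, not the authors' trained YOLOv8 detector or Res-SE-Net classifier; only the two-stage wiring is the point:

```python
import numpy as np

def detect_lumbar_region(image: np.ndarray) -> tuple[int, int, int, int]:
    """Stage 1 stub: return an (x1, y1, x2, y2) box for the region of
    interest. A real system would run an object detector here."""
    h, w = image.shape[:2]
    return (w // 4, h // 4, 3 * w // 4, 3 * h // 4)

def classify_crop(crop: np.ndarray) -> float:
    """Stage 2 stub: return P(spondylolisthesis) for the clipped area.
    A real system would run a CNN classifier here."""
    return float(crop.mean() > 0.5)

def two_stage_predict(image: np.ndarray, threshold: float = 0.5) -> bool:
    x1, y1, x2, y2 = detect_lumbar_region(image)  # stage 1: localize
    crop = image[y1:y2, x1:x2]                    # clip the detected area
    return classify_crop(crop) >= threshold       # stage 2: classify

xray = np.ones((512, 512), dtype=np.float32)      # dummy lateral radiograph
print(two_stage_predict(xray))
```

The motivation for the split is that the classifier only ever sees the clipped region, so it is not distracted by anatomy irrelevant to the slip assessment.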
Affiliations
- Chunyang Xu, Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xingyu Liu, School of Life Sciences, Tsinghua University, Beijing, China; Institute of Biomedical and Health Engineering (iBHE), Tsinghua Shenzhen International Graduate School, Shenzhen, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Longwood Valley Medical Technology Co Ltd, Beijing, China
- Beixi Bao, Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chang Liu, Department of Minimally Invasive Spine Surgery, Beijing Haidian Hospital, Peking University, China
- Runchao Li, Longwood Valley Medical Technology Co Ltd, Beijing, China
- Tianci Yang, Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yukan Wu, Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yiling Zhang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Longwood Valley Medical Technology Co Ltd, Beijing, China
- Jiaguang Tang, Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
18. Hamada T, Yasaka K, Nakai Y, Fukuda R, Hakuta R, Ishigaki K, Kanai S, Noguchi K, Oyama H, Saito T, Sato T, Suzuki T, Takahara N, Isayama H, Abe O, Fujishiro M. Computed tomography-based prediction of pancreatitis following biliary metal stent placement with the convolutional neural network. Endosc Int Open 2024; 12:E772-E780. [PMID: 38904060] [PMCID: PMC11188753] [DOI: 10.1055/a-2298-0147]
Abstract
Background and study aims Pancreatitis is a potentially lethal adverse event of endoscopic transpapillary placement of a self-expandable metal stent (SEMS) for malignant biliary obstruction (MBO). Deep learning-based image recognition has not been investigated in predicting pancreatitis in this setting. Patients and methods We included 70 patients who underwent endoscopic placement of a SEMS for nonresectable distal MBO. We constructed a convolutional neural network (CNN) model for pancreatitis prediction using a series of pre-procedure computed tomography images covering the whole pancreas (≥ 120,960 augmented images in total). We examined the additional effects of the CNN-based probabilities on the following machine learning models based on clinical parameters: logistic regression, support vector machine with a linear or RBF kernel, random forest classifier, and gradient boosting classifier. Model performance was assessed based on the area under the curve (AUC) in the receiver operating characteristic analysis, positive predictive value (PPV), accuracy, and specificity. Results The CNN model was associated with moderate levels of performance metrics: AUC, 0.67; PPV, 0.45; accuracy, 0.66; and specificity, 0.63. When added to the machine learning models, the CNN-based probabilities increased the performance metrics. The logistic regression model with the CNN-based probabilities had an AUC of 0.74, PPV of 0.85, accuracy of 0.83, and specificity of 0.96, compared with 0.72, 0.78, 0.77, and 0.96, respectively, without the probabilities. Conclusions The CNN-based model may increase predictability for pancreatitis following endoscopic placement of a biliary SEMS. Our findings support the potential of deep learning technology to improve prognostic models in pancreatobiliary therapeutic endoscopy.
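The ensembling step this abstract describes, in which a CNN-derived probability is appended to clinical features before fitting a conventional classifier, can be sketched as follows. The data are synthetic and the feature construction is hypothetical; this is not the authors' model, only the "image probability as an extra tabular feature" pattern:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins: three clinical covariates and an outcome loosely
# driven by the first covariate (all hypothetical, not the study's data).
clinical = rng.normal(size=(n, 3))
y = (clinical[:, 0] + rng.normal(scale=1.5, size=n) > 0).astype(int)

# Hypothetical CNN output: a per-patient probability correlated with the
# outcome, playing the role of the image-derived pancreatitis probability.
cnn_prob = 1.0 / (1.0 + np.exp(-((2 * y - 1) + rng.normal(size=n))))

base = LogisticRegression().fit(clinical, y)                       # clinical only
augmented_X = np.column_stack([clinical, cnn_prob])                # add CNN probability
augmented = LogisticRegression().fit(augmented_X, y)

auc_base = roc_auc_score(y, base.predict_proba(clinical)[:, 1])
auc_aug = roc_auc_score(y, augmented.predict_proba(augmented_X)[:, 1])
print(f"clinical only: {auc_base:.3f}; clinical + CNN probability: {auc_aug:.3f}")
```

With an informative image-derived probability, the augmented model's discrimination improves over the clinical-only baseline, mirroring the AUC gain the abstract reports (0.72 to 0.74).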
Affiliations
- Tsuyoshi Hamada, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Department of Hepato-Biliary-Pancreatic Medicine, Cancer Institute Hospital, Japanese Foundation for Cancer Research, Tokyo, Japan
- Koichiro Yasaka, Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yousuke Nakai, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Department of Endoscopy and Endoscopic Surgery, The University of Tokyo Hospital, Tokyo, Japan
- Rintaro Fukuda, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryunosuke Hakuta, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Kazunaga Ishigaki, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Sachiko Kanai, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Kensaku Noguchi, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Hiroki Oyama, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Tomotaka Saito, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Tatsuya Sato, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Tatsunori Suzuki, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Naminatsu Takahara, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Hiroyuki Isayama, Department of Gastroenterology, Graduate School of Medicine, Juntendo University, Tokyo, Japan
- Osamu Abe, Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Mitsuhiro Fujishiro, Department of Gastroenterology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
19. Varghese C, Harrison EM, O'Grady G, Topol EJ. Artificial intelligence in surgery. Nat Med 2024; 30:1257-1268. [PMID: 38740998] [DOI: 10.1038/s41591-024-02970-3]
Abstract
Artificial intelligence (AI) is rapidly emerging in healthcare, yet applications in surgery remain relatively nascent. Here we review the integration of AI in the field of surgery, centering our discussion on multifaceted improvements in surgical care in the preoperative, intraoperative and postoperative space. The emergence of foundation model architectures, wearable technologies and improving surgical data infrastructures is enabling rapid advances in AI interventions and utility. We discuss how maturing AI methods hold the potential to improve patient outcomes, facilitate surgical education and optimize surgical care. We review the current applications of deep learning approaches and outline a vision for future advances through multimodal foundation models.
Affiliations
- Chris Varghese, Department of Surgery, University of Auckland, Auckland, New Zealand
- Ewen M Harrison, Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, UK
- Greg O'Grady, Department of Surgery, University of Auckland, Auckland, New Zealand; Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Eric J Topol, Scripps Research Translational Institute, La Jolla, CA, USA
20. Nagula S, Parasa S, Laine L, Shah SC. AGA Clinical Practice Update on High-Quality Upper Endoscopy: Expert Review. Clin Gastroenterol Hepatol 2024; 22:933-943. [PMID: 38385942] [DOI: 10.1016/j.cgh.2023.10.034]
Abstract
DESCRIPTION The purpose of this Clinical Practice Update (CPU) Expert Review is to provide clinicians with guidance on best practices for performing a high-quality upper endoscopic exam. METHODS The best practice advice statements presented herein were developed from a combination of available evidence from published literature, guidelines, and consensus-based expert opinion. No formal rating of the strength or quality of the evidence was carried out, which aligns with standard processes for American Gastroenterological Association (AGA) Institute CPUs. These statements are meant to provide practical, timely advice to clinicians practicing in the United States. This Expert Review was commissioned and approved by the American Gastroenterological Association (AGA) Institute Clinical Practice Updates (CPU) Committee and the AGA Governing Board to provide timely guidance on a topic of high clinical importance to the AGA membership, and underwent internal peer review by the CPU Committee and external peer review through standard procedures of Clinical Gastroenterology & Hepatology. BEST PRACTICE ADVICE 1: Endoscopists should ensure that upper endoscopy is being performed for an appropriate indication and that informed consent clearly explaining the risks, benefits, alternatives, sedation plan, and potential diagnostic and therapeutic interventions is obtained. These elements should be documented by the endoscopist before the procedure. BEST PRACTICE ADVICE 2: Endoscopists should ensure that adequate visualization of the upper gastrointestinal mucosa, using mucosal cleansing and insufflation as necessary, is achieved and documented. BEST PRACTICE ADVICE 3: A high-definition white-light endoscopy system should be used for upper endoscopy instead of a standard-definition white-light endoscopy system whenever possible. The endoscope used for the procedure should be documented in the procedure note. 
BEST PRACTICE ADVICE 4: Image enhancement technologies should be used during the upper endoscopic examination to improve the diagnostic yield for preneoplasia and neoplasia. Suspicious areas should be clearly described, photodocumented, and biopsied separately. BEST PRACTICE ADVICE 5: Endoscopists should spend sufficient time carefully inspecting the foregut mucosa in an anterograde and retroflexed view to improve the detection and characterization of abnormalities. BEST PRACTICE ADVICE 6: Endoscopists should document any abnormalities noted on upper endoscopy using established classifications and standard terminology whenever possible. BEST PRACTICE ADVICE 7: Endoscopists should perform biopsies for the evaluation and management of foregut conditions using standardized biopsy protocols. BEST PRACTICE ADVICE 8: Endoscopists should provide patients with management recommendations based on the specific endoscopic findings (eg, peptic ulcer disease, erosive esophagitis), and this should be documented in the medical record. If recommendations are contingent upon histopathology results (eg, H pylori infection, Barrett's esophagus), then endoscopists should document that appropriate guidance will be provided after results are available. BEST PRACTICE ADVICE 9: Endoscopists should document whether subsequent surveillance endoscopy is indicated and, if so, provide appropriate surveillance intervals. If the determination of surveillance is contingent on histopathology results, then endoscopists should document that surveillance intervals will be suggested after results are available.
Affiliations
- Satish Nagula, Dr. Henry D. Janowitz Division of Gastroenterology, Icahn School of Medicine at Mount Sinai, New York, New York
- Loren Laine, Section of Digestive Diseases, Yale School of Medicine, New Haven, Connecticut; Veterans Affairs Connecticut Healthcare System, West Haven, Connecticut
- Shailja C Shah, Gastroenterology Section, Jennifer Moreno Department of Veterans Affairs Medical Center, San Diego, California; Division of Gastroenterology, University of California, San Diego, San Diego, California
21. Lin J, Zhu S, Yin M, Xue H, Liu L, Liu X, Liu L, Xu C, Zhu J. Few-shot learning for the classification of intestinal tuberculosis and Crohn's disease on endoscopic images: A novel learn-to-learn framework. Heliyon 2024; 10:e26559. [PMID: 38404881] [PMCID: PMC10884919] [DOI: 10.1016/j.heliyon.2024.e26559]
Abstract
Background and aim Standard deep learning methods have been found inadequate in distinguishing between intestinal tuberculosis (ITB) and Crohn's disease (CD), a shortcoming largely attributed to the scarcity of available samples. In light of this limitation, our objective is to develop an innovative few-shot learning (FSL) system, specifically tailored for the efficient categorization and differential diagnosis of CD and ITB, using endoscopic image data with minimal sample requirements. Methods A total of 122 white-light endoscopic images (99 CD images and 23 ITB images) were collected (one ileum image from each patient). A 2-way, 3-shot FSL model that integrated dual transfer learning and metric learning strategies was devised. The Xception architecture was selected as the foundation and underwent a dual transfer process using oesophagitis images sourced from HyperKvasir. Subsequently, the feature vectors derived from Xception for each query image were converted into predictive scores, calculated from the Euclidean distances to the six reference images in the support sets. Results The FSL model, which leverages dual transfer learning, exhibited enhanced performance metrics (AUC 0.81) compared to a model relying on single transfer learning (AUC 0.56) across three evaluation rounds. Additionally, its performance surpassed that of a less experienced endoscopist (AUC 0.56) and even a more seasoned specialist (AUC 0.61). Conclusions The FSL model we have developed demonstrates efficacy in distinguishing between CD and ITB using a limited dataset of endoscopic imagery. FSL holds value for enhancing the diagnostic capabilities of rare conditions.
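The distance-based scoring described in this abstract, where a query embedding is compared against each class's support-set embeddings by Euclidean distance, can be sketched as below. The 4-D vectors stand in for backbone features and are purely illustrative; the abstract does not state how the six distances are aggregated, so this sketch assumes nearest-support (minimum distance):

```python
import numpy as np

def few_shot_predict(query: np.ndarray, support: dict) -> str:
    """Score a query embedding against each class's support embeddings by
    Euclidean distance and return the nearest class. In the study the
    embeddings come from an Xception backbone; here they are toy vectors."""
    distances = {
        label: float(np.min(np.linalg.norm(shots - query, axis=1)))
        for label, shots in support.items()
    }
    return min(distances, key=distances.get)  # smallest distance wins

# Toy 2-way, 3-shot support set with 4-D "embeddings" (illustrative only).
support = {
    "CD":  np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.9, 0.1, 0.0, 0.0],
                     [1.1, 0.0, 0.1, 0.0]]),
    "ITB": np.array([[0.0, 1.0, 0.0, 0.0],
                     [0.1, 0.9, 0.0, 0.1],
                     [0.0, 1.1, 0.0, 0.0]]),
}

query = np.array([0.95, 0.05, 0.0, 0.0])
print(few_shot_predict(query, support))  # nearest support vectors belong to "CD"
```

Because classification reduces to distances against a handful of labeled references, no per-class decision boundary has to be learned from the 122 images themselves, which is what makes the approach viable at this sample size.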
Affiliations
- Jiaxi Lin, Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
- Shiqi Zhu, Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
- Minyue Yin, Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
- Hongchen Xue, School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu, 215006, China
- Lu Liu, Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
- Xiaolin Liu, Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
- Lihe Liu, Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
- Chunfang Xu, Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
- Jinzhou Zhu, Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, 215006, China; Suzhou Clinical Centre of Digestive Diseases, Suzhou, Jiangsu, 215006, China
22. Ahn JC, Shah VH. Artificial intelligence in gastroenterology and hepatology. In: Artificial Intelligence in Clinical Practice. 2024:443-464. [DOI: 10.1016/b978-0-443-15688-5.00016-4]
23. Dao HV, Nguyen BP, Nguyen TT, Lam HN, Nguyen TTH, Dang TT, Hoang LB, Le HQ, Dao LV. Application of artificial intelligence in gastrointestinal endoscopy in Vietnam: a narrative review. Ther Adv Gastrointest Endosc 2024; 17:26317745241306562. [PMID: 39734422] [PMCID: PMC11672465] [DOI: 10.1177/26317745241306562]
Abstract
The utilization of artificial intelligence (AI) in gastrointestinal (GI) endoscopy has witnessed significant progress and promising results in recent years worldwide. From 2019 to 2023, the European Society of Gastrointestinal Endoscopy released multiple guidelines and consensus statements with recommendations on integrating AI for detecting and classifying lesions in practical endoscopy. In Vietnam, since 2019, several preliminary studies have been conducted to develop AI algorithms for GI endoscopy, focusing on lesion detection. These studies have yielded high accuracy, ranging from 86% to 92%. For upper GI endoscopy, ongoing research directions comprise image quality assessment, detection of anatomical landmarks, simulation of image-enhanced endoscopy, and semi-automated tools supporting the delineation of GI lesions on endoscopic images. For lower GI endoscopy, most studies focus on developing AI algorithms for the detection of colorectal polyps and their classification by risk of malignancy. In conclusion, the application of AI in this field represents a promising research direction, presenting both challenges and opportunities for real-world implementation within the Vietnamese healthcare context.
Affiliations
- Hang Viet Dao, Research and Education Department, Institute of Gastroenterology and Hepatology, 09 Dao Duy Anh Street, Dong Da District, Hanoi City, Vietnam; Department of Internal Medicine, Hanoi Medical University, Hanoi, Vietnam; Endoscopy Center, Hanoi Medical University Hospital, Hanoi, Vietnam
- Hoa Ngoc Lam, Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
- Thao Thi Dang, Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
- Long Bao Hoang, Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
- Hung Quang Le, Endoscopy Center, Hanoi Medical University Hospital, Hanoi, Vietnam
- Long Van Dao, Department of Internal Medicine, Hanoi Medical University, Hanoi, Vietnam; Endoscopy Center, Hanoi Medical University Hospital, Hanoi, Vietnam; Institute of Gastroenterology and Hepatology, Hanoi, Vietnam
24. Zhang L, Yao L, Lu Z, Yu H. Current status of quality control in screening esophagogastroduodenoscopy and the emerging role of artificial intelligence. Dig Endosc 2024; 36:5-15. [PMID: 37522555] [DOI: 10.1111/den.14649]
Abstract
Esophagogastroduodenoscopy (EGD) screening is being implemented in countries with a high incidence of upper gastrointestinal (UGI) cancer. High-quality EGD screening ensures the yield of early diagnosis and prevents the suffering caused by advanced UGI cancer, with minimal procedure-related discomfort. However, performance varies dramatically among endoscopists, and quality control for EGD screening remains suboptimal. Guidelines have recommended potential measures for endoscopy quality improvement, and research has been conducted to provide supporting evidence. Moreover, artificial intelligence offers a promising solution for computer-aided diagnosis and quality control during EGD examinations. In this review, we summarize the key points for quality assurance in EGD screening based on current guidelines and evidence. We also outline the latest evidence, limitations, and future prospects of the emerging role of artificial intelligence in EGD quality control, aiming to provide a foundation for improving the quality of EGD screening.
Affiliations
- Lihui Zhang, Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Liwen Yao, Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Zihua Lu, Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Honggang Yu, Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China; Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
25. Xin Y, Zhang Q, Liu X, Li B, Mao T, Li X. Application of artificial intelligence in endoscopic gastrointestinal tumors. Front Oncol 2023; 13:1239788. [PMID: 38144533] [PMCID: PMC10747923] [DOI: 10.3389/fonc.2023.1239788]
Abstract
With an increasing number of patients with gastrointestinal cancer, effective and accurate early diagnostic tools are required to provide better health care for these patients. Recent studies have shown that artificial intelligence (AI) plays an important role in the diagnosis and treatment of patients with gastrointestinal tumors: it not only improves the efficiency of early tumor screening but also significantly improves post-treatment survival. With the aid of the efficient learning and judgment abilities of AI, endoscopists can improve the accuracy of endoscopic diagnosis and treatment and avoid incorrect descriptions or judgments of gastrointestinal lesions. The present article provides an overview of the status of various artificial intelligence applications in gastric and colorectal cancers in recent years and clarifies directions for future research and clinical practice from a clinical perspective, providing a comprehensive theoretical basis for AI as a promising diagnostic and therapeutic tool for gastrointestinal cancer.
Affiliations
- Xiaoyu Li, Department of Gastroenterology, The Affiliated Hospital of Qingdao University, Qingdao, China
26. Klang E, Sourosh A, Nadkarni GN, Sharif K, Lahat A. Deep Learning and Gastric Cancer: Systematic Review of AI-Assisted Endoscopy. Diagnostics (Basel) 2023; 13:3613. [PMID: 38132197] [PMCID: PMC10742887] [DOI: 10.3390/diagnostics13243613]
Abstract
BACKGROUND Gastric cancer (GC), a significant health burden worldwide, is typically diagnosed in the advanced stages due to its non-specific symptoms and complex morphological features. Deep learning (DL) has shown potential for improving and standardizing early GC detection. This systematic review aims to evaluate the current status of DL in pre-malignant, early-stage, and gastric neoplasia analysis. METHODS A comprehensive literature search was conducted in PubMed/MEDLINE for original studies implementing DL algorithms for gastric neoplasia detection using endoscopic images. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The focus was on studies providing quantitative diagnostic performance measures and those comparing AI performance with human endoscopists. RESULTS Our review encompasses 42 studies that utilize a variety of DL techniques. The findings demonstrate the utility of DL in GC classification, detection, tumor invasion depth assessment, cancer margin delineation, lesion segmentation, and detection of early-stage and pre-malignant lesions. Notably, DL models frequently matched or outperformed human endoscopists in diagnostic accuracy. However, heterogeneity in DL algorithms, imaging techniques, and study designs precluded a definitive conclusion about the best algorithmic approach. CONCLUSIONS The promise of artificial intelligence in improving and standardizing gastric neoplasia detection, diagnosis, and segmentation is significant. This review is limited by predominantly single-center studies and undisclosed datasets used in AI training, impacting generalizability and demographic representation. Further, retrospective algorithm training may not reflect actual clinical performance, and a lack of model details hinders replication efforts. 
More research is needed to substantiate these findings, including larger-scale multi-center studies, prospective clinical trials, and comprehensive technical reporting of DL algorithms and datasets, particularly regarding the heterogeneity in DL algorithms and study designs.
Affiliations
- Eyal Klang, Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; ARC Innovation Center, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
- Ali Sourosh, Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Girish N. Nadkarni, Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA; The Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Kassem Sharif, Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
- Adi Lahat, Department of Gastroenterology, Sheba Medical Center, Affiliated with Tel Aviv University Medical School, Tel Hashomer, Ramat Gan 52621, Tel Aviv, Israel
27. Fuse Y, Takeuchi K, Hashimoto N, Nagata Y, Takagi Y, Nagatani T, Takeuchi I, Saito R. Deep learning based identification of pituitary adenoma on surgical endoscopic images: a pilot study. Neurosurg Rev 2023; 46:291. [PMID: 37910280] [DOI: 10.1007/s10143-023-02196-w]
Abstract
Accurate tumor identification during surgical excision is necessary for neurosurgeons to determine the extent of resection without damaging the surrounding tissues. No conventional technologies have achieved reliable performance for pituitary adenomas. This study proposes a deep learning approach using intraoperative endoscopic images to discriminate pituitary adenomas from non-tumorous tissue inside the sella turcica. Static images were extracted from 50 intraoperative videos of patients with pituitary adenomas. All patients underwent endoscopic transsphenoidal surgery with a 4K ultrahigh-definition endoscope. The tumor and non-tumorous tissue within the sella turcica were delineated on the static images. Using these intraoperative images, we developed and validated deep learning models to identify tumorous tissue. Model performance was evaluated using a fivefold per-patient methodology. As a proof of concept, the model's predictions were pathologically cross-referenced with a medical professional's diagnosis using the intraoperative images of a prospectively enrolled patient. In total, 605 static images were obtained. Of the 117,223 cropped patches, 58,088 were labeled as tumors, while the remaining 59,135 were labeled as non-tumorous tissue. The evaluation on the image dataset revealed that the wide-ResNet model had the highest accuracy, 0.768, with an F1 score of 0.766. A preliminary evaluation in one patient indicated alignment between the ground truth set by neurosurgeons, the model's predictions, and the histopathological findings. Our deep learning algorithm showed positive tumor discrimination performance on intraoperative 4K endoscopic images in patients with pituitary adenomas.
Affiliation(s)
- Yutaro Fuse
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
- Academia-Industry Collaboration Platform for Cultivating Medical AI Leaders (AI-MAILs), Nagoya University Graduate School of Medicine, Nagoya, Japan
- Kazuhito Takeuchi
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
- Yuichi Nagata
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
- Yusuke Takagi
- Department of Computer Science, Nagoya Institute of Technology, Nagoya, Japan
- Tetsuya Nagatani
- Department of Neurosurgery, Japanese Red Cross Aichi Medical Center Nagoya Daini Hospital, Nagoya, Japan
- Ichiro Takeuchi
- RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Department of Mechanical Systems Engineering, Graduate School of Engineering, Nagoya University, Nagoya, Japan
- Ryuta Saito
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, 466-8550, Japan
28
Dong Z, Tao X, Du H, Wang J, Huang L, He C, Zhao Z, Mao X, Ai Y, Zhang B, Liu M, Xu H, Jiang Z, Sun Y, Li X, Liu Z, Chen J, Song Y, Liu G, Luo C, Li Y, Zeng X, Liu J, Zhu Y, Wu L, Yu H. Exploring the challenge of early gastric cancer diagnostic AI system face in multiple centers and its potential solutions. J Gastroenterol 2023; 58:978-989. [PMID: 37515597 DOI: 10.1007/s00535-023-02025-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 07/10/2023] [Indexed: 07/31/2023]
Abstract
BACKGROUND Artificial intelligence (AI) performs variably across test sets of differing diversity owing to sample selection bias, which can be a stumbling block for AI applications. We previously tested an AI system named ENDOANGEL for diagnosing early gastric cancer (EGC) on single-center videos in a man-machine competition. We aimed to re-test ENDOANGEL on multi-center videos to explore the challenges of applying AI across multiple centers, then upgrade ENDOANGEL and explore solutions to those challenges. METHODS ENDOANGEL was re-tested on multi-center videos retrospectively collected from 12 institutions, and its performance was compared with that on the previously reported single-center videos. We then upgraded ENDOANGEL to ENDOANGEL-2022 with more training samples and novel algorithms and held a competition between ENDOANGEL-2022 and endoscopists. ENDOANGEL-2022 was then tested on the single-center videos and its performance compared with that on the multi-center videos; the two AI systems were also compared with each other and with the endoscopists. RESULTS Forty-six EGCs and 54 non-cancers were included in the multi-center video cohort. In diagnosing EGCs, compared with the single-center videos, ENDOANGEL showed stable sensitivity (97.83% vs. 100.00%) but sharply decreased specificity (61.11% vs. 82.54%); ENDOANGEL-2022 showed a similar tendency while achieving significantly higher specificity (79.63%, p < 0.01) and making fewer mistakes on typical lesions than ENDOANGEL. In detecting gastric neoplasms, both AI systems showed stable sensitivity but sharply decreased specificity. Nevertheless, both outperformed the endoscopists in the two competitions. CONCLUSIONS A marked increase in false positives is a prominent challenge when applying EGC diagnostic AI across multiple centers, owing to the high heterogeneity of negative cases. Optimizing the AI by adding training samples and using novel algorithms shows promise for overcoming this challenge.
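As a sanity check on the reported rates, the video-level percentages can be reproduced from plain confusion-matrix counts. The counts below are back-calculated from the stated cohort sizes (46 EGCs, 54 non-cancers), not taken from the paper itself:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# ENDOANGEL on multi-center videos: 45/46 EGCs caught, 33/54 non-cancers cleared.
sens_old, spec_old = sens_spec(tp=45, fn=1, tn=33, fp=21)
# ENDOANGEL-2022: same sensitivity assumed, 43/54 non-cancers cleared.
_, spec_new = sens_spec(tp=45, fn=1, tn=43, fp=11)

print(round(100 * sens_old, 2), round(100 * spec_old, 2), round(100 * spec_new, 2))
# -> 97.83 61.11 79.63, matching the abstract's multi-center figures
```

Seen this way, the specificity gap between the two systems is ten additional non-cancer videos correctly cleared, while sensitivity is untouched.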
Affiliation(s)
- Zehua Dong
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xiao Tao
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Hongliu Du
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Junxiao Wang
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Li Huang
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Chiyi He
- Department of Gastroenterology, Yijishan Hospital of Wannan Medical College, Wuhu, 241001, Anhui, People's Republic of China
- Zhifeng Zhao
- Department of Digestive Endoscopy, The Fourth Hospital of China Medical University, Shenyang, 110032, Liaoning Province, People's Republic of China
- Xinli Mao
- Department of Gastroenterology, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, Zhejiang, China
- Yaowei Ai
- Department of Gastroenterology, The People's Hospital of China Three Gorges University, The First People's Hospital of Yichang, Yichang, China
- Beiping Zhang
- Department of Gastroenterology, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Mei Liu
- Department of Gastroenterology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Hong Xu
- Department of Endoscopy, The First Hospital of Jilin University, Changchun, China
- Zhenyu Jiang
- Department of Gastroenterology, The Second Affiliated Hospital of Baotou Medical College, Baotou, Inner Mongolia, China
- Yunwei Sun
- Department of Gastroenterology, Ruijin Hospital, Shanghai Jiaotong University, Gubei Branch, Shanghai, People's Republic of China
- Xiuling Li
- Department of Gastroenterology, School of Clinical Medicine, Henan Provincial People's Hospital, People's Hospital of Zhengzhou University, Henan University, Zhengzhou, Henan, China
- Zhihong Liu
- Department of Gastroenterology, Jilin City People's Hospital, Jilin, China
- Jinzhong Chen
- Endoscopy Center, School of Medicine, The First Affiliated Hospital of Xiamen University, Xiamen University, Xiamen, China
- Ying Song
- Department of Gastroenterology, Xi'an Gaoxin Hospital, Xi'an, 710032, Shaanxi Province, China
- Guowei Liu
- Yi Xin Clinic, Changzhou, Jiangsu, China
- Chaijie Luo
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Yanxia Li
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Xiaoquan Zeng
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Jun Liu
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Yijie Zhu
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Lianlian Wu
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Department of Gastroenterology, Renmin Hospital of Wuhan University, 99 Zhangzhidong Road, Wuhan, 430060, Hubei Province, China
- Honggang Yu
- Renmin Hospital of Wuhan University, Wuhan, China
- Key Laboratory of Hubei Province for Digestive System Disease, Renmin Hospital of Wuhan University, Wuhan, China
- Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Department of Gastroenterology, Renmin Hospital of Wuhan University, 99 Zhangzhidong Road, Wuhan, 430060, Hubei Province, China
29
Kiran N, Sapna F, Kiran F, Kumar D, Raja F, Shiwlani S, Paladini A, Sonam F, Bendari A, Perkash RS, Anjali F, Varrassi G. Digital Pathology: Transforming Diagnosis in the Digital Age. Cureus 2023; 15:e44620. [PMID: 37799211 PMCID: PMC10547926 DOI: 10.7759/cureus.44620] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Accepted: 09/03/2023] [Indexed: 10/07/2023] Open
Abstract
In the context of rapid technological advancements, the narrative review titled "Digital Pathology: Transforming Diagnosis in the Digital Age" explores the significant impact of digital pathology in reshaping diagnostic approaches. The review examines the field's wide-ranging effects, including remote consultations and artificial intelligence (AI)-assisted analysis, revealing the ongoing transformation taking place. It explores the process of digitizing traditional glass slides, which aims to improve accessibility and facilitate sharing, and addresses the complexities associated with data security and standardization. Incorporating AI enhances pathologists' diagnostic capabilities and accelerates analytical procedures. Furthermore, the review highlights the growing importance of collaborative networks that facilitate global knowledge sharing, and it emphasizes the technology's significant impact on medical education and patient care. This narrative review aims to provide an overview of digital pathology's transformative and innovative potential, highlighting its disruptive nature in reshaping diagnostic practices.
Affiliation(s)
- Nfn Kiran
- Pathology and Laboratory Medicine, Staten Island University Hospital, New York, USA
- Fnu Sapna
- Pathology and Laboratory Medicine, Albert Einstein College of Medicine, New York, USA
- Fnu Kiran
- Pathology and Laboratory Medicine, University of Missouri School of Medicine, Columbia, USA
- Deepak Kumar
- Pathology and Laboratory Medicine, University of Missouri, Columbia, USA
- Fnu Raja
- Pathology and Laboratory Medicine, MetroHealth Medical Center, Cleveland, USA
- Sheena Shiwlani
- Pathology and Laboratory Medicine, Isra University, Karachi, PAK
- Pathology, Mount Sinai Hospital, New York, USA
- Antonella Paladini
- Clinical Medicine, Public Health and Life Science (MESVA), University of L'Aquila, L'Aquila, ITA
- Fnu Sonam
- Pathology and Laboratory Medicine, Liaquat University of Medical and Health Sciences, Sukkur, PAK
- Medicine, Mustafai Trust Central Hospital, Sukkur, PAK
- Ahmed Bendari
- Pathology and Laboratory Medicine, Lenox Hill Hospital, New York, USA
- Fnu Anjali
- Internal Medicine, Sakhi Baba General Hospital, Sukkur, PAK
30
Zhang X, Tang D, Zhou JD, Ni M, Yan P, Zhang Z, Yu T, Zhan Q, Shen Y, Zhou L, Zheng R, Zou X, Zhang B, Li WJ, Wang L. A real-time interpretable artificial intelligence model for the cholangioscopic diagnosis of malignant biliary stricture (with videos). Gastrointest Endosc 2023; 98:199-210.e10. [PMID: 36849057 DOI: 10.1016/j.gie.2023.02.026] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Revised: 01/22/2023] [Accepted: 02/20/2023] [Indexed: 03/01/2023]
Abstract
BACKGROUND AND AIMS It is crucial to accurately identify malignant biliary strictures (MBSs) for early curative treatment. This study aimed to develop a real-time interpretable artificial intelligence (AI) system to predict MBSs under digital single-operator cholangioscopy (DSOC). METHODS A novel interpretable AI system called MBSDeiT was developed, consisting of 2 models that first identify qualified images and then predict MBSs in real time. The overall efficiency of MBSDeiT was validated at the image level on internal, external, and prospective testing data sets and in subgroup analyses, and at the video level on the prospective data sets; these findings were compared with those of the endoscopists. The association between AI predictions and endoscopic features was evaluated to increase interpretability. RESULTS MBSDeiT first automatically selects qualified DSOC images, with an AUC of .963 on the internal testing data set and .968 to .973 on the external testing data sets, and then identifies MBSs, with an AUC of .971 on the internal testing data set, .978 to .999 on the external testing data sets, and .976 on the prospective testing data set. MBSDeiT accurately identified 92.3% of MBSs in prospective testing videos. Subgroup analyses confirmed the stability and robustness of MBSDeiT. The AI system achieved performance superior to that of both expert and novice endoscopists. The AI predictions were significantly associated with 4 endoscopic features (nodular mass, friability, raised intraductal lesion, and abnormal vessels; P < .05) under DSOC, consistent with the endoscopists' predictions. CONCLUSIONS These findings suggest that MBSDeiT could be a promising approach for the accurate diagnosis of MBSs under DSOC.
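The two-model design (a quality gate followed by the malignancy classifier) can be sketched as a simple pipeline. The threshold and the lambda "models" below are placeholders, not the actual DeiT networks:

```python
def two_stage_predict(frames, quality_model, mbs_model, q_threshold=0.5):
    """Stage 1 filters unqualified frames; stage 2 scores only frames that pass."""
    results = []
    for frame in frames:
        if quality_model(frame) < q_threshold:
            results.append(None)              # frame rejected as unqualified
        else:
            results.append(mbs_model(frame))  # malignancy probability
    return results

# Toy stand-ins: frame "quality" is the value itself; the MBS score is fixed.
preds = two_stage_predict([0.2, 0.9], quality_model=lambda f: f,
                          mbs_model=lambda f: 0.8)
print(preds)  # [None, 0.8]
```

Gating on image quality first means the downstream classifier is only ever asked about frames it was trained to interpret, which is what makes video-level, real-time use feasible.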
Affiliation(s)
- Xiang Zhang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Nanjing Medical University, Nanjing Medical University, Nanjing, Jiangsu, China
- Dehua Tang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Nanjing Medical University, Nanjing Medical University, Nanjing, Jiangsu, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Jin-Dong Zhou
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, Jiangsu, China; National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing, Jiangsu, China
- Muhan Ni
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Peng Yan
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Zhenyu Zhang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Tao Yu
- Departments of Gastroenterology, Qilu Hospital of Shandong University, Jinan, Shandong, China
- Qiang Zhan
- Department of Gastroenterology, Wuxi People's Hospital Affiliated to Nanjing Medical University, Wuxi, Jiangsu, China
- Yonghua Shen
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Nanjing Medical University, Nanjing Medical University, Nanjing, Jiangsu, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Lin Zhou
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Nanjing Medical University, Nanjing Medical University, Nanjing, Jiangsu, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Ruhua Zheng
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Nanjing Medical University, Nanjing Medical University, Nanjing, Jiangsu, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Xiaoping Zou
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Nanjing Medical University, Nanjing Medical University, Nanjing, Jiangsu, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China; Department of Gastroenterology, Taikang Xianlin Drum Tower Hospital, Nanjing, Jiangsu, China
- Bin Zhang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Nanjing Medical University, Nanjing Medical University, Nanjing, Jiangsu, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Wu-Jun Li
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, Jiangsu, China; National Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing, Jiangsu, China; Center for Medical Big Data, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Lei Wang
- Department of Gastroenterology, Nanjing Drum Tower Hospital, Clinical College of Nanjing Medical University, Nanjing Medical University, Nanjing, Jiangsu, China; Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
31
Wang Q, Liu Y, Li Z, Tang Y, Long W, Xin H, Huang X, Zhou S, Wang L, Liang B, Li Z, Xu M. Establishment of a novel lysosomal signature for the diagnosis of gastric cancer with in-vitro and in-situ validation. Front Immunol 2023; 14:1182277. [PMID: 37215115 PMCID: PMC10196375 DOI: 10.3389/fimmu.2023.1182277] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Accepted: 04/21/2023] [Indexed: 05/24/2023] Open
Abstract
Background Gastric cancer (GC) is a malignancy driven by a multifactorial combination of genetic, environmental, and microbial factors. Targeting lysosomes holds significant potential in the treatment of numerous diseases, yet lysosome-related genetic markers for early GC detection have not been established, even though establishing them with artificial intelligence algorithms could greatly advance their value in translational medicine, particularly for immunotherapy. Methods To this end, we utilized transcriptomic and single-cell data and integrated 20 mainstream machine-learning (ML) algorithms to optimize an AI-based predictor for GC diagnosis. The model's reliability was initially confirmed by currently used enrichment analyses. We then explored the immunological implications of the genes comprising the predictor and evaluated GC patients' responses to immunotherapy and chemotherapy. Finally, we performed systematic laboratory work to validate the central genes at both the expression and functional levels, further demonstrating the model's reliability for guiding cancer immunotherapy. Results Eight lysosome-related genes were selected for predictive model construction, using root mean square error (RMSE) as the reference standard and a random forest (RF) algorithm for ranking: ADRB2, KCNE2, MYO7A, IFI30, LAMP3, TPP1, HPS4, and NEU4. Taking accuracy, precision, recall, F1 score, and ROC-AUC into account, the Extra Trees model proved optimal, with an AUC of 0.92. The superiority of the diagnostic signature was also reflected in the analysis of immune features.
Conclusion In summary, this study is the first to integrate some 20 mainstream ML algorithms to construct an AI-based diagnostic predictor for gastric cancer from lysosome-related genes. The model will facilitate accurate prediction of early gastric cancer incidence and subsequent risk assessment or precise, individualized immunotherapy, thus improving the survival prognosis of GC patients.
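The selection-then-classification workflow described above (rank genes by random-forest importance, keep eight, then fit an Extra Trees classifier and read off ROC-AUC) can be sketched with scikit-learn on synthetic expression data. Nothing below uses the study's actual cohort or its eight named genes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic "expression matrix": 200 samples x 20 genes; only genes 0-2 are informative.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Rank genes by RF importance and keep the top eight, mimicking the 8-gene panel.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top8 = np.argsort(rf.feature_importances_)[::-1][:8]

# Fit the Extra Trees classifier on the selected genes and evaluate by ROC-AUC.
Xtr, Xte, ytr, yte = train_test_split(X[:, top8], y, test_size=0.25, random_state=0)
et = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, et.predict_proba(Xte)[:, 1])
print(len(top8), round(auc, 3))
```

Note that in a rigorous version the importance ranking would be computed inside each cross-validation fold; ranking on the full data first, as here, risks optimistic bias.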
Affiliation(s)
- Qi Wang
- Department of Gastroenterology, Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang, China
- Ying Liu
- Department of Cardiology, Sixth Medical Center, PLA General Hospital, Beijing, China
- Zhangzuo Li
- Department of Cell Biology, School of Medicine, Jiangsu University, Zhenjiang, China
- Yidan Tang
- Faculty of Medicine, University of Debrecen, Debrecen, Hungary
- Weiguo Long
- Department of Pathology, Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang, China
- Huaiyu Xin
- Department of Gastroenterology, Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang, China
- Xufeng Huang
- Faculty of Dentistry, University of Debrecen, Debrecen, Hungary
- Shujing Zhou
- Faculty of Medicine, University of Debrecen, Debrecen, Hungary
- Longbin Wang
- Department of Clinical Veterinary Medicine, Huazhong Agricultural University, Wuhan, China
- Bochuan Liang
- Faculty of Chinese Medicine, Nanchang Medical College, Nanchang, China
- Zhengrui Li
- Department of Oral and Maxillofacial-Head and Neck Oncology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center for Stomatology and National Clinical Research Center for Oral Diseases, Shanghai JiaoTong University, Shanghai, China
- Shanghai Key Laboratory of Stomatology, Shanghai JiaoTong University, Shanghai, China
- Min Xu
- Department of Gastroenterology, Affiliated Hospital of Jiangsu University, Jiangsu University, Zhenjiang, China
32
Martins BC, Moura RN, Kum AST, Matsubayashi CO, Marques SB, Safatle-Ribeiro AV. Endoscopic Imaging for the Diagnosis of Neoplastic and Pre-Neoplastic Conditions of the Stomach. Cancers (Basel) 2023; 15:cancers15092445. [PMID: 37173912 PMCID: PMC10177554 DOI: 10.3390/cancers15092445] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 04/20/2023] [Accepted: 04/21/2023] [Indexed: 05/15/2023] Open
Abstract
Gastric cancer is an aggressive disease with low long-term survival rates. An early diagnosis is essential to offer a better prognosis and curative treatment. Upper gastrointestinal endoscopy is the main tool for the screening and diagnosis of patients with gastric pre-neoplastic conditions and early lesions. Image-enhanced techniques such as conventional chromoendoscopy, virtual chromoendoscopy, magnifying imaging, and artificial intelligence improve the diagnosis and the characterization of early neoplastic lesions. In this review, we provide a summary of the currently available recommendations for the screening, surveillance, and diagnosis of gastric cancer, focusing on novel endoscopy imaging technologies.
Affiliation(s)
- Bruno Costa Martins
- Endoscopy Unit, Instituto do Cancer do Estado de São Paulo, University of São Paulo, São Paulo 01246-000, Brazil
- Fleury Medicina e Saude, São Paulo 01333-010, Brazil
- Renata Nobre Moura
- Endoscopy Unit, Instituto do Cancer do Estado de São Paulo, University of São Paulo, São Paulo 01246-000, Brazil
- Fleury Medicina e Saude, São Paulo 01333-010, Brazil
- Angelo So Taa Kum
- Endoscopy Unit, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, University of São Paulo, São Paulo 05403-010, Brazil
- Carolina Ogawa Matsubayashi
- Endoscopy Unit, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, University of São Paulo, São Paulo 05403-010, Brazil
- Sergio Barbosa Marques
- Fleury Medicina e Saude, São Paulo 01333-010, Brazil
- Endoscopy Unit, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, University of São Paulo, São Paulo 05403-010, Brazil
- Adriana Vaz Safatle-Ribeiro
- Endoscopy Unit, Instituto do Cancer do Estado de São Paulo, University of São Paulo, São Paulo 01246-000, Brazil
33
A deep-learning based system using multi-modal data for diagnosing gastric neoplasms in real-time (with video). Gastric Cancer 2023; 26:275-285. [PMID: 36520317 DOI: 10.1007/s10120-022-01358-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/25/2022] [Indexed: 12/23/2022]
Abstract
BACKGROUND White light (WL) and weak-magnifying (WM) endoscopy are both important methods for diagnosing gastric neoplasms. This study constructed a deep-learning system named ENDOANGEL-MM (multi-modal) for real-time diagnosis of gastric neoplasms using WL and WM data. METHODS WL and WM images of the same lesion were combined into image-pairs. A total of 4201 images, 7436 image-pairs, and 162 videos were used for model construction and validation. Five models were constructed: two single-modal models (WL, WM) and three multi-modal models (data fusion at the task, feature, and input levels). The models were tested at three levels: images, videos, and prospective patients. The best model was selected as ENDOANGEL-MM. We compared the performance of the models against endoscopists and conducted a diagnostic study to explore ENDOANGEL-MM's ability to assist. RESULTS Model 4 (ENDOANGEL-MM) showed the best performance among the five models; Model 2 performed best among the single-modal models. The accuracy of ENDOANGEL-MM was higher than that of Model 2 on still images, real-time videos, and prospective patients (86.54 vs 78.85%, P = 0.134; 90.00 vs 85.00%, P = 0.179; 93.55 vs 70.97%, P < 0.001). Model 2 and ENDOANGEL-MM significantly outperformed endoscopists on WM data (85.00 vs 71.67%, P = 0.002) and multi-modal data (90.00 vs 76.17%, P = 0.002), respectively. With the assistance of ENDOANGEL-MM, the accuracy of non-experts improved significantly (85.75 vs 70.75%, P = 0.020), to a level not significantly different from that of experts (85.75 vs 89.00%, P = 0.159). CONCLUSIONS The multi-modal model constructed by feature-level fusion showed the best performance. ENDOANGEL-MM identified gastric neoplasms with good accuracy and has a potential role in real clinical practice.
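The fusion strategies the abstract names differ only in where the two modalities meet. A toy numpy sketch of the contrast between feature-level fusion (the winning design) and task-level fusion, with made-up feature vectors and a placeholder linear head rather than the actual networks:

```python
import numpy as np

wl_feat = np.array([0.2, 0.7, 0.1])  # features from the white-light branch
wm_feat = np.array([0.6, 0.3, 0.9])  # features from the weak-magnifying branch

# Feature-level fusion: concatenate branch features, then apply one shared
# classification head over the joint representation.
fused = np.concatenate([wl_feat, wm_feat])
w = np.full(fused.shape, 0.5)                     # placeholder head weights
feature_level = 1.0 / (1.0 + np.exp(-fused @ w))  # sigmoid score from joint features

# Task-level fusion: each modality gets its own classifier; merge the two scores.
task_level = 0.5 * (wl_feat.mean() + wm_feat.mean())

print(fused.shape)  # (6,): the head sees both modalities at once
```

The intuition for feature-level fusion winning is that the shared head can exploit cross-modality interactions, which a late average of two independent scores cannot.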
34
Liu Y, Wen H, Wang Q, Du S. Research trends in endoscopic applications in early gastric cancer: A bibliometric analysis of studies published from 2012 to 2022. Front Oncol 2023; 13:1124498. [PMID: 37114137 PMCID: PMC10129370 DOI: 10.3389/fonc.2023.1124498] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 03/13/2023] [Indexed: 04/29/2023] Open
Abstract
Background Endoscopy is the optimal method of diagnosing and treating early gastric cancer (EGC), and it is therefore important to keep up with the rapid development of endoscopic applications in EGC. This study utilized bibliometric analysis to describe the development, current research progress, hotspots, and emerging trends in this field. Methods We retrieved publications about endoscopic applications in EGC from 2012 to 2022 from Web of Science™ (Clarivate™, Philadelphia, PA, USA) Core Collection (WoSCC). We mainly used CiteSpace (version 6.1.R3) and VOSviewer (version 1.6.18) to perform the collaboration network analysis, co-cited analysis, co-occurrence analysis, cluster analysis, and burst detection. Results A total of 1,333 publications were included. Overall, both the number of publications and the average number of citations per document per year increased annually. Among the 52 countries/regions that were included, Japan contributed the most in terms of publications, citations, and H-index, followed by the Republic of Korea and China. The National Cancer Center, based in both Japan and the Republic of Korea, ranked first among institutions in terms of number of publications, citation impact, and the average number of citations. Yong Chan Lee was the most productive author, and Ichiro Oda had the highest citation impact. In terms of cited authors, Gotoda Takuji had both the highest citation impact and the highest centrality. Among journals, Surgical Endoscopy and Other Interventional Techniques had the most publications, and Gastric Cancer had the highest citation impact and H-index. Among all publications and cited references, a paper by Smyth E C et al., followed by one by Gotoda T et al., had the highest citation impact. Using keywords co-occurrence and cluster analysis, 1,652 author keywords were categorized into 26 clusters, and we then divided the clusters into six groups. 
The largest and newest clusters were endoscopic submucosal dissection and artificial intelligence (AI), respectively. Conclusions Over the last decade, research into endoscopic applications in EGC has gradually increased. Japan and the Republic of Korea have contributed the most, but research in this field in China, from an initially low base, is developing at a striking speed. However, a lack of collaboration among countries, institutions, and authors is common and should be addressed in the future. The main focus of research in this field (i.e., the largest cluster) is endoscopic submucosal dissection, and the topic at the frontier (i.e., the newest cluster) is AI. Future research should focus on the application of AI in endoscopy and its implications for the clinical diagnosis and treatment of EGC.
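At its core, the keyword co-occurrence analysis that tools like VOSviewer and CiteSpace perform reduces to counting keyword pairs across papers before any clustering. A minimal sketch with invented keyword sets (real input would come from WoSCC records):

```python
from collections import Counter
from itertools import combinations

# Invented author-keyword sets for three papers.
papers = [
    {"early gastric cancer", "endoscopic submucosal dissection"},
    {"early gastric cancer", "artificial intelligence"},
    {"early gastric cancer", "artificial intelligence", "deep learning"},
]

cooc = Counter()
for keywords in papers:
    for pair in combinations(sorted(keywords), 2):  # sort so (a, b) == (b, a)
        cooc[pair] += 1

print(cooc[("artificial intelligence", "early gastric cancer")])  # -> 2
```

The resulting pair counts form the weighted edges of the co-word network; cluster detection and burst analysis are then run on that graph.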
Collapse
Affiliation(s)
- Yuan Liu
- Graduate School of Beijing University of Chinese Medicine, Beijing, China
- Haolang Wen
- Graduate School of Beijing University of Chinese Medicine, Beijing, China
- Qiao Wang
- Graduate School of Beijing University of Chinese Medicine, Beijing, China
- Shiyu Du
- Department of Gastroenterology, China-Japan Friendship Hospital, Beijing, China
- *Correspondence: Shiyu Du,
|
35
|
Development and Validation of Deep Learning Models for the Multiclassification of Reflux Esophagitis Based on the Los Angeles Classification. JOURNAL OF HEALTHCARE ENGINEERING 2023; 2023:7023731. [PMID: 36852218 PMCID: PMC9966565 DOI: 10.1155/2023/7023731] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Revised: 08/16/2022] [Accepted: 02/06/2023] [Indexed: 02/20/2023]
Abstract
This study evaluates, for the first time, the feasibility of deep learning (DL) models in the multiclassification of reflux esophagitis (RE) endoscopic images according to the Los Angeles (LA) classification. The images were divided into three groups: normal, LA A + B, and LA C + D. The images from the HyperKvasir dataset and Suzhou hospital were split into training and validation datasets at a ratio of 4:1, while the images from Jintan hospital formed an independent test set. CNN- and Transformer-architecture models (MobileNet, ResNet, Xception, EfficientNet, ViT, and ConvMixer) were trained via transfer learning in Keras. Model decisions were visualized using Gradient-weighted Class Activation Mapping (Grad-CAM). In both the validation set and the test set, the EfficientNet model showed the best performance: accuracy (0.962 and 0.957), recall for LA A + B (0.970 and 0.925) and LA C + D (0.922 and 0.930), macro-recall (0.946 and 0.928), Matthews correlation coefficient (0.936 and 0.884), and Cohen's kappa (0.910 and 0.850), outperforming the other models and the endoscopists. For the EfficientNet model, Grad-CAM heatmaps highlighted the target lesions on the original images. This study developed a series of DL-based computer vision models, with interpretable Grad-CAM output, to evaluate the feasibility of multiclassification of RE endoscopic images. It is the first to suggest that DL-based classifiers show promise in the endoscopic diagnosis of esophagitis.
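The abstract reports macro-recall and Cohen's kappa alongside accuracy, which are the appropriate summary metrics for an imbalanced three-class problem like this. As a rough illustration of how they are computed from a multiclass confusion matrix (the matrix below is invented for demonstration, not the paper's data):

```python
# Illustrative 3-class confusion matrix (rows = true class, cols = predicted);
# classes are normal, LA A + B, LA C + D, in that order. Counts are invented.
cm = [
    [50, 2, 0],
    [3, 40, 2],
    [0, 3, 30],
]

def macro_recall(cm):
    """Mean of per-class recall: diagonal entry over its row sum."""
    return sum(cm[i][i] / sum(cm[i]) for i in range(len(cm))) / len(cm)

def cohens_kappa(cm):
    """Agreement between true and predicted labels, corrected for chance."""
    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / n
    # Chance agreement: sum over classes of (row total * column total) / n^2.
    expected = sum(
        sum(cm[i]) * sum(cm[j][i] for j in range(len(cm)))
        for i in range(len(cm))
    ) / (n * n)
    return (observed - expected) / (1 - expected)
```

Unlike plain accuracy, macro-recall weights each LA grade equally regardless of class size, which is why it is a fairer comparator between the models and the endoscopists here.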
|
36
|
Renna F, Martins M, Neto A, Cunha A, Libânio D, Dinis-Ribeiro M, Coimbra M. Artificial Intelligence for Upper Gastrointestinal Endoscopy: A Roadmap from Technology Development to Clinical Practice. Diagnostics (Basel) 2022; 12:1278. [PMID: 35626433 PMCID: PMC9141387 DOI: 10.3390/diagnostics12051278] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 05/14/2022] [Accepted: 05/18/2022] [Indexed: 02/05/2023] Open
Abstract
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). By 2035, a 20% increase in both incidence and mortality is expected owing to demographic effects if no interventions are made. Upper GI endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis during UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential to compensate for the pitfalls of UGIE by leveraging deep learning architectures able to efficiently recognize endoscopic patterns in UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the threefold tasks of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, both gastric precancerous conditions and neoplastic lesions. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges in the roadmap for the effective integration of AI tools into UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures, methods able to embed domain knowledge into image/video classifiers, and the availability of large, annotated datasets.
Affiliation(s)
- Francesco Renna
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
- Miguel Martins
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
- Alexandre Neto
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Escola de Ciências e Tecnologia, Universidade de Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- António Cunha
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Escola de Ciências e Tecnologia, Universidade de Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- Diogo Libânio
- Departamento de Ciências da Informação e da Decisão em Saúde/Centro de Investigação em Tecnologias e Serviços de Saúde (CIDES/CINTESIS), Faculdade de Medicina, Universidade do Porto, 4200-319 Porto, Portugal; (D.L.); (M.D.-R.)
- Mário Dinis-Ribeiro
- Departamento de Ciências da Informação e da Decisão em Saúde/Centro de Investigação em Tecnologias e Serviços de Saúde (CIDES/CINTESIS), Faculdade de Medicina, Universidade do Porto, 4200-319 Porto, Portugal; (D.L.); (M.D.-R.)
- Miguel Coimbra
- Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; (M.M.); (A.N.); (A.C.); (M.C.)
- Faculdade de Ciências, Universidade do Porto, 4169-007 Porto, Portugal
|