1
Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516] [DOI: 10.1016/j.compbiomed.2023.107777]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis, medical image retrieval, and mining. Medical image data also include electronic health record data and gene information data. Although intelligent imaging offers clear advantages over traditional methods that rely on handcrafted features, medical image analysis remains challenging owing to the diversity of imaging modalities and clinical pathologies. This paper analyzes and summarizes the concepts behind the principal identification methods, including machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images. We reviewed recent studies to provide a comprehensive overview of how these methods are applied to medical image analysis tasks such as object detection, image classification, image registration, and segmentation. In particular, we emphasized the latest progress and contributions of the different methods, summarized both by application scenario (classification, segmentation, detection, and image registration) and by application area (pulmonary, brain, digital pathology, skin, renal, breast, neuromyelitis, vertebrae, and musculoskeletal imaging, among others). A critical discussion of open challenges and directions for future research concludes the review; notably, algorithms proven in computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
- School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
2
Wang L, Yang Y, Yang A, Li T. Lightweight deep learning model incorporating an attention mechanism and feature fusion for automatic classification of gastric lesions in gastroscopic images. Biomed Opt Express 2023; 14:4677-4695. [PMID: 37791283] [PMCID: PMC10545198] [DOI: 10.1364/boe.487456]
Abstract
Accurate diagnosis of the various lesions that arise during gastric cancer formation is an important problem for doctors, and automatic diagnosis tools based on deep learning can help improve diagnostic accuracy. Most existing deep learning-based methods detect only a limited number of lesions in the formation stage of gastric cancer, and their classification accuracy needs to be improved. To this end, this study proposed an attention mechanism feature fusion deep learning model with only 14 million (M) parameters. Based on that model, the automatic classification of a wide range of lesions covering the stage of gastric cancer formation was investigated, including non-neoplasm (gastritis and intestinal metaplasia), low-grade intraepithelial neoplasia, and early gastric cancer (including high-grade intraepithelial neoplasia and early gastric cancer). A total of 4455 magnification endoscopy with narrow-band imaging (ME-NBI) images from 1188 patients were collected to train and test the proposed method. On the test dataset, compared with the best-performing advanced gastric lesion classification method (overall accuracy = 94.3%, parameters = 23.9 M), the proposed method achieved both higher overall accuracy and a lighter model (overall accuracy = 95.6%, parameters = 14 M). The accuracy, sensitivity, and specificity for low-grade intraepithelial neoplasia were 94.5%, 93.0%, and 96.5%, respectively, achieving state-of-the-art classification performance. In conclusion, our method has demonstrated its potential for diagnosing the various lesions that arise at the stage of gastric cancer formation.
Affiliation(s)
- Lingxiao Wang
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin 300192, China
- Yingyun Yang
- Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Aiming Yang
- Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Ting Li
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin 300192, China
3
Lee J, Lee H, Chung JW. The Role of Artificial Intelligence in Gastric Cancer: Surgical and Therapeutic Perspectives: A Comprehensive Review. J Gastric Cancer 2023; 23:375-387. [PMID: 37553126] [PMCID: PMC10412973] [DOI: 10.5230/jgc.2023.23.e31]
Abstract
Stomach cancer has a high annual mortality rate worldwide, necessitating early detection and accurate treatment; even experienced specialists can make erroneous judgments owing to several factors. Artificial intelligence (AI) technologies are being developed rapidly to assist in this field. Here, we aimed to determine how AI technology is used in gastric cancer diagnosis and to analyze how it helps patients and surgeons. Early detection and correct treatment of early gastric cancer (EGC) can greatly increase survival rates. To achieve this, it is important to accurately establish the diagnosis, the depth of the lesion, and the presence or absence of lymph node metastasis, and to suggest an appropriate treatment method. Deep learning algorithms trained on gastric lesion endoscopy images, morphological characteristics, and patient clinical information detect gastric lesions with high accuracy, sensitivity, and specificity, and predict their morphological characteristics. In this way, AI supports specialists' judgment in selecting the correct treatment among endoscopic procedures and radical resection, and helps predict the resection margins of lesions. AI technology has also increased the diagnostic rate of both relatively inexperienced and skilled endoscopic diagnosticians. However, the data used for learning had limitations, including quantitatively insufficient data, retrospective study designs, single-center designs, and limited variety of lesions. Nevertheless, assisted endoscopic diagnosis incorporating deep learning technology is sufficiently practical and future-oriented, and can play an important role in suggesting accurate treatment plans to surgeons for the resection of lesions in the treatment of EGC.
Affiliation(s)
- JunHo Lee
- Division of Gastroenterology, Department of Internal Medicine, Gachon University Gil Medical Center, Incheon, Korea
- Corp. CAIMI, Incheon, Korea
- Hanna Lee
- Division of Gastroenterology, Department of Internal Medicine, Gachon University Gil Medical Center, Incheon, Korea
- Jun-Won Chung
- Division of Gastroenterology, Department of Internal Medicine, Gachon University Gil Medical Center, Incheon, Korea
- Corp. CAIMI, Incheon, Korea
4
Hoffmann C, Kobetic M, Alford N, Blencowe N, Ramirez J, Macefield R, Blazeby JM, Avery KNL, Potter S. Shared Learning Utilizing Digital Methods in Surgery to Enhance Transparency in Surgical Innovation: Protocol for a Scoping Review. JMIR Res Protoc 2022; 11:e37544. [PMID: 36074555] [PMCID: PMC9501681] [DOI: 10.2196/37544]
Abstract
BACKGROUND Surgical innovation can lead to important improvements in patient outcomes. Currently, information and knowledge about novel procedures and devices are disseminated informally and in an unstandardized way (eg, through social media). This can lead to ineffective and inefficient knowledge sharing among surgeons, which can result in the harmful repetition of mistakes and delay in the uptake of promising innovation. Improvements are needed in the way that learning in surgical innovation is shared through the development of novel, real-time methods, informed by a contemporary and comprehensive investigation of existing methods. OBJECTIVE The aim of this scoping review is to explore the application of existing digital methods for training/education and feedback to surgeons in the context of performing invasive surgical procedures. This work will (1) summarize existing methods for shared learning in surgery and how they are characterized and operationalized, (2) examine the impact of their application, and (3) explore their benefits and barriers to implementation. The findings of this scoping review will inform the development of novel, real-time methods to optimize shared learning in surgical innovation. METHODS This study will adhere to the recommended guidelines for conducting scoping reviews. A total of 6 different searches will be conducted within multiple sources (2 electronic databases, journals, social media, gray literature, commercial websites, and snowball searches) to comprehensively identify relevant articles and data. Searches will be limited to articles published in the English language within the last 5 years. Wherever possible, a 2-stage study selection process will be followed, whereby the eligibility of articles will be assessed through title, abstract, and full-text screening independently by 2 reviewers. Inclusion criteria will be articles providing data on (1) fully qualified theater staff involved in performing invasive procedures, (2) one or more methods for shared learning (ie, digital means for training/education and feedback), and (3) qualitative or quantitative evaluations of this method. Data will be extracted (10% double data extraction by an independent reviewer) into a piloted proforma and analyzed using descriptive statistics, narrative summaries, and principles of thematic analysis. RESULTS The study commenced in October 2021 and is planned to be completed in 2023. To date, systematic searches have been applied to 2 electronic databases (MEDLINE and Web of Science) and returned a total of 10,093 records. The results of this scoping review will be published as open access in a peer-reviewed journal. CONCLUSIONS This scoping review of methods for shared learning in surgery is, to our knowledge, the most comprehensive and up-to-date investigation that maps current information on this topic. Ultimately, efficient and effective sharing of information and knowledge of novel procedures and devices has the potential to optimize the evaluation of early-phase surgical research and reduce harmful innovation. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/37544.
Affiliation(s)
- Christin Hoffmann
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Matthew Kobetic
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Natasha Alford
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Natalie Blencowe
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Division of Surgery, Bristol Royal Infirmary, University Hospitals Bristol and Weston National Health Service Foundation Trust, Bristol, United Kingdom
- Jozel Ramirez
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Rhiannon Macefield
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Jane M Blazeby
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Division of Surgery, Bristol Royal Infirmary, University Hospitals Bristol and Weston National Health Service Foundation Trust, Bristol, United Kingdom
- Kerry N L Avery
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Shelley Potter
- National Institute for Health and Care Research, Bristol Biomedical Research Centre, Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Bristol Breast Care Centre, North Bristol National Health Service Trust, Bristol, United Kingdom
- Translational Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
5
Zhao Y, Hu B, Wang Y, Yin X, Jiang Y, Zhu X. Identification of gastric cancer with convolutional neural networks: a systematic review. Multimed Tools Appl 2022; 81:11717-11736. [PMID: 35221775] [PMCID: PMC8856868] [DOI: 10.1007/s11042-022-12258-8]
Abstract
The identification of diseases is inseparable from artificial intelligence, and convolutional neural networks, an important branch of artificial intelligence, play an important role in the identification of gastric cancer. We conducted a systematic review to summarize the current applications of convolutional neural networks in gastric cancer identification. Original articles published in the Embase, Cochrane Library, PubMed, and Web of Science databases were systematically retrieved according to relevant keywords, and data were extracted from the published papers. A total of 27 articles on the identification of gastric cancer using medical images were retrieved: 19 applied to endoscopic images and 8 to pathological images. Sixteen studies explored the performance of gastric cancer detection, 7 explored classification, 2 reported segmentation, and 2 analyzed the delineation of margins. The convolutional neural network architectures involved included AlexNet, ResNet, VGG, Inception, DenseNet, and DeepLab, among others. Reported accuracies ranged from 77.3% to 98.7%. Systems based on convolutional neural networks have shown good performance in the identification of gastric cancer, and artificial intelligence is expected to provide more accurate information and more efficient judgments for doctors diagnosing diseases in clinical work.
Affiliation(s)
- Yuxue Zhao
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao 266073, China
- Bo Hu
- Department of Thoracic Surgery, Qingdao Municipal Hospital, Qingdao, China
- Ying Wang
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao 266073, China
- Xiaomeng Yin
- Pediatrics Intensive Care Unit, Qingdao Municipal Hospital, Qingdao, China
- Yuanyuan Jiang
- International Medical Services, Qilu Hospital of Shandong University, Jinan, China
- Xiuli Zhu
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao 266073, China
6
Zhuang H, Bao A, Tan Y, Wang H, Xie Q, Qiu M, Xiong W, Liao F. Application and prospect of artificial intelligence in digestive endoscopy. Expert Rev Gastroenterol Hepatol 2022; 16:21-31. [PMID: 34937459] [DOI: 10.1080/17474124.2022.2020646]
Abstract
INTRODUCTION With the progress of science and technology, artificial intelligence, represented by deep learning, has gradually begun to be applied in the medical field. It has been applied to benign gastrointestinal lesions, tumors, early cancer, inflammatory bowel disease, and diseases of the gallbladder, pancreas, and other organs. This review summarizes the latest research results on artificial intelligence in digestive endoscopy and discusses its prospects in digestive system diseases. AREAS COVERED We retrieved relevant documents on artificial intelligence in digestive tract diseases from PubMed and Medline. This review elaborates on the knowledge of computer-aided diagnosis in digestive endoscopy. EXPERT OPINION Artificial intelligence significantly improves diagnostic accuracy, reduces physicians' workload, and provides evidence for clinical diagnosis and treatment. In the near future, artificial intelligence will have high application value in the field of medicine.
Affiliation(s)
- Huangming Zhuang
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Anyu Bao
- Clinical Laboratory, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Yulin Tan
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Hanyu Wang
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Qingfang Xie
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Meiqi Qiu
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Wanli Xiong
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Fei Liao
- Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
7
Al Mouiee D, Meijering E, Kalloniatis M, Nivison-Smith L, Williams RA, Nayagam DAX, Spencer TC, Luu CD, McGowan C, Epp SB, Shivdasani MN. Classifying Retinal Degeneration in Histological Sections Using Deep Learning. Transl Vis Sci Technol 2021; 10:9. [PMID: 34110385] [PMCID: PMC8196406] [DOI: 10.1167/tvst.10.7.9]
Abstract
Purpose Artificial intelligence (AI) techniques are increasingly being used to classify retinal diseases. In this study we investigated the ability of a convolutional neural network (CNN) to categorize histological images into different classes of retinal degeneration. Methods Images were obtained from a chemically induced feline model of monocular retinal dystrophy and split into training and testing sets. The training set was graded for the level of retinal degeneration and used to train various CNN architectures. The testing set was evaluated with the best architecture and graded by six observers. Comparisons between model and observer classifications, and interobserver variability, were measured. Finally, the effects of using fewer training images or images containing half the presentable context were investigated. Results The best model gave weighted F1 scores in the range of 85% to 90%. Cohen kappa scores reached up to 0.86, indicating high agreement between the model and observers. Interobserver variability was consistent with the model-observer variability in the model's ability to match the observers' predictions. Restricting image context reduced model performance by up to 6%, and at least one reduced training set size lowered model performance by 10% compared with the original size. Conclusions Detecting the presence and severity of up to three classes of retinal degeneration in histological data can be reliably achieved with a deep learning classifier. Translational Relevance This work lays the foundation for future AI models that could aid in the evaluation of more intricate changes occurring in retinal degeneration, particularly in other types of clinically derived image data.
Affiliation(s)
- Daniel Al Mouiee
- Graduate School of Biomedical Engineering, University of New South Wales, Kensington, NSW, Australia; School of Computer Science and Engineering, University of New South Wales, Kensington, NSW, Australia; School of Biotechnology and Biomolecular Science, University of New South Wales, Kensington, NSW, Australia
- Erik Meijering
- Graduate School of Biomedical Engineering, University of New South Wales, Kensington, NSW, Australia; School of Computer Science and Engineering, University of New South Wales, Kensington, NSW, Australia
- Michael Kalloniatis
- School of Optometry and Vision Sciences, University of New South Wales, Kensington, NSW, Australia
- Lisa Nivison-Smith
- School of Optometry and Vision Sciences, University of New South Wales, Kensington, NSW, Australia
- Richard A Williams
- Department of Pathology, University of Melbourne, Parkville, VIC, Australia
- David A X Nayagam
- Department of Pathology, University of Melbourne, Parkville, VIC, Australia; The Bionics Institute of Australia, East Melbourne, VIC, Australia
- Thomas C Spencer
- The Bionics Institute of Australia, East Melbourne, VIC, Australia; Department of Biomedical Engineering, University of Melbourne, Parkville, VIC, Australia
- Chi D Luu
- Ophthalmology, Department of Surgery, University of Melbourne, Parkville, VIC, Australia; Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, East Melbourne, VIC, Australia
- Ceara McGowan
- The Bionics Institute of Australia, East Melbourne, VIC, Australia
- Stephanie B Epp
- The Bionics Institute of Australia, East Melbourne, VIC, Australia
- Mohit N Shivdasani
- Graduate School of Biomedical Engineering, University of New South Wales, Kensington, NSW, Australia; The Bionics Institute of Australia, East Melbourne, VIC, Australia
8
Yan T, Wong PK, Qin YY. Deep learning for diagnosis of precancerous lesions in upper gastrointestinal endoscopy: A review. World J Gastroenterol 2021; 27:2531-2544. [PMID: 34092974] [PMCID: PMC8160615] [DOI: 10.3748/wjg.v27.i20.2531]
Abstract
Upper gastrointestinal (GI) cancers are a leading cause of cancer-related deaths worldwide. Early identification of precancerous lesions has been shown to minimize the incidence of GI cancers, underscoring the vital role of screening endoscopy. However, unlike GI cancers, precancerous lesions in the upper GI tract can be subtle and difficult to detect. Artificial intelligence techniques, especially deep learning algorithms with convolutional neural networks, might help endoscopists identify these precancerous lesions and reduce interobserver variability. In this review, a systematic literature search of the Web of Science, PubMed, Cochrane Library, and Embase was undertaken, with an emphasis on the deep learning-based diagnosis of precancerous lesions in the upper GI tract. The status of deep learning algorithms for upper GI precancerous lesions is systematically summarized, and the challenges and recommendations targeting this field are comprehensively analyzed for future research.
Affiliation(s)
- Tao Yan
- School of Mechanical Engineering, Hubei University of Arts and Science, Xiangyang 441053, Hubei Province, China
- Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China
- Pak Kin Wong
- Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China
- Ye-Ying Qin
- Department of Electromechanical Engineering, University of Macau, Taipa 999078, Macau, China
9
Lui TKL, Tsui VWM, Leung WK. Accuracy of artificial intelligence-assisted detection of upper GI lesions: a systematic review and meta-analysis. Gastrointest Endosc 2020; 92:821-830.e9. [PMID: 32562608] [DOI: 10.1016/j.gie.2020.06.034]
Abstract
BACKGROUND AND AIMS Artificial intelligence (AI)-assisted detection is increasingly used in upper endoscopy. We performed a meta-analysis to determine the diagnostic accuracy of AI on detection of gastric and esophageal neoplastic lesions and Helicobacter pylori (HP) status. METHODS We searched Embase, PubMed, Medline, Web of Science, and Cochrane databases for studies on AI detection of gastric or esophageal neoplastic lesions and HP status. After assessing study quality using the Quality Assessment of Diagnostic Accuracy Studies tool, a bivariate meta-analysis following a random-effects model was used to summarize the data and plot hierarchical summary receiver-operating characteristic curves. The diagnostic accuracy was determined by the area under the hierarchical summary receiver-operating characteristic curve (AUC). RESULTS Twenty-three studies including 969,318 images were included. The AUC of AI detection of neoplastic lesions in the stomach, Barrett's esophagus, and squamous esophagus and HP status were .96 (95% confidence interval [CI], .94-.99), .96 (95% CI, .93-.99), .88 (95% CI, .82-.96), and .92 (95% CI, .88-.97), respectively. AI using narrow-band imaging was superior to white-light imaging on detection of neoplastic lesions in squamous esophagus (.92 vs .83, P < .001). The performance of AI was superior to endoscopists in the detection of neoplastic lesions in the stomach (AUC, .98 vs .87; P < .001), Barrett's esophagus (AUC, .96 vs .82; P < .001), and HP status (AUC, .90 vs .82; P < .001). CONCLUSIONS AI is accurate in the detection of upper GI neoplastic lesions and HP infection status. However, most studies were based on retrospective reviews of selected images, which requires further validation in prospective trials.
Affiliation(s)
- Thomas K L Lui
- Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong
- Vivien W M Tsui
- Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong
- Wai K Leung
- Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong