1
Li S, Li Y, Zhou C, Li H, Zhao Y, Yi X, Chen C, Peng C, Wang T, Liu F, Xiao J, Shi L. Muscle fat content correlates with postoperative survival of viral-related cirrhosis patients after the TIPS: a retrospective study. Ann Med 2025; 57:2484460. PMID: 40146662; PMCID: PMC11951314; DOI: 10.1080/07853890.2025.2484460.
Abstract
PURPOSE Early prediction of the prognosis of viral-related cirrhosis patients after transjugular intrahepatic portosystemic shunt (TIPS) placement is beneficial for clinical decision-making. The aim of this study was to explore a comprehensive prognostic assessment model for evaluating the survival outcomes of patients post-TIPS. MATERIALS AND METHODS A total of 155 patients treated with TIPS were included in the study. Data were collected from electronic records. Nutritional status was evaluated on axial CT images at the L3 vertebral level. The primary endpoint was death within 1 year after TIPS. Multivariate Cox regression was performed to determine the factors associated with mortality. RESULTS Cox regression analysis revealed that a higher PMFI was associated with a higher risk of all-cause mortality after TIPS (hazard ratio [HR] 1.159, 95% confidence interval [CI] 1.063-1.263, p = 0.001). Furthermore, subgroup analyses by gender revealed that PMFI was associated with postoperative death in both male (HR 2.125, 95% CI 1.147-3.936, p = 0.017) and female patients (HR 1.070, 95% CI 1.001-1.144, p = 0.047). The area under the curve (AUC) for predicting death within 1 year was 0.807. Clinical impact curve analysis showed that PMFI had higher risk-threshold probabilities and a smaller gap between the actual and predicted curves. CONCLUSIONS In viral-related cirrhosis patients with portal hypertension, increased muscle fat content may be a potential prognostic marker associated with postoperative death after TIPS.
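The hazard ratios and Wald confidence intervals reported above follow directly from a fitted Cox log-hazard coefficient β and its standard error: HR = exp(β), 95% CI = exp(β ± 1.96·SE). A minimal sketch, where β and SE are back-solved from the reported PMFI estimate purely for illustration (they are assumptions, not values taken from the paper):

```python
import math

def hazard_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Hazard ratio and Wald 95% CI from a Cox log-hazard coefficient."""
    hr = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return hr, lo, hi

# Hypothetical coefficient roughly matching the reported PMFI effect
# (HR 1.159, 95% CI 1.063-1.263): beta = ln(HR), SE back-solved from the CI width.
beta = math.log(1.159)
se = (math.log(1.263) - math.log(1.063)) / (2 * 1.96)
hr, lo, hi = hazard_ratio_ci(beta, se)
print(round(hr, 3), round(lo, 3), round(hi, 3))
```

The round trip recovers the published triple, which is a quick consistency check one can run on any reported HR and CI.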
Affiliation(s)
- Sai Li
- Interventional Radiology Center, Department of Radiology, The Third Xiangya Hospital of Central South University, Changsha, Hunan, China
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Yong Li
- Department of Gastroenterology, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Chunhui Zhou
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Haiping Li
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Yazhuo Zhao
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Xiaoping Yi
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Changyong Chen
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Changli Peng
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Tianming Wang
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Fei Liu
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Juxiong Xiao
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Liangrong Shi
- Interventional Radiology Center, Department of Radiology, Xiangya Hospital Central South University, Changsha, Hunan, China
- Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, Hunan, China
2
Sadeghi V, Mehridehnavi A, Behdad M, Vard A, Omrani M, Sharifi M, Sanahmadi Y, Teyfouri N. Multivariate Gaussian Bayes classifier with limited data for segmentation of clean and contaminated regions in the small bowel capsule endoscopy images. PLoS One 2025; 20:e0315638. PMID: 40053533; PMCID: PMC11888149; DOI: 10.1371/journal.pone.0315638.
Abstract
A considerable number of undesirable factors in the wireless capsule endoscopy (WCE) procedure hinder proper visualization of the small bowel and increase gastroenterologists' review time. Objective quantitative assessment of different bowel preparation paradigms and saving physician review time motivated us to present a low-cost statistical model for automatic segmentation of clean and contaminated regions in WCE images. In the model construction phase, only 20 manually pixel-labeled images were used, drawn from the normal and reduced mucosal view classes of the Kvasir capsule endoscopy dataset. In addition to calculating the prior probability, two probabilistic tri-variate Gaussian distribution models (GDMs) with distinct mean vectors and covariance matrices were fitted to the concatenated RGB color pixel intensity values of clean and contaminated regions separately. Applying the Bayes rule, the membership probability of every pixel of the input test image to each of the two classes is evaluated. Robustness was evaluated over 5 trials; in each round, from a total of 2000 randomly selected images, 20 and 1980 images were used for model construction and evaluation, respectively. Our experimental results indicate that accuracy, precision, specificity, sensitivity, area under the receiver operating characteristic curve (AUROC), Dice similarity coefficient (DSC), and intersection over union (IOU) were 0.89 ± 0.07, 0.91 ± 0.07, 0.73 ± 0.20, 0.90 ± 0.12, 0.92 ± 0.06, 0.92 ± 0.05 and 0.86 ± 0.09, respectively. The presented scheme is easy to deploy for objectively assessing small bowel cleansing score, comparing different bowel preparation paradigms, and decreasing inspection time. Results on the SEE-AI project dataset and the CECleanliness database showed that the proposed scheme has good adaptability.
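The classifier described here fits one trivariate Gaussian per class to RGB pixel values and assigns each pixel by the Bayes rule (argmax of log prior plus class-conditional log likelihood). A minimal pure-Python sketch; the toy pixel samples are hypothetical, not drawn from the Kvasir data:

```python
import math

def mean_cov(pixels):
    """Per-channel mean and 3x3 covariance of RGB pixel samples."""
    n = len(pixels)
    mu = [sum(p[c] for p in pixels) / n for c in range(3)]
    cov = [[sum((p[i] - mu[i]) * (p[j] - mu[j]) for p in pixels) / (n - 1)
            for j in range(3)] for i in range(3)]
    return mu, cov

def inv3(m):
    """Inverse and determinant of a 3x3 matrix via cofactors."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj], det

def log_gaussian(x, mu, cov):
    """Log density of a trivariate Gaussian at RGB vector x."""
    inv, det = inv3(cov)
    d = [x[i] - mu[i] for i in range(3)]
    maha = sum(d[i] * inv[i][j] * d[j] for i in range(3) for j in range(3))
    return -0.5 * (maha + math.log(det) + 3 * math.log(2 * math.pi))

def classify(x, models, priors):
    """Bayes rule: argmax of log prior + class-conditional log likelihood."""
    scores = {k: math.log(priors[k]) + log_gaussian(x, *models[k]) for k in models}
    return max(scores, key=scores.get)

# Toy training pixels (hypothetical): clean mucosa reddish, contamination greenish.
clean = [(200, 80, 72), (212, 94, 80), (195, 85, 70), (205, 99, 88)]
dirty = [(120, 150, 60), (112, 162, 74), (131, 141, 66), (125, 158, 75)]
models = {"clean": mean_cov(clean), "contaminated": mean_cov(dirty)}
priors = {"clean": 0.5, "contaminated": 0.5}
print(classify((198, 88, 78), models, priors))  # near the clean cluster
```

In practice each model is fitted on thousands of labeled pixels pooled across the 20 training images, and the per-pixel decision yields a binary segmentation mask.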
Affiliation(s)
- Vahid Sadeghi
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Alireza Mehridehnavi
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Maryam Behdad
- Department of Electrical Engineering, Yazd University, Yazd, Iran
- Alireza Vard
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Mina Omrani
- Department of Mathematics and Computer Science, Amirkabir University of Technology, Tehran, Iran
- Mohsen Sharifi
- Gastroenterologist and Hepatologist, Fellowship of Endosonography, Isfahan University of Medical Sciences, Isfahan, Iran
- Yasaman Sanahmadi
- Department of Bioelectrics and Biomedical Engineering, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Niloufar Teyfouri
- Cancer Prevention Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Omid Hospital, Isfahan University of Medical Sciences, Isfahan, Iran
3
Wang YP, Jheng YC, Hou MC, Lu CL. The optimal labelling method for artificial intelligence-assisted polyp detection in colonoscopy. J Formos Med Assoc 2024:S0929-6646(24)00582-5. PMID: 39730273; DOI: 10.1016/j.jfma.2024.12.022.
Abstract
BACKGROUND The methodology for labeling colon polyps when establishing databases for machine learning is neither well described nor standardized. We aimed to identify the annotation method that generates the most accurate polyp-detection model. METHODS 3542 colonoscopy polyp images were obtained from the endoscopy database of a tertiary medical center. Two experienced endoscopists manually annotated each polyp with (1) exact outline segmentation and (2) a standard rectangular box close to the polyp margin, extended by 10%, 20%, 30%, 40% and 50% in both width and length of the standard rectangle, for AI model setup. The images were randomly divided into training and validation sets in a 4:1 ratio. The U-Net convolutional network architecture was used to develop the automatic segmentation machine learning model. A separate, unrelated verification set was established to evaluate polyp detection performance across segmentation methods. RESULTS Extending the bounding box to 20% of the polyp margin gave the best performance in accuracy (95.42%), sensitivity (94.84%) and F1-score (95.41%). The exact outline segmentation model showed excellent sensitivity (99.6%) but the worst precision (77.47%). The 20% model was the best of the 6 models (AUC = 0.971, 95% confidence interval 0.957-0.985). CONCLUSIONS Labelling methodology affects the predictive ability of AI models for polyp detection. Extending the bounding box to 20% of the polyp margin produced the best polyp detection model based on AUC. A standardized approach to colon polyp labeling is needed to compare the precision of different AI models.
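The margin-extension labelling can be sketched as growing a tight bounding box by a fixed fraction of its own width and height, clamped to the image; the exact extension convention the authors used is an assumption here. IoU is included because box overlap is the standard way to score such labels against ground truth:

```python
def expand_box(box, frac, width, height):
    """Grow a (x1, y1, x2, y2) box so its width and height increase by `frac`,
    split evenly between the two sides and clamped to the image bounds."""
    x1, y1, x2, y2 = box
    dw = (x2 - x1) * frac / 2.0
    dh = (y2 - y1) * frac / 2.0
    return (max(0.0, x1 - dw), max(0.0, y1 - dh),
            min(float(width), x2 + dw), min(float(height), y2 + dh))

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

tight = (100, 100, 200, 180)               # hypothetical tight polyp box
loose = expand_box(tight, 0.20, 640, 480)  # the "20% longer" variant
print(loose, round(iou(tight, loose), 3))
```

Because the loose box contains the tight one, their IoU is simply the ratio of areas, which quantifies how much background context each extension level adds.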
Affiliation(s)
- Yen-Po Wang
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Institute of Brain Science, National Yang Ming Chiao Tung University School of Medicine, Taiwan; Faculty of Medicine, National Yang Ming Chiao Tung University School of Medicine, Taiwan
- Ying-Chun Jheng
- Department of Medical Research, Taipei Veterans General Hospital, Taiwan; Big Data Center, Taipei Veterans General Hospital, Taiwan
- Ming-Chih Hou
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Faculty of Medicine, National Yang Ming Chiao Tung University School of Medicine, Taiwan
- Ching-Liang Lu
- Endoscopy Center for Diagnosis and Treatment, Taipei Veterans General Hospital, Taiwan; Division of Gastroenterology, Taipei Veterans General Hospital, Taiwan; Institute of Brain Science, National Yang Ming Chiao Tung University School of Medicine, Taiwan.
4
Chen J, Wang G, Zhou J, Zhang Z, Ding Y, Xia K, Xu X. AI support for colonoscopy quality control using CNN and transformer architectures. BMC Gastroenterol 2024; 24:257. PMID: 39123140; PMCID: PMC11316311; DOI: 10.1186/s12876-024-03354-0.
Abstract
BACKGROUND To construct deep learning models for colonoscopy quality control using different architectures and to explore their decision-making mechanisms. METHODS A total of 4,189 colonoscopy images were collected from two medical centers, covering different levels of bowel cleanliness, the presence of polyps, and the cecum. Using these data, eight pre-trained models based on CNN and Transformer architectures underwent transfer learning and fine-tuning. The models' performance was evaluated using metrics such as AUC, precision, and F1 score. Perceptual hash functions were employed to detect image changes, enabling real-time monitoring of colonoscopy withdrawal speed. Model interpretability was analyzed using techniques such as Grad-CAM and SHAP. Finally, the best-performing model was converted to ONNX format and deployed on device terminals. RESULTS The EfficientNetB2 model outperformed the other architectures on the validation set, achieving an accuracy of 0.992. Its precision, recall, and F1 score were 0.991, 0.989, and 0.990, respectively. On the test set, the EfficientNetB2 model achieved an average AUC of 0.996, with a precision of 0.948 and a recall of 0.952. Interpretability analysis showed the specific image regions the model used for decision-making. The model was converted to ONNX format and deployed on device terminals, achieving an average inference speed of over 60 frames per second. CONCLUSIONS The AI-assisted quality system, based on the EfficientNetB2 model, integrates four key quality control indicators for colonoscopy. This integration enables medical institutions to comprehensively manage and enhance these indicators using a single model, showcasing promising potential for clinical applications.
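Perceptual hashing for frame-to-frame change detection, used here for withdrawal-speed monitoring, can be illustrated with a simple average hash: downsample to a coarse grid of block means, threshold at the global mean, and compare bit strings by Hamming distance. The 8x8 scheme and toy frames below are assumptions for illustration, not the paper's implementation:

```python
def average_hash(gray, size=8):
    """Average hash of a grayscale image (2D list): block-mean downsample,
    then threshold each cell at the global mean to get a bit string."""
    h, w = len(gray), len(gray[0])
    bh, bw = h // size, w // size
    means = [sum(gray[y][x] for y in range(r * bh, (r + 1) * bh)
                              for x in range(c * bw, (c + 1) * bw)) / (bh * bw)
             for r in range(size) for c in range(size)]
    avg = sum(means) / len(means)
    return [1 if m > avg else 0 for m in means]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Two hypothetical 16x16 frames: identical except for a shifted bright patch.
frame1 = [[30] * 16 for _ in range(16)]
for y in range(4): frame1[y][:4] = [220] * 4
frame2 = [[30] * 16 for _ in range(16)]
for y in range(4): frame2[y][8:12] = [220] * 4
d = hamming(average_hash(frame1), average_hash(frame2))
print(d)  # nonzero: the frames differ, i.e. the scope is moving
```

A near-zero distance over consecutive frames means the view is static; the rate at which the distance accumulates is a cheap proxy for withdrawal speed.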
Affiliation(s)
- Jian Chen
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China
- Ganhong Wang
- Department of Gastroenterology, Changshu Traditional Chinese Medicine Hospital (New District Hospital), Suzhou, 215500, China
- Jingjie Zhou
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China
- Zihao Zhang
- Shanghai Haoxiong Education Technology Co., Ltd, Shanghai, 200434, China
- Yu Ding
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China
- Kaijian Xia
- Department of Information Engineering, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China
- Xiaodan Xu
- Department of Gastroenterology, Changshu Hospital Affiliated to Soochow University, Suzhou, 215500, China.
5
Kim ES, Lee KS. Artificial intelligence in colonoscopy: from detection to diagnosis. Korean J Intern Med 2024; 39:555-562. PMID: 38695105; PMCID: PMC11236815; DOI: 10.3904/kjim.2023.332.
Abstract
This study reviews the recent progress of artificial intelligence for colonoscopy from detection to diagnosis. The source of data was 27 original studies in PubMed. The search terms were "colonoscopy" (title) and "deep learning" (abstract). The eligibility criteria were: (1) the dependent variable of gastrointestinal disease; (2) the interventions of deep learning for classification, detection and/or segmentation for colonoscopy; (3) the outcomes of accuracy, sensitivity, specificity, area under the curve (AUC), precision, F1, intersection over union (IOU), Dice and/or inference frames per second (FPS); (4) the publication year of 2021 or later; and (5) the publication language of English. Based on the results of this study, different deep learning methods would be appropriate for different colonoscopy tasks, e.g., EfficientNet with neural architecture search (AUC 99.8%) for classification, You Only Look Once with the instance tracking head (F1 96.3%) for detection, and U-Net with dense-dilation-residual blocks (Dice 97.3%) for segmentation. The reported performance measures varied within 74.0-95.0% for accuracy, 60.0-93.0% for sensitivity, 60.0-100.0% for specificity, 71.0-99.8% for the AUC, 70.1-93.3% for precision, 81.0-96.3% for F1, 57.2-89.5% for the IOU, 75.1-97.3% for Dice and 66-182 for FPS. In conclusion, artificial intelligence provides an effective, non-invasive decision support system for colonoscopy from detection to diagnosis.
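All the scalar metrics surveyed in this review (accuracy, sensitivity, specificity, precision, F1, IOU, Dice) derive from the same four confusion counts, and for binary pixel masks Dice and F1 coincide. A minimal sketch with hypothetical counts:

```python
def metrics(tp, fp, fn, tn):
    """Common classification/segmentation metrics from confusion counts."""
    acc  = (tp + tn) / (tp + fp + fn + tn)
    sens = tp / (tp + fn)                 # recall / sensitivity
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    f1   = 2 * prec * sens / (prec + sens)
    iou  = tp / (tp + fp + fn)            # intersection over union (Jaccard)
    dice = 2 * tp / (2 * tp + fp + fn)    # equals F1 for binary masks
    return dict(accuracy=acc, sensitivity=sens, specificity=spec,
                precision=prec, f1=f1, iou=iou, dice=dice)

# Hypothetical confusion counts for a polyp detector on 1000 frames.
m = metrics(tp=90, fp=10, fn=10, tn=890)
print({k: round(v, 3) for k, v in m.items()})
```

Note that Dice = 2·IoU/(1+IoU), so the IOU and Dice ranges quoted above are two views of the same overlap statistic.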
Affiliation(s)
- Eun Sun Kim
- Department of Gastroenterology, Korea University Anam Hospital, Seoul, Korea
- Kwang-Sig Lee
- AI Center, Korea University Anam Hospital, Seoul, Korea
6
Wang G, Ren T. Design of sports achievement prediction system based on U-net convolutional neural network in the context of machine learning. Heliyon 2024; 10:e30055. PMID: 38778994; PMCID: PMC11109724; DOI: 10.1016/j.heliyon.2024.e30055.
Abstract
Sports play a pivotal role in national development. To accurately predict college students' sports performance and motivate them to improve their physical fitness, this study constructs a sports achievement prediction system using a U-Net convolutional neural network (CNN). Firstly, the current state of physical education teachers' instructional proficiency is investigated and analyzed to identify existing problems. Secondly, an improved U-Net-based sports achievement prediction system is proposed. This method enhances the utilization and propagation of network features by incorporating dense connections, thus mitigating vanishing gradients. Simultaneously, an improved mixed loss function is introduced to alleviate class imbalance. Finally, the effectiveness of the proposed system is validated through testing, demonstrating that the improved U-Net CNN algorithm yields superior results. Specifically, the prediction accuracy of the improved network for sports performance surpasses that of the original U-Net by 4.22% and exceeds that of DUNet by 5.22%. Compared with other existing prediction networks, the improved U-Net CNN model exhibits a superior achievement prediction ability. Consequently, the proposed system enhances teaching and learning efficiency and offers insights into applying artificial intelligence technology to smart classroom development.
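A mixed loss of the kind described, combining pixel-wise binary cross-entropy with a soft Dice term to counter class imbalance, can be sketched as follows. The equal weighting and toy values are assumptions, since the paper's exact formulation is not given in the abstract:

```python
import math

def mixed_loss(pred, target, alpha=0.5, eps=1e-7):
    """Weighted sum of mean binary cross-entropy and soft Dice loss,
    a common recipe for countering class imbalance in U-Net training."""
    n = len(pred)
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / n
    inter = sum(p * t for p, t in zip(pred, target))
    dice = 1 - (2 * inter + eps) / (sum(pred) + sum(target) + eps)
    return alpha * bce + (1 - alpha) * dice

pred   = [0.9, 0.8, 0.2, 0.1]   # predicted foreground probabilities
target = [1,   1,   0,   0]     # ground-truth labels
print(round(mixed_loss(pred, target), 4))  # → 0.1571
```

The Dice term is computed over the whole batch, so it stays informative even when foreground pixels are rare, which is exactly where plain cross-entropy degenerates.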
Affiliation(s)
- Guoliang Wang
- College of Sport, Henan Polytechnic University, Jiaozuo, Henan, 454003, China
- Tianping Ren
- College of Sport, Henan Polytechnic University, Jiaozuo, Henan, 454003, China
7
Maida M, Marasco G, Facciorusso A, Shahini E, Sinagra E, Pallio S, Ramai D, Murino A. Effectiveness and application of artificial intelligence for endoscopic screening of colorectal cancer: the future is now. Expert Rev Anticancer Ther 2023; 23:719-729. PMID: 37194308; DOI: 10.1080/14737140.2023.2215436.
Abstract
INTRODUCTION Artificial intelligence (AI) in gastrointestinal endoscopy includes systems designed to interpret medical images and increase sensitivity during examination. This may be a promising solution to human biases and may provide support during diagnostic endoscopy. AREAS COVERED This review aims to summarize and evaluate data supporting AI technologies in lower endoscopy, addressing their effectiveness, limitations, and future perspectives. EXPERT OPINION Computer-aided detection (CADe) systems have been studied with promising results, allowing for an increase in adenoma detection rate (ADR), adenomas per colonoscopy (APC), and a reduction in adenoma miss rate (AMR). This may lead to an increase in the sensitivity of endoscopic examinations and a reduction in the risk of interval colorectal cancer. In addition, computer-aided characterization (CADx) has also been implemented, aiming to distinguish adenomatous and non-adenomatous lesions through real-time assessment using advanced endoscopic imaging techniques. Moreover, computer-aided quality (CADq) systems have been developed with the aim of standardizing quality measures in colonoscopy (e.g. withdrawal time and adequacy of bowel cleansing) both to improve the quality of examinations and set a reference standard for randomized controlled trials.
Affiliation(s)
- Marcello Maida
- Gastroenterology and Endoscopy Unit, S. Elia-Raimondi Hospital, Caltanissetta, Italy
- Giovanni Marasco
- IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- Department of Medical and Surgical Sciences, University of Bologna, Bologna, Italy
- Antonio Facciorusso
- Department of Medical and Surgical Sciences, University of Foggia, Foggia, Italy
- Endrit Shahini
- Gastroenterology Unit, National Institute of Gastroenterology-IRCCS "Saverio de Bellis", Castellana Grotte, Bari, Italy
- Emanuele Sinagra
- Gastroenterology and Endoscopy Unit, Fondazione Istituto San Raffaele Giglio, Cefalu, Italy
- Socrate Pallio
- Digestive Diseases Endoscopy Unit, Policlinico G. Martino Hospital, University of Messina, Messina, Italy
- Daryl Ramai
- Gastroenterology & Hepatology, University of Utah Health, Salt Lake City, UT, USA
- Alberto Murino
- Royal Free Unit for Endoscopy, The Royal Free Hospital and University College London Institute for Liver and Digestive Health, Hampstead, London, UK
- Department of Gastroenterology, Cleveland Clinic London, London, UK
8
Galati JS, Duve RJ, O'Mara M, Gross SA. Artificial intelligence in gastroenterology: A narrative review. Artif Intell Gastroenterol 2022; 3:117-141. DOI: 10.35712/aig.v3.i5.117.
Abstract
Artificial intelligence (AI) is a complex concept, broadly defined in medicine as the development of computer systems to perform tasks that require human intelligence. It has the capacity to revolutionize medicine by increasing efficiency, expediting data and image analysis, and identifying patterns, trends and associations in large datasets. Within gastroenterology, recent research efforts have focused on using AI in esophagogastroduodenoscopy, wireless capsule endoscopy (WCE) and colonoscopy to assist in diagnosis, disease monitoring, lesion detection and therapeutic intervention. The main objective of this narrative review is to provide a comprehensive overview of the research being performed within gastroenterology on AI in esophagogastroduodenoscopy, WCE and colonoscopy.
Affiliation(s)
- Jonathan S Galati
- Department of Medicine, NYU Langone Health, New York, NY 10016, United States
- Robert J Duve
- Department of Internal Medicine, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, Buffalo, NY 14203, United States
- Matthew O'Mara
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
- Seth A Gross
- Division of Gastroenterology, NYU Langone Health, New York, NY 10016, United States
9
An Intelligent Tongue Diagnosis System via Deep Learning on the Android Platform. Diagnostics (Basel) 2022; 12:2451. PMID: 36292140; PMCID: PMC9600321; DOI: 10.3390/diagnostics12102451.
Abstract
To quickly and accurately identify the pathological features of the tongue, we developed an intelligent tongue diagnosis system that uses deep learning on a mobile terminal, together with an efficient and accurate tongue image processing algorithm framework to infer the tongue category. First, a software system integrating registration, login, account management, tongue image recognition, and doctor-patient dialogue was developed on the Android platform. Then, deep learning models based on official benchmark models were trained using tongue image datasets. The tongue diagnosis algorithm framework includes the YOLOv5s6, U-Net, and MobileNetV3 networks, employed for tongue recognition, tongue region segmentation, and tongue feature classification (tooth marks, spots, and fissures), respectively. The experimental results demonstrate that the performance of the tongue diagnosis model was satisfactory, with final classification accuracies for tooth marks, spots, and fissures of 93.33%, 89.60%, and 97.67%, respectively. The construction of this system has reference value for making tongue diagnosis objective and intelligent.
10
Objective Methods of 5-Aminolevulinic Acid-Based Endoscopic Photodynamic Diagnosis Using Artificial Intelligence for Identification of Gastric Tumors. J Clin Med 2022; 11:3030. PMID: 35683417; PMCID: PMC9181250; DOI: 10.3390/jcm11113030.
Abstract
Positive diagnoses of gastric tumors from photodynamic diagnosis (PDD) images after the administration of 5-aminolevulinic acid are subjectively identified by expert endoscopists. Objective methods of tumor identification are needed to reduce potential misidentifications. We developed two methods to identify gastric tumors from PDD images. Method one applied a multi-layer neural network to segmented regions of the PDD endoscopic image to determine the region in LAB color space attributable to tumors. Method two aimed to diagnose tumors and determine the regions of the PDD endoscopic image attributable to tumors using a convolutional neural network. The efficiencies of diagnosing tumors were 77.8% (7/9) and 93.3% (14/15) for methods one and two, respectively. The efficiencies of determining the tumor region, defined as the ratio of the area, were 35.7% (0.0-78.0) and 48.5% (3.0-89.1) for methods one and two, respectively. False-positive rates, defined as the ratio of the area, were 0.3% (0.0-2.0) and 3.8% (0.0-17.4) for methods one and two, respectively. Objective methods of determining the tumor region in 5-aminolevulinic acid-based endoscopic PDD were developed by identifying regions in LAB color space attributable to tumors or by applying a convolutional neural network.
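Mapping pixels into LAB color space, as in method one, uses the standard sRGB → XYZ → L*a*b* conversion under a D65 white point. A minimal sketch of that conversion alone (the thresholding of tumor regions in LAB space is the paper's contribution and is not reproduced here):

```python
def rgb_to_lab(r, g, b):
    """sRGB (0-255) to CIE L*a*b* under the D65 white point."""
    def lin(u):                       # undo the sRGB gamma curve
        u /= 255.0
        return u / 12.92 if u <= 0.04045 else ((u + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB matrix), normalized by the D65 white point
    x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl) / 1.00000
    z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883
    def f(t):                         # piecewise cube-root compression
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    L = 116 * f(y) - 16
    a = 500 * (f(x) - f(y))
    bb = 200 * (f(y) - f(z))
    return L, a, bb

L, a, b = rgb_to_lab(128, 128, 128)   # neutral gray: a* and b* near zero
print(round(L, 1), round(a, 2), round(b, 2))
```

Separating lightness (L*) from chroma (a*, b*) is what makes LAB attractive for PDD: protoporphyrin fluorescence shifts chroma, largely independently of illumination intensity.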