1. Kang AJ, Rodrigues T, Patel RV, Keswani RN. Impact of Artificial Intelligence on Gastroenterology Trainee Education. Gastrointest Endosc Clin N Am 2025;35:457-467. PMID: 40021241. DOI: 10.1016/j.giec.2024.12.008.
Abstract
Artificial intelligence (AI) is transforming gastroenterology, particularly in endoscopy, which has a direct impact on trainees and their education. AI can serve as a valuable resource, providing real-time feedback and aiding in tasks like polyp detection and lesion differentiation, which are challenging for trainees. However, its implementation raises concerns about cognitive overload, overreliance, and even access disparities, which could affect training outcomes. Beyond endoscopy, AI shows promise in clinical management and interpreting diagnostic studies such as motility testing. Thoughtful adoption of AI can optimize training and prepare future trainees for the modern healthcare landscape.
Affiliation(s)
- Anthony J Kang, Terrance Rodrigues, Ronak V Patel, Rajesh N Keswani: Division of Gastroenterology & Hepatology, Northwestern Feinberg School of Medicine, Chicago, IL, USA
2. Jeong J, Kim S, Pan L, Hwang D, Kim D, Choi J, Kwon Y, Yi P, Jeong J, Yoo SJ. Reducing the workload of medical diagnosis through artificial intelligence: A narrative review. Medicine (Baltimore) 2025;104:e41470. PMID: 39928829. PMCID: PMC11813001. DOI: 10.1097/md.0000000000041470.
Abstract
Artificial intelligence (AI) has revolutionized medical diagnostics by enhancing efficiency, improving accuracy, and reducing variability. By alleviating the workload of medical staff, AI addresses challenges such as increasing diagnostic demands, workforce shortages, and reliance on subjective interpretation. This review examines the role of AI in reducing diagnostic workload and enhancing efficiency across medical fields from January 2019 to February 2024, identifying limitations and areas for improvement. A comprehensive PubMed search using the keywords "artificial intelligence" or "AI," "efficiency" or "workload," and "patient" or "clinical" identified 2587 articles, of which 51 were reviewed. These studies analyzed the impact of AI on radiology, pathology, and other specialties, focusing on efficiency, accuracy, and workload reduction. The final 51 articles were categorized into 4 groups based on diagnostic efficiency, where category A included studies with supporting material provided, category B consisted of those with reduced data volume, category C focused on independent AI diagnosis, and category D included studies that reported data reduction without changes in diagnostic time. In radiology and pathology, which require skilled techniques and large-scale data processing, AI improved accuracy and reduced diagnostic time by approximately 90% or more. Radiology, in particular, showed a high proportion of category C studies, as digitized data and standardized protocols facilitated independent AI diagnoses. AI has significant potential to optimize workload management, improve diagnostic efficiency, and enhance accuracy. However, challenges remain in standardizing applications and addressing ethical concerns. Integrating AI into healthcare workforce planning is essential for fostering collaboration between technology and clinicians, ultimately improving patient care.
Affiliation(s)
- Jinseo Jeong, Sohyun Kim, Lian Pan, Daye Hwang, Dongseop Kim, Jeongwon Choi, Yeongkyo Kwon, Pyeongro Yi, Jisoo Jeong: College of Medicine, Dongguk University, Gyeongju-si, Republic of Korea
- Seok-Ju Yoo: Department of Preventive Medicine, College of Medicine, Dongguk University, Gyeongju-si, Republic of Korea
3. Chen M, Wang Y, Wang Q, Shi J, Wang H, Ye Z, Xue P, Qiao Y. Impact of human and artificial intelligence collaboration on workload reduction in medical image interpretation. NPJ Digit Med 2024;7:349. PMID: 39616244. PMCID: PMC11608314. DOI: 10.1038/s41746-024-01328-w.
Abstract
Clinicians face increasing workloads in medical imaging interpretation, and artificial intelligence (AI) offers potential relief. This meta-analysis evaluates the impact of human-AI collaboration on image interpretation workload. Four databases were searched for studies comparing reading time or quantity for image-based disease detection before and after AI integration. The Quality Assessment of Studies of Diagnostic Accuracy was modified to assess risk of bias. Workload reduction and relative diagnostic performance were pooled using a random-effects model. Thirty-six studies were included. Concurrent AI assistance reduced reading time by 27.20% (95% confidence interval, 18.22%-36.18%). Reading quantity decreased by 44.47% (40.68%-48.26%) and 61.72% (47.92%-75.52%) when AI served as the second reader and as a pre-screening tool, respectively. Overall relative sensitivity and specificity were 1.12 (1.09, 1.14) and 1.00 (1.00, 1.01), respectively. Despite these promising results, caution is warranted due to significant heterogeneity and uneven study quality.
Affiliation(s)
- Mingyang Chen, Yuting Wang, Qiankun Wang, Jingyi Shi, Huike Wang, Zichen Ye, Peng Xue, Youlin Qiao: School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
4. George AA, Tan JL, Kovoor JG, Lee A, Stretton B, Gupta AK, Bacchi S, George B, Singh R. Artificial intelligence in capsule endoscopy: development status and future expectations. Mini-invasive Surgery 2024. DOI: 10.20517/2574-1225.2023.102.
Abstract
In this review, we aim to illustrate the state-of-the-art artificial intelligence (AI) applications in the field of capsule endoscopy. AI has made significant strides in gastrointestinal imaging, particularly in capsule endoscopy - a non-invasive procedure for capturing gastrointestinal tract images. However, manual analysis of capsule endoscopy videos is labour-intensive and error-prone, prompting the development of automated computational algorithms and AI models. While currently serving as a supplementary observer, AI has the capacity to evolve into an autonomous, integrated reading system, potentially reducing capsule reading time significantly while surpassing human accuracy. We searched Embase, PubMed, Medline, and Cochrane databases from inception to 6 July 2023 for studies investigating the use of AI for capsule endoscopy and screened retrieved records for eligibility. Quantitative and qualitative data were extracted and synthesised to identify current themes. The search collected 824 articles; 291 duplicates and 31 abstracts were removed. After a double-screening process and full-text review, 106 publications were included in the review. Themes pertaining to AI for capsule endoscopy included active gastrointestinal bleeding, erosions and ulcers, vascular lesions and angiodysplasias, polyps and tumours, inflammatory bowel disease, coeliac disease, hookworms, bowel prep assessment, and multiple lesion detection. This review provides current insights into the impact of AI on capsule endoscopy as of 2023. AI holds the potential for faster and more precise readings and the prospect of autonomous image analysis. However, careful consideration of diagnostic requirements and potential challenges is crucial. The untapped potential within vision transformer technology hints at further evolution and even greater patient benefit.
5. Jiang B, Dorosan M, Leong JWH, Ong MEH, Lam SSW, Ang TL. Development and validation of a deep learning system for detection of small bowel pathologies in capsule endoscopy: a pilot study in a Singapore institution. Singapore Med J 2024;65:133-140. PMID: 38527297. PMCID: PMC11060635. DOI: 10.4103/singaporemedj.smj-2023-187.
Abstract
INTRODUCTION Deep learning models can assess the quality of images and discriminate among abnormalities in small bowel capsule endoscopy (CE), reducing fatigue and the time needed for diagnosis. They serve as a decision support system, partially automating the diagnosis process by providing probability predictions for abnormalities. METHODS We demonstrated the use of deep learning models in CE image analysis, specifically by piloting a bowel preparation model (BPM) and an abnormality detection model (ADM) to determine frame-level view quality and the presence of abnormal findings, respectively. We used convolutional neural network-based models pretrained on large-scale open-domain data to extract spatial features of CE images that were then used in a dense feed-forward neural network classifier. We then combined the open-source Kvasir-Capsule dataset (n = 43) and locally collected CE data (n = 29). RESULTS Model performance was compared using averaged five-fold and two-fold cross-validation for BPMs and ADMs, respectively. The best BPM model based on a pre-trained ResNet50 architecture had an area under the receiver operating characteristic and precision-recall curves of 0.969±0.008 and 0.843±0.041, respectively. The best ADM model, also based on ResNet50, had top-1 and top-2 accuracies of 84.03±0.051 and 94.78±0.028, respectively. The models could process approximately 200-250 images per second and showed good discrimination on time-critical abnormalities such as bleeding. CONCLUSION Our pilot models showed the potential to improve time to diagnosis in CE workflows. To our knowledge, our approach is unique to the Singapore context. The value of our work can be further evaluated in a pragmatic manner that is sensitive to existing clinician workflow and resource constraints.
Affiliation(s)
- Bochao Jiang: Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Michael Dorosan: Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Justin Wen Hao Leong: Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
- Marcus Eng Hock Ong: Health Services and Systems Research, Duke-NUS Medical School, Singapore; Department of Emergency Medicine, Singapore General Hospital, Singapore
- Sean Shao Wei Lam: Health Services Research Centre, Singapore Health Services Pte Ltd, Singapore
- Tiing Leong Ang: Department of Gastroenterology and Hepatology, Changi General Hospital, Singapore
6. Oh DJ, Hwang Y, Kim SH, Nam JH, Jung MK, Lim YJ. Reading of small bowel capsule endoscopy after frame reduction using an artificial intelligence algorithm. BMC Gastroenterol 2024;24:80. PMID: 38388860. PMCID: PMC10885475. DOI: 10.1186/s12876-024-03156-4.
Abstract
OBJECTIVES Poorly visualized images that appear during small bowel capsule endoscopy (SBCE) can confuse the interpretation of small bowel lesions and increase the physician's workload. Using a validated artificial intelligence (AI) algorithm that can evaluate mucosal visualization, we aimed to assess whether SBCE reading after the removal of poorly visualized images could affect the diagnosis of SBCE. METHODS A study was conducted to analyze 90 SBCE cases in which a small bowel examination was completed. Two experienced endoscopists alternately performed two types of readings. They used the AI algorithm to remove poorly visualized images for the frame-reduction reading (AI user group) and conducted whole-frame reading without AI (AI non-user group) for the same patient. A poorly visualized image was defined as an image with < 50% mucosal visualization. The study outcomes were diagnostic concordance and reading time between the two groups. The SBCE diagnosis was classified as Crohn's disease, bleeding, polyp, angiodysplasia, or nonspecific finding. RESULTS The final SBCE diagnoses between the two groups showed statistically significant diagnostic concordance (κ = 0.954, p < 0.001). The mean number of lesion images was 3008.5 ± 9964.9 in the AI non-user group and 1401.7 ± 4811.3 in the AI user group. There were no cases in which lesions were completely removed. Compared with the AI non-user group (120.9 min), the reading time was reduced by 35.6% in the AI user group (77.9 min). CONCLUSIONS SBCE reading after reducing poorly visualized frames using the AI algorithm did not have a negative effect on the final diagnosis. An SBCE reading method that integrates frame reduction with mucosal visualization evaluation should help improve AI-assisted SBCE interpretation.
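The frame-reduction step this abstract describes (discarding frames with under 50% mucosal visualization before reading) reduces to a simple filter once a visualization model is available. A minimal sketch, assuming a hypothetical `visualization_score` function in place of the authors' validated model:

```python
def reduce_frames(frames, visualization_score, threshold=0.5):
    """Keep only frames whose predicted mucosal visualization is adequate.

    A frame is considered poorly visualized (and removed) when the model's
    predicted fraction of visible mucosa falls below the threshold (50% here,
    matching the definition used in the study).
    """
    return [f for f in frames if visualization_score(f) >= threshold]


# Toy usage: each "frame" carries a precomputed score in place of a real model.
frames = [("frame_001", 0.92), ("frame_002", 0.31), ("frame_003", 0.74)]
kept = reduce_frames(frames, visualization_score=lambda f: f[1])
# frame_002 (31% visualization) is dropped; the other two frames remain
```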
Affiliation(s)
- Dong Jun Oh: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, 27 Dongguk-ro, Ilsandong-gu, Goyang, 10326, Republic of Korea
- Youngbae Hwang: Department of Electronics Engineering, Chungbuk National University, Cheongju, Republic of Korea
- Sang Hoon Kim: Department of Internal Medicine, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gwangmyeong, Republic of Korea
- Ji Hyung Nam: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, 27 Dongguk-ro, Ilsandong-gu, Goyang, 10326, Republic of Korea
- Min Kyu Jung: Division of Gastroenterology and Hepatology, Department of Internal Medicine, Kyungpook National University Hospital, Daegu, Republic of Korea
- Yun Jeong Lim: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, 27 Dongguk-ro, Ilsandong-gu, Goyang, 10326, Republic of Korea
7. Bordbar M, Helfroush MS, Danyali H, Ejtehadi F. Wireless capsule endoscopy multiclass classification using three-dimensional deep convolutional neural network model. Biomed Eng Online 2023;22:124. PMID: 38098015. PMCID: PMC10722702. DOI: 10.1186/s12938-023-01186-9.
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is a patient-friendly and non-invasive technology that scans the whole gastrointestinal tract, including difficult-to-access regions like the small bowel. A major drawback of this technology is that the visual inspection of the large number of video frames produced during each examination makes the physician's diagnostic process tedious and prone to error. Several computer-aided diagnosis (CAD) systems, such as deep network models, have been developed for the automatic recognition of abnormalities in WCE frames. Nevertheless, most of these studies have focused only on spatial information within individual WCE frames, missing the crucial temporal information in consecutive frames. METHODS In this article, an automatic multiclass classification system based on a three-dimensional deep convolutional neural network (3D-CNN) is proposed, which utilizes spatiotemporal information to facilitate the WCE diagnosis process. The 3D-CNN model is fed with a series of sequential WCE frames, in contrast to the two-dimensional (2D) model, which treats frames as independent. Moreover, the proposed 3D deep model is compared with several pre-trained networks. The proposed models are trained and evaluated with 29 subject WCE videos (14,691 frames before augmentation). The performance advantages of 3D-CNN over 2D-CNN and pre-trained networks are verified in terms of sensitivity, specificity, and accuracy. RESULTS The 3D-CNN outperforms the 2D technique in all evaluation metrics (sensitivity: 98.92 vs. 98.05, specificity: 99.50 vs. 86.94, accuracy: 99.20 vs. 92.60). CONCLUSION A novel 3D-CNN model for lesion detection in WCE frames is proposed in this study. The results indicate the superior performance of the 3D-CNN over the 2D-CNN and some well-known pre-trained classifier networks. The proposed 3D-CNN model uses the rich temporal information in adjacent frames, as well as spatial data, to develop an accurate and efficient model.
Affiliation(s)
- Mehrdokht Bordbar: Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Habibollah Danyali: Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Fardad Ejtehadi: Department of Internal Medicine, Gastroenterohepatology Research Center, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
8. Chung J, Oh DJ, Park J, Kim SH, Lim YJ. Automatic Classification of GI Organs in Wireless Capsule Endoscopy Using a No-Code Platform-Based Deep Learning Model. Diagnostics (Basel) 2023;13:1389. PMID: 37189489. DOI: 10.3390/diagnostics13081389.
Abstract
The first step in reading a capsule endoscopy (CE) is determining the gastrointestinal (GI) organ. Because CE produces too many inappropriate and repetitive images, automatic organ classification cannot be directly applied to CE videos. In this study, we developed a deep learning algorithm to classify GI organs (the esophagus, stomach, small bowel, and colon) using a no-code platform, applied it to CE videos, and proposed a novel method to visualize the transitional area of each GI organ. We used training data (37,307 images from 24 CE videos) and test data (39,781 images from 30 CE videos) for model development. This model was validated using 100 CE videos that included "normal", "blood", "inflamed", "vascular", and "polypoid" lesions. Our model achieved an overall accuracy of 0.98, precision of 0.89, recall of 0.97, and F1 score of 0.92. When we validated this model relative to the 100 CE videos, it produced average accuracies for the esophagus, stomach, small bowel, and colon of 0.98, 0.96, 0.87, and 0.87, respectively. Increasing the AI score's cut-off improved most performance metrics in each organ (p < 0.05). To locate a transitional area, we visualized the predicted results over time, and setting the cut-off of the AI score to 99.9% resulted in a better intuitive presentation than the baseline. In conclusion, the GI organ classification AI model demonstrated high accuracy on CE videos. The transitional area could be more easily located by adjusting the cut-off of the AI score and visualization of its result over time.
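The transition-localization idea above (raise the AI score cut-off so that only confidently classified frames remain, then look for organ changes over time) can be sketched as follows; the function names and data layout are illustrative assumptions, not the study's code:

```python
def confident_labels(predictions, cutoff=0.999):
    """Keep (frame_index, organ) pairs whose AI score meets the cut-off.

    Raising the cut-off discards ambiguous frames, so the surviving organ
    sequence is cleaner and the transitional areas stand out.
    """
    return [(i, organ) for i, (organ, score) in enumerate(predictions)
            if score >= cutoff]


def transitions(labeled):
    """Frame indices at which the confidently predicted organ changes."""
    return [idx for (_, prev_organ), (idx, organ) in zip(labeled, labeled[1:])
            if organ != prev_organ]


# Toy per-frame (organ, AI score) sequence; frame 1 falls below the cut-off.
preds = [("stomach", 0.9999), ("stomach", 0.95),
         ("small_bowel", 0.9995), ("small_bowel", 0.9999)]
confident = confident_labels(preds)
boundaries = transitions(confident)  # first confident small-bowel frame
```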
Affiliation(s)
- Joowon Chung: Department of Internal Medicine, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Dong Jun Oh: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
- Junseok Park: Department of Internal Medicine, Digestive Disease Center, Institute for Digestive Research, Soonchunhyang University College of Medicine, Seoul 04401, Republic of Korea
- Su Hwan Kim: Department of Internal Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul 07061, Republic of Korea
- Yun Jeong Lim: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
9. Son G, Eo T, An J, Oh DJ, Shin Y, Rha H, Kim YJ, Lim YJ, Hwang D. Small Bowel Detection for Wireless Capsule Endoscopy Using Convolutional Neural Networks with Temporal Filtering. Diagnostics (Basel) 2022;12:1858. PMID: 36010210. PMCID: PMC9406835. DOI: 10.3390/diagnostics12081858.
Abstract
By automatically classifying the stomach, small bowel, and colon, the reading time of wireless capsule endoscopy (WCE) can be reduced. In addition, localizing the small bowel is an essential first preprocessing step for applying automated deep learning-based small bowel lesion detection algorithms. The purpose of the study was to develop an automated small bowel detection method from long untrimmed videos captured from WCE; through this, the stomach and colon can also be distinguished. The proposed method is based on a convolutional neural network (CNN) with temporal filtering on the predicted probabilities from the CNN. For the CNN, we use a ResNet50 model to classify three organs: stomach, small bowel, and colon. A hybrid temporal filter consisting of a Savitzky–Golay filter and a median filter is applied to the temporal probabilities for the "small bowel" class. After filtering, the small bowel and the other two organs are differentiated by thresholding. The study was conducted on a dataset of 200 patients (100 normal and 100 abnormal WCE cases), which was divided into a training set of 140 cases, a validation set of 20 cases, and a test set of 40 cases. For the test set of 40 patients (20 normal and 20 abnormal WCE cases), the proposed method showed an accuracy of 99.8% in binary classification for the small bowel. Transition time errors were only 38.8 ± 25.8 seconds for the transition between stomach and small bowel and 32.0 ± 19.1 seconds for the transition between small bowel and colon, compared to the ground-truth organ transition points marked by two experienced gastroenterologists.
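The hybrid temporal filter described above, a Savitzky–Golay filter followed by a median filter on the per-frame "small bowel" probabilities with a final threshold, can be sketched with SciPy. The window, polynomial order, and kernel sizes below are illustrative assumptions rather than the paper's tuned values:

```python
import numpy as np
from scipy.signal import medfilt, savgol_filter


def small_bowel_mask(probs, window=9, polyorder=2, kernel=5, threshold=0.5):
    """Smooth per-frame small-bowel probabilities, then threshold.

    The Savitzky-Golay filter removes high-frequency jitter in the CNN
    output, the median filter suppresses isolated misclassified frames,
    and thresholding yields a binary small-bowel mask per frame.
    """
    probs = np.asarray(probs, dtype=float)
    smoothed = savgol_filter(probs, window_length=window, polyorder=polyorder)
    filtered = medfilt(smoothed, kernel_size=kernel)
    return filtered >= threshold


# Toy probabilities: stomach (low), small bowel (high), colon (low),
# plus one spurious spike that the temporal filtering suppresses.
probs = [0.1] * 20 + [0.9] * 20 + [0.1] * 20
probs[5] = 0.95  # isolated misclassified frame
mask = small_bowel_mask(probs)
```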
Affiliation(s)
- Geonhui Son, Taejoon Eo, Jiwoong An, Yejee Shin, Hyenogseop Rha: School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea
- Dong Jun Oh: Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Korea
- You Jin Kim: IntroMedic, Capsule Endoscopy Medical Device Manufacturer, Seoul 08375, Korea
- Yun Jeong Lim (corresponding author): Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Korea
- Dosik Hwang (corresponding author): School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea; Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Korea; Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul 03722, Korea; Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul 03722, Korea
10. Hosoe N, Horie T, Tojo A, Sakurai H, Hayashi Y, Limpias Kamiya KJL, Sujino T, Takabayashi K, Ogata H, Kanai T. Development of a Deep-Learning Algorithm for Small Bowel-Lesion Detection and a Study of the Improvement in the False-Positive Rate. J Clin Med 2022;11:3682. PMID: 35806969. PMCID: PMC9267395. DOI: 10.3390/jcm11133682.
Abstract
Deep learning has recently been gaining attention as a promising technology to improve the identification of lesions, and deep-learning algorithms for lesion detection have been actively developed in small-bowel capsule endoscopy (SBCE). We developed a detection algorithm for abnormal findings by deep learning (a convolutional neural network) on the SBCE imaging data of 30 cases with abnormal findings. To enable the detection of a wide variety of abnormal findings, the training data were balanced to include all major findings identified in SBCE (bleeding, angiodysplasia, ulceration, and neoplastic lesions). To reduce the false-positive rate, "findings that may be responsible for hemorrhage" and "findings that may require therapeutic intervention" were extracted from the images of abnormal findings and added to the training dataset. For the performance evaluation, the sensitivity was calculated using 271 detectable findings in 35 cases, and the specificity was calculated using 68,494 images of non-abnormal findings. The sensitivity and specificity were 93.4% and 97.8%, respectively. The average number of images detected by the algorithm as having abnormal findings was 7514. We developed an image-reading support system using deep learning for SBCE and obtained good detection performance.
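As a check on the reported figures: sensitivity here is the fraction of the 271 detectable findings the algorithm flagged, and specificity the fraction of the 68,494 non-abnormal images it correctly left unflagged. A minimal worked sketch (the counts of 253 detected findings and 66,987 true negatives are back-calculated from the reported percentages and are assumptions):

```python
def sensitivity(true_positives, total_findings):
    """Fraction of real findings the algorithm detected."""
    return true_positives / total_findings


def specificity(true_negatives, total_negative_images):
    """Fraction of non-abnormal images correctly left unflagged."""
    return true_negatives / total_negative_images


# 253 of 271 detectable findings reproduces the reported 93.4% sensitivity;
# 66,987 of 68,494 non-abnormal images reproduces the 97.8% specificity.
sens = sensitivity(253, 271)
spec = specificity(66987, 68494)
```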
Affiliation(s)
- Naoki Hosoe, Tomohisa Sujino, Kaoru Takabayashi, Haruhiko Ogata: Center for Diagnostic and Therapeutic Endoscopy, Keio University School of Medicine, 35 Shinanomachi, Shinjuku, Tokyo 160-8582, Japan
- Tomofumi Horie, Anna Tojo, Hinako Sakurai, Yukie Hayashi, Kenji Jose-Luis Limpias Kamiya, Takanori Kanai: Division of Gastroenterology and Hepatology, Department of Internal Medicine, Keio University School of Medicine, Tokyo 160-8582, Japan
11. Higuchi N, Hiraga H, Sasaki Y, Hiraga N, Igarashi S, Hasui K, Ogasawara K, Maeda T, Murai Y, Tatsuta T, Kikuchi H, Chinda D, Mikami T, Matsuzaka M, Sakuraba H, Fukuda S. Automated evaluation of colon capsule endoscopic severity of ulcerative colitis using ResNet50. PLoS One 2022;17:e0269728. PMID: 35687553. PMCID: PMC9187078. DOI: 10.1371/journal.pone.0269728.
Abstract
Capsule endoscopy has been widely used as a non-invasive diagnostic tool for small or large intestinal lesions. In recent years, automated lesion detection systems using machine learning have been devised. This study aimed to develop an automated system for evaluating capsule endoscopic severity in patients with ulcerative colitis along the entire length of the colon using ResNet50. Capsule endoscopy videos from patients with ulcerative colitis were collected prospectively. Each single-examination video file was partitioned into four segments: the cecum and ascending colon, transverse colon, descending and sigmoid colon, and rectum. Fifty still pictures (576 × 576 pixels) were extracted from each partitioned video. Patches (128 × 128 pixels) were trimmed from each still picture at 32-pixel strides. A total of 739,021 patch images were manually classified into six categories: 0) Mayo endoscopic subscore (MES) 0, 1) MES 1, 2) MES 2, 3) MES 3, 4) inadequate quality for evaluation, and 5) ileal mucosa. ResNet50, a deep learning framework, was trained using 483,644 datasets and validated using 255,377 independent datasets. In total, 31 capsule endoscopy videos from 22 patients were collected. The accuracy rates for the training and validation datasets were 0.992 and 0.973, respectively. An automated evaluation system for the capsule endoscopic severity of ulcerative colitis was developed. This could be a useful tool for assessing topographic disease activity, thus decreasing the burden of image interpretation on endoscopists.
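The patch-trimming scheme in this abstract (128 × 128 pixel patches taken from a 576 × 576 still at 32-pixel strides) can be reproduced directly, and the geometry implies 15 stride positions per axis, i.e. 225 patches per still image:

```python
def patch_origins(image_size=576, patch_size=128, stride=32):
    """Top-left corners of every patch trimmed from a square still image."""
    positions = range(0, image_size - patch_size + 1, stride)
    return [(row, col) for row in positions for col in positions]


origins = patch_origins()
# 15 stride positions per axis -> 225 patches per 576 x 576 still
```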
Affiliation(s)
- Naoki Higuchi, Hiroto Hiraga, Noriko Hiraga, Shohei Igarashi, Keisuke Hasui, Kohei Ogasawara, Takato Maeda, Yasuhisa Murai, Tetsuya Tatsuta, Hidezumi Kikuchi, Daisuke Chinda, Tatsuya Mikami, Hirotake Sakuraba, Shinsaku Fukuda: Department of Gastroenterology and Hematology, Hirosaki University Graduate School of Medicine, Hirosaki, Japan
- Yoshihiro Sasaki, Masashi Matsuzaka: Department of Medical Informatics, Hirosaki University Hospital, Hirosaki, Japan
|
12
|
Christou CD, Tsoulfas G. Challenges and opportunities in the application of artificial intelligence in gastroenterology and hepatology. World J Gastroenterol 2021; 27:6191-6223. [PMID: 34712027 PMCID: PMC8515803 DOI: 10.3748/wjg.v27.i37.6191] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Received: 01/31/2021] [Revised: 05/06/2021] [Accepted: 08/31/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is an umbrella term for a cluster of interrelated fields. Machine learning (ML) refers to models that learn from past data to predict future data. Medicine, and particularly gastroenterology and hepatology, are data-rich fields with extensive data repositories, and therefore fertile ground for AI/ML-based software applications. In this study, we comprehensively review the current applications of AI/ML-based models in these fields and the opportunities that arise from their application. Specifically, we cover the applications of AI/ML-based models in the prevention, diagnosis, management, and prognosis of gastrointestinal bleeding, inflammatory bowel diseases, gastrointestinal premalignant and malignant lesions, other nonmalignant gastrointestinal lesions and diseases, hepatitis B and C infection, chronic liver diseases, hepatocellular carcinoma, cholangiocarcinoma, and primary sclerosing cholangitis. At the same time, we identify the major challenges that restrain the widespread use of these models in healthcare and explore ways to overcome them. Notably, we elaborate on concerns regarding intrinsic biases, data protection, cybersecurity, intellectual property, liability, ethical challenges, and transparency. Even at a slower pace than anticipated, AI is infiltrating the healthcare industry. AI in healthcare will become a reality, and every physician will have to engage with it by necessity.
Affiliation(s)
- Chrysanthos D Christou, Georgios Tsoulfas: Organ Transplant Unit, Hippokration General Hospital, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
|
13
|
Artificial Intelligence in Capsule Endoscopy: A Practical Guide to Its Past and Future Challenges. Diagnostics (Basel) 2021; 11:1722. [PMID: 34574063 PMCID: PMC8469774 DOI: 10.3390/diagnostics11091722] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Received: 08/16/2021] [Revised: 09/15/2021] [Accepted: 09/17/2021] [Indexed: 12/20/2022] Open
Abstract
Artificial intelligence (AI) has revolutionized the diagnostic process for various diseases. Since the manual reading of capsule endoscopy videos is a time-intensive, error-prone process, computerized algorithms have been introduced to automate it. Over the past decade, the evolution of convolutional neural networks (CNNs) has enabled AI to detect multiple lesions simultaneously with increasing accuracy and sensitivity. Difficulty in validating CNN performance and the unique characteristics of capsule endoscopy images nevertheless keep computer-aided reading systems for capsule endoscopy at a preclinical level. Although AI can currently serve as an auxiliary second observer in capsule endoscopy, it is expected that in the near future it will effectively reduce reading time and ultimately become an independent, integrated reading system.
|
14
|
A Current and Newly Proposed Artificial Intelligence Algorithm for Reading Small Bowel Capsule Endoscopy. Diagnostics (Basel) 2021; 11:1183. [PMID: 34209948 PMCID: PMC8306692 DOI: 10.3390/diagnostics11071183] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 05/28/2021] [Revised: 06/26/2021] [Accepted: 06/28/2021] [Indexed: 12/09/2022] Open
Abstract
Small bowel capsule endoscopy (SBCE) is one of the most useful methods for diagnosing small bowel mucosal lesions. However, interpreting the capsule images takes a long time. To address this problem, artificial intelligence (AI) algorithms for SBCE reading are being actively studied. In this article, we analyze several studies that applied AI algorithms to SBCE reading, including automatic lesion detection, automatic classification of bowel cleanliness, and automatic compartmentalization of the small bowel. Beyond automatic lesion detection, new directions for AI algorithms, such as shorter reading times and improved lesion-detection accuracy, should also be pursued. It is therefore necessary to develop an integrated AI algorithm that combines algorithms with various functions for use in clinical practice.
|
15
|
Lu YF, Lyu B. Current situation and prospect of artificial intelligence application in endoscopic diagnosis of Helicobacter pylori infection. Artif Intell Gastrointest Endosc 2021; 2:50-62. [DOI: 10.37126/aige.v2.i3.50] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/02/2021] [Revised: 06/01/2021] [Accepted: 06/18/2021] [Indexed: 02/06/2023] Open
Abstract
With the advent and prevalence of deep learning, artificial intelligence (AI) has been broadly studied and has made great progress in various fields of medicine, including gastroenterology. Helicobacter pylori (H. pylori), closely associated with various digestive and extradigestive diseases, has a high infection rate worldwide. Endoscopic surveillance can evaluate H. pylori infection status and predict the risk of gastric cancer, but there are no objective diagnostic criteria to eliminate differences between operators. Computer-aided diagnosis systems based on AI technology have demonstrated excellent performance in diagnosing H. pylori infection, superior to that of novice endoscopists and similar to that of skilled ones. Compared with visual diagnosis of H. pylori infection by endoscopists, AI offers numerous advantages: high accuracy, high efficiency, high quality control, high objectivity, and highly effective teaching. This review summarizes previous and recent studies on AI-assisted diagnosis of H. pylori infection, points out their limitations, and outlines prospects for future research.
Affiliation(s)
- Yi-Fan Lu, Bin Lyu: Department of Gastroenterology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou 310006, Zhejiang Province, China
|
16
|
Nam JH, Oh DJ, Lee S, Song HJ, Lim YJ. Development and Verification of a Deep Learning Algorithm to Evaluate Small-Bowel Preparation Quality. Diagnostics (Basel) 2021; 11:1127. [PMID: 34203093 PMCID: PMC8234509 DOI: 10.3390/diagnostics11061127] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Received: 04/14/2021] [Revised: 06/03/2021] [Accepted: 06/19/2021] [Indexed: 01/31/2023] Open
Abstract
Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify its performance. A 5-point scoring system based on clarity of mucosal visualization was used to develop the algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and the average cleansing scores (1.0 to 5.0) calculated by the algorithm were compared with clinical grades (A to C) assigned by clinicians. Test results on the 120,000 frames showed 93% accuracy. A separate CE case exhibited substantial agreement between the algorithm's scores and the clinicians' assessments (Cohen's kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively; p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective, automated cleansing score for evaluating SB preparation for CE, and the results of this study will serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
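As a rough illustration of how such per-frame scores could be aggregated and thresholded, the sketch below averages hypothetical per-frame cleansing scores (1 to 5) into a video-level score and applies the 2.95 cut-off reported above. The function names and example scores are invented for illustration and are not taken from the paper:

```python
def video_cleansing_score(frame_scores):
    """Average per-frame cleansing scores (1-5 scale) into a video-level score."""
    if not frame_scores:
        raise ValueError("no frames scored")
    return sum(frame_scores) / len(frame_scores)

def is_adequate(score, cutoff=2.95):
    """Apply the ROC-derived cut-off of 2.95 reported in the abstract."""
    return score >= cutoff

scores = [4, 3, 3, 2, 4, 3]  # hypothetical per-frame model outputs
s = video_cleansing_score(scores)
print(round(s, 2), is_adequate(s))  # → 3.17 True
```

In the study itself, the per-frame scores would come from the trained classifier rather than being supplied by hand.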
Affiliation(s)
- Ji Hyung Nam, Dong Jun Oh, Sumin Lee, Yun Jeong Lim: Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Korea
- Hyun Joo Song: Division of Gastroenterology, Department of Internal Medicine, Jeju National University School of Medicine, Jeju 63241, Korea
- Correspondence (Yun Jeong Lim): Tel. +82-31-961-7133
|
17
|
Yen SY, Huang HE, Lien GS, Liu CW, Chu CF, Huang WM, Suk FM. Automatic lumen detection and magnetic alignment control for magnetic-assisted capsule colonoscope system optimization. Sci Rep 2021; 11:6460. [PMID: 33742067 PMCID: PMC7979719 DOI: 10.1038/s41598-021-86101-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 09/21/2020] [Accepted: 03/10/2021] [Indexed: 02/07/2023] Open
Abstract
We developed a magnetic-assisted capsule colonoscope system that integrates computer vision-based object detection with an alignment control scheme. Two convolutional neural network models, A and B, for lumen identification were trained on an endoscopic dataset of 9080 images; for the lumen alignment experiment, models C and D were trained on a simulated dataset of 8414 images. The models were evaluated using recall (R), precision (P), mean average precision (mAP), and F1 score, and predictive performance was evaluated with the area under the P-R curve. Adjustments of pitch and yaw angles and alignment control time were analyzed in the alignment experiment. Model D had the best predictive performance: its R, P, mAP, and F1 score were 0.964, 0.961, 0.961, and 0.963, respectively, at an area of overlap/area of union (intersection over union) threshold of 0.3. In the lumen alignment experiment, the mean adjustments of yaw and pitch over 160 trials were 21.70° and 13.78°, respectively, and the mean alignment control time was 0.902 s. Finally, we compared cecal intubation times between semi-automated and manual navigation in 20 trials: the average times of manual and semi-automated navigation were 9 min 28.41 s and 7 min 23.61 s, respectively. The automatic lumen detection model, trained using a deep learning algorithm, demonstrated high performance on each validation index.
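The validation indexes named above can be made concrete with a small sketch: intersection over union (the "area of overlap/area of union" used to decide whether a detection counts as a true positive) and the F1 score as the harmonic mean of precision and recall. The example boxes are invented, and reusing the reported P/R values is purely illustrative:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# A detection counts as a true positive when IoU >= 0.3, the threshold
# at which model D's figures above were reported.
print(round(iou((0, 0, 100, 100), (50, 50, 150, 150)), 3))  # → 0.143
print(round(f1(0.961, 0.964), 3))  # → 0.962
```

Note that the F1 computed from the rounded P/R values (0.962) differs from the reported 0.963 only because the paper's underlying values were presumably unrounded.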
Affiliation(s)
- Sheng-Yang Yen, Hao-En Huang, Chih-Wen Liu, Chia-Feng Chu, Wei-Ming Huang: Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
- Gi-Shih Lien, Fat-Moon Suk: Division of Gastroenterology, Department of Internal Medicine, Taipei Municipal Wan Fang Hospital, Taipei Medical University, No. 111, Section 3, Xing Long Road, Taipei 116, Taiwan; and Department of Internal Medicine, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
|
18
|
Conley TE, Fiske J, Townsend T, Collins P, Bond A. COVID-19 and the challenges faced by gastroenterology trainees: time for capsule endoscopy training? Frontline Gastroenterol 2021; 12:299-302. [PMID: 34249315 PMCID: PMC8231429 DOI: 10.1136/flgastro-2020-101704] [Received: 10/14/2020] [Revised: 12/19/2020] [Accepted: 12/29/2020] [Indexed: 02/04/2023] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] Open
Affiliation(s)
- Thomas Edward Conley, Joseph Fiske, Tristan Townsend, Paul Collins, Ashley Bond: Gastroenterology, Royal Liverpool and Broadgreen University Hospitals NHS Trust, Liverpool, UK
|