1. Daneshpajooh V, Ahmad D, Toth J, Bascom R, Higgins WE. Automatic lesion detection for narrow-band imaging bronchoscopy. J Med Imaging (Bellingham) 2024;11:036002. PMID: 38827776; PMCID: PMC11138083; DOI: 10.1117/1.jmi.11.3.036002.
Abstract
Purpose: Early detection of cancer is crucial for lung cancer patients, as it determines disease prognosis. Lung cancer typically starts as bronchial lesions along the airway walls. Recent research has indicated that narrow-band imaging (NBI) bronchoscopy enables more effective bronchial lesion detection than other bronchoscopic modalities. Unfortunately, NBI video can be hard to interpret because physicians currently are forced to perform a time-consuming subjective visual search to detect bronchial lesions in a long airway-exam video. As a result, NBI bronchoscopy is not regularly used in practice. To alleviate this problem, we propose an automatic two-stage real-time method for bronchial lesion detection in NBI video and perform a first-of-its-kind pilot study of the method using NBI airway exam video collected at our institution. Approach: Given a patient's NBI video, the first method stage entails a deep-learning-based object detection network coupled with a multiframe abnormality measure to locate candidate lesions on each video frame. The second method stage then draws upon a Siamese network and a Kalman filter to track candidate lesions over multiple frames to arrive at final lesion decisions. Results: Tests drawing on 23 patient NBI airway exam videos indicate that the method can process an incoming video stream at a real-time frame rate, thereby making the method viable for real-time inspection during a live bronchoscopic airway exam. Furthermore, our studies showed a 93% sensitivity and 86% specificity for lesion detection; this compares favorably to a sensitivity and specificity of 80% and 84% achieved over a series of recent pooled clinical studies using the current time-consuming subjective clinical approach. Conclusion: The method shows potential for robust lesion detection in NBI video at a real-time frame rate. Therefore, it could help enable more common use of NBI bronchoscopy for bronchial lesion detection.
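The tracking step of the second stage can be illustrated with a minimal constant-velocity Kalman filter applied per coordinate of a candidate lesion's bounding-box center. This is a pure-Python sketch, not the authors' implementation; the process-noise `q` and measurement-noise `r` parameters are assumed values for illustration.

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one coordinate of a
    candidate-lesion bounding-box center (illustrative parameters)."""

    def __init__(self, x0, dt=1.0, q=1e-2, r=1.0):
        self.x = [x0, 0.0]                  # state: [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance
        self.dt, self.q, self.r = dt, q, r

    def predict(self):
        dt, q = self.dt, self.q
        pos, vel = self.x
        self.x = [pos + dt * vel, vel]      # x' = F x with F = [[1, dt], [0, 1]]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P' = F P F^T + Q, with Q = q * I as a simple process-noise model
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + q, p01 + dt * p11],
            [p10 + dt * p11, p11 + q],
        ]
        return self.x[0]

    def update(self, z):
        # The detector measures position only: H = [1, 0]
        p00 = self.P[0][0]
        s = p00 + self.r                    # innovation covariance
        k0 = p00 / s                        # Kalman gain (position)
        k1 = self.P[1][0] / s               # Kalman gain (velocity)
        resid = z - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        p00_, p01_ = self.P[0]
        p10_, p11_ = self.P[1]
        # P = (I - K H) P
        self.P = [
            [(1 - k0) * p00_, (1 - k0) * p01_],
            [p10_ - k1 * p00_, p11_ - k1 * p01_],
        ]
        return self.x[0]
```

Feeding the filter one detection per frame smooths jitter and supplies a predicted location for frames where the detector misses the lesion.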
Affiliation(s)
- Vahid Daneshpajooh
- The Pennsylvania State University, School of Electrical Engineering and Computer Science, University Park, Pennsylvania, United States
- Danish Ahmad
- The Pennsylvania State University, College of Medicine, Hershey, Pennsylvania, United States
- Jennifer Toth
- The Pennsylvania State University, College of Medicine, Hershey, Pennsylvania, United States
- Rebecca Bascom
- The Pennsylvania State University, College of Medicine, Hershey, Pennsylvania, United States
- William E. Higgins
- The Pennsylvania State University, School of Electrical Engineering and Computer Science, University Park, Pennsylvania, United States
2. Horovistiz A, Oliveira M, Araújo H. Computer vision-based solutions to overcome the limitations of wireless capsule endoscopy. J Med Eng Technol 2023;47:242-261. PMID: 38231042; DOI: 10.1080/03091902.2024.2302025.
Abstract
Endoscopic investigation plays a critical role in the diagnosis of gastrointestinal (GI) diseases. Since 2001, wireless capsule endoscopy (WCE) has been available for small bowel exploration and is in continuous development. Over the last decade, WCE has achieved impressive improvements in areas such as miniaturisation, image quality and battery life. As a result, WCE is currently a very useful alternative to wired enteroscopy in the investigation of various small bowel abnormalities and has the potential to become the leading screening technique for the entire gastrointestinal tract. However, commercial solutions still have several limitations, namely incomplete examination and limited diagnostic capacity. These deficiencies are related to technical issues such as image quality, motion estimation and power consumption management. Computational methods based on image processing and analysis can help to overcome these challenges and reduce both the time required by reviewers and human interpretation errors. Research groups have proposed a series of methods, including algorithms for locating the capsule or lesion, assessing intestinal motility and improving image quality. In this work, we provide a critical review of computational vision-based methods for WCE image analysis aimed at overcoming the technological challenges of capsules. This article also reviews several representative public datasets used to evaluate the performance of WCE techniques and methods. Finally, some promising computational methods based on the analysis of multiple-camera endoscopic images are presented.
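One simple way to reduce reviewer time, in the spirit of the computational methods the review surveys, is histogram-based redundant-frame elimination: a frame is kept only if it differs sufficiently from the last kept frame. The sketch below is illustrative, with an assumed similarity threshold, not a method from the reviewed literature.

```python
def gray_histogram(frame, bins=16):
    """Normalized intensity histogram of a grayscale frame,
    given as a flat iterable of pixel values in 0..255."""
    hist = [0] * bins
    n = 0
    for px in frame:
        hist[px * bins // 256] += 1
        n += 1
    return [h / n for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1.0 means identical histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def drop_redundant(frames, threshold=0.9):
    """Return indices of frames to keep: a frame survives only if its
    histogram similarity to the last kept frame falls below threshold."""
    kept = []
    last_hist = None
    for i, frame in enumerate(frames):
        h = gray_histogram(frame)
        if last_hist is None or histogram_intersection(h, last_hist) < threshold:
            kept.append(i)
            last_hist = h
    return kept
```

Near-duplicate consecutive frames, which dominate capsule video, collapse to a single representative under this scheme.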
Affiliation(s)
- Ana Horovistiz
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Marina Oliveira
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Department of Electrical and Computer Engineering (DEEC), Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
- Helder Araújo
- Institute of Systems and Robotics, University of Coimbra, Coimbra, Portugal
- Department of Electrical and Computer Engineering (DEEC), Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
3. A Novel Framework of Manifold Learning Cascade-Clustering for the Informative Frame Selection. Diagnostics (Basel) 2023;13:1151. PMID: 36980459; PMCID: PMC10047422; DOI: 10.3390/diagnostics13061151.
Abstract
Narrow-band imaging is an established non-invasive tool for the early detection of laryngeal cancer in surveillance examinations. Most frames produced during an examination are uninformative, being blurred, dominated by specular reflections, or underexposed. Removing the uninformative frames is vital to improve detection accuracy and to speed up computer-aided diagnosis, yet manually inspecting a video for informative frames takes the physician a great deal of time. This issue is commonly addressed by a classifier trained on task-specific categories of uninformative frames. However, the definition of the uninformative categories is ambiguous, and tedious labeling still cannot be avoided. Here, we show that a novel unsupervised scheme is comparable to the current benchmarks on the NBI-InfFrames dataset. We extract feature embeddings using a vanilla neural network (VGG16) and apply UMAP, a dimensionality reduction method that separates the feature embeddings in a lower-dimensional space. Combined with the proposed automatic cluster-labeling algorithm and a cost function for Bayesian optimization, the method coupled with UMAP achieves state-of-the-art performance, outperforming the baseline by 12% absolute; its overall median recall of 96% is currently the highest. Our results demonstrate the effectiveness of the proposed scheme and its robustness in detecting informative frames, and suggest that the patterns embedded in the data can help develop flexible algorithms that do not require manual labeling.
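The cluster-then-label idea can be sketched in miniature: cluster low-dimensional embeddings (UMAP outputs in the paper, plain 2-D points here) with k-means, then assign each cluster a name from a handful of labeled exemplars. Both functions below are simplified stand-ins for the paper's pipeline; the farthest-point initialization and majority-vote labeling are assumptions for the sake of a deterministic example.

```python
def _d2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points, a stand-in for clustering embeddings.
    Farthest-point initialization keeps the sketch deterministic."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(_d2(p, c) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: _d2(p, centers[j]))].append(p)
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters

def label_clusters(clusters, exemplars):
    """Auto-label each cluster by majority vote over a few labeled exemplar
    points, so most frames never need manual annotation."""
    labels = []
    for cl in clusters:
        votes = {}
        for point, lab in exemplars:
            if point in cl:
                votes[lab] = votes.get(lab, 0) + 1
        labels.append(max(votes, key=votes.get) if votes else "unknown")
    return labels
```

With well-separated embeddings, a couple of exemplars per category suffice to name every cluster.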
4. Shen T, Li X. Automatic polyp image segmentation and cancer prediction based on deep learning. Front Oncol 2023;12:1087438. PMID: 36713495; PMCID: PMC9878560; DOI: 10.3389/fonc.2022.1087438.
Abstract
The similar shape and texture of colonic polyps and normal mucosal tissue lead to low accuracy in medical image segmentation algorithms. To address this, we propose a polyp image segmentation algorithm based on deep learning that combines a HarDNet module, an attention module, and a multi-scale coding module within a U-Net framework comprising encoding and decoding stages. In the encoder stage, HarDNet68 serves as the backbone network, extracting features with four dilated (atrous) convolution pooling pyramids while improving inference speed and computational efficiency. Attention modules added to the encoder and decoder let the model learn both global and local feature information of the polyp image, processing information in both the spatial and channel dimensions; this mitigates the information loss in the encoding stage and improves the performance of the segmentation network. Comparative analysis against other algorithms shows improvements in both segmentation accuracy and operating speed. The method can effectively assist physicians in removing abnormal colorectal tissue, reducing the probability of polyps progressing to cancer and improving patient survival and quality of life. It also generalizes well, providing technical support for colon cancer prevention.
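Segmentation quality in work like this is typically reported with the Dice coefficient, which measures mask overlap. A minimal pure-Python version for flat binary masks:

```python
def dice_score(pred, target):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2 * |intersection| / (|pred| + |target|)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```

A score of 1.0 means the predicted polyp mask matches the ground truth exactly; 0.0 means no overlap.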
Affiliation(s)
- Tongping Shen
- School of Information Engineering, Anhui University of Chinese Medicine, Hefei, China
- Graduate School, Angeles University Foundation, Angeles, Philippines
- Xueguang Li
- School of Computer Science and Technology, Henan Institute of Technology, Xinxiang, China
5. Zhu M, Chen Z, Yuan Y. DSI-Net: Deep Synergistic Interaction Network for Joint Classification and Segmentation With Endoscope Images. IEEE Trans Med Imaging 2021;40:3315-3325. PMID: 34033538; DOI: 10.1109/tmi.2021.3083586.
Abstract
Automatic classification and segmentation of wireless capsule endoscope (WCE) images are two clinically significant and related tasks in a computer-aided diagnosis system for gastrointestinal diseases. Most existing approaches, however, consider these two tasks individually and ignore their complementary information, leading to limited performance. To overcome this bottleneck, we propose a deep synergistic interaction network (DSI-Net) for joint classification and segmentation of WCE images, which mainly consists of a classification branch (C-Branch), a coarse segmentation branch (CS-Branch), and a fine segmentation branch (FS-Branch). To facilitate the classification task with segmentation knowledge, a lesion location mining (LLM) module is devised in C-Branch to accurately highlight lesion regions by mining neglected lesion areas and erasing misclassified background areas. To assist the segmentation task with the classification prior, we propose a category-guided feature generation (CFG) module in FS-Branch that improves pixel representation by leveraging the category prototypes of C-Branch to obtain category-aware features. In this way, these modules enable deep synergistic interaction between the two tasks. In addition, we introduce a task interaction loss to enhance the mutual supervision between the classification and segmentation tasks and to guarantee the consistency of their predictions. Relying on the proposed deep synergistic interaction mechanism, DSI-Net achieves superior classification and segmentation performance on a public dataset in comparison with state-of-the-art methods. The source code is available at https://github.com/CityU-AIM-Group/DSI-Net.
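The idea behind a task interaction loss is that the two heads should agree: if the classifier says a lesion is present, the segmentation map should contain strong lesion responses somewhere, and vice versa. The toy penalty below illustrates that consistency idea only; it is an assumed stand-in, not the published DSI-Net formulation.

```python
def task_interaction_loss(cls_prob, seg_probs):
    """Toy consistency penalty between an image-level lesion probability and a
    pixel-wise segmentation map: both heads should agree on lesion presence.
    Illustrative stand-in only, not the DSI-Net loss."""
    seg_evidence = max(seg_probs)   # strongest pixel-level lesion response
    return abs(cls_prob - seg_evidence)
```

Consistent predictions (high class probability with at least one strong segmentation pixel) incur a small penalty; contradictory predictions incur a large one.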
6. Yang Y, Li YX, Yao RQ, Du XH, Ren C. Artificial intelligence in small intestinal diseases: Application and prospects. World J Gastroenterol 2021;27:3734-3747. PMID: 34321840; PMCID: PMC8291013; DOI: 10.3748/wjg.v27.i25.3734.
Abstract
The small intestine lies in the middle of the gastrointestinal tract, making small intestinal diseases more difficult to diagnose than other gastrointestinal diseases. With its efficient learning capacity and computational power, however, artificial intelligence is now widely applied to small intestinal diseases, playing an important role in auxiliary diagnosis and prognosis prediction based on capsule endoscopy and other examination methods; this improves the accuracy of diagnosis and prediction and reduces the workload of doctors. In this review, a comprehensive search was performed for articles published up to October 2020 in PubMed and other databases. The application status of artificial intelligence in small intestinal diseases is systematically introduced, and the challenges and prospects in this field are analyzed.
Affiliation(s)
- Yu Yang
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Yu-Xuan Li
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Ren-Qi Yao
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People's Liberation Army General Hospital, Beijing 100048, China
- Department of Burn Surgery, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Xiao-Hui Du
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Chao Ren
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People's Liberation Army General Hospital, Beijing 100048, China
7. Gutierrez Becker B, Arcadu F, Thalhammer A, Gamez Serna C, Feehan O, Drawnel F, Oh YS, Prunotto M. Training and deploying a deep learning model for endoscopic severity grading in ulcerative colitis using multicenter clinical trial data. Ther Adv Gastrointest Endosc 2021;14:2631774521990623. PMID: 33718871; PMCID: PMC7917417; DOI: 10.1177/2631774521990623.
Abstract
Introduction: The Mayo Clinic Endoscopic Subscore is a commonly used grading system for assessing the severity of ulcerative colitis. Correctly grading colonoscopies with it is a challenging task, with suboptimal rates of interrater and intrarater variability observed even among experienced, well-trained experts. In recent years, several machine learning algorithms have been proposed in an effort to improve the standardization and reproducibility of Mayo Clinic Endoscopic Subscore grading. Methods: Here we propose an end-to-end, fully automated system based on deep learning that predicts a binary version of the Mayo Clinic Endoscopic Subscore directly from raw colonoscopy videos. Unlike previous studies, the proposed method mimics the assessment performed in practice by a gastroenterologist: traversing the whole colonoscopy video, identifying visually informative regions, and computing an overall Mayo Clinic Endoscopic Subscore. The proposed deep learning system was trained and deployed on raw colonoscopies using Mayo Clinic Endoscopic Subscore ground truth provided only at the colon-section level, without manually selecting the frames driving the severity scoring of ulcerative colitis. Results and Conclusion: Our evaluation on 1672 endoscopic videos from a multisite data set drawn from the etrolizumab Phase II Eucalyptus and Phase III Hickory and Laurel clinical trials shows that the proposed methodology can grade endoscopic videos with a high degree of accuracy and robustness (area under the receiver operating characteristic curve = 0.84 for Mayo Clinic Endoscopic Subscore ⩾ 1, 0.85 for subscore ⩾ 2, and 0.85 for subscore ⩾ 3) and with reduced amounts of manual annotation.
Plain language summary: Artificial intelligence can be used to automatically assess full endoscopic videos and estimate the severity of ulcerative colitis. In this work, we present an artificial intelligence algorithm for the automatic grading of ulcerative colitis in full endoscopic videos. Our models were trained and evaluated on a large and diverse set of colonoscopy videos obtained from concluded clinical trials. We demonstrate not only that artificial intelligence can accurately grade full endoscopic videos, but also that using diverse data sets obtained from multiple sites is critical to training robust AI models that could potentially be deployed on real-world data.
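AUC figures like those reported above can be computed from per-video scores with the rank-based (Mann-Whitney) formulation: the probability that a randomly chosen positive video is scored higher than a randomly chosen negative one. A minimal sketch of that metric, not the authors' evaluation code:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    scores: model outputs; labels: 1 for positive (e.g. subscore >= 2),
    0 for negative. Ties between a positive and a negative count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking; 1.0 means every positive video outranks every negative one.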
Affiliation(s)
- Benjamin Gutierrez Becker
- Roche Pharma Research and Early Development Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Filippo Arcadu
- Roche Pharma Research and Early Development Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Andreas Thalhammer
- Roche Pharma Research and Early Development Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Citlalli Gamez Serna
- Roche Pharma Research and Early Development Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Owen Feehan
- Roche Pharma Research and Early Development Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Faye Drawnel
- Roche Personalized Healthcare, Genentech, Inc., South San Francisco, CA, USA
- Young S Oh
- Product Development, Genentech, Inc., 1 DNA Way, South San Francisco, CA, USA
- Marco Prunotto
- School of Pharmaceutical Sciences, Institute of Pharmaceutical Sciences of Western Switzerland, University of Geneva, Rue Michel-Servet 1, 1211 Geneva 4, Switzerland
- Immunology, Infectious Disease & Ophthalmology, Roche, Basel, Switzerland
8. Jani KK, Srivastava R. A Survey on Medical Image Analysis in Capsule Endoscopy. Curr Med Imaging 2020;15:622-636. PMID: 32008510; DOI: 10.2174/1573405614666181102152434.
Abstract
BACKGROUND AND OBJECTIVE: Capsule endoscopy (CE) is a non-invasive, patient-friendly alternative to the conventional endoscopy procedure. However, CE produces a 6- to 8-hour-long video, posing a tedious abnormality-detection challenge to the gastroenterologist. The major challenges for an expert are the lengthy videos, the need for constant concentration, and the subjectivity of the abnormalities. Addressing these challenges with high diagnostic accuracy requires the design and development of automated abnormality-detection systems, built on machine learning and computer vision techniques. METHODS: This study presents a review of quality research papers published in the IEEE, Scopus, and ScienceDirect databases, with search criteria of capsule endoscopy, engineering, and journal papers. The initial search retrieved 144 publications; after evaluating all articles, 62 publications pertaining to image analysis were selected. RESULTS: The paper presents a rigorous review of all aspects of medical image analysis in capsule endoscopy, namely video summarization and redundant-image elimination, image enhancement and interpretation, segmentation and region identification, computer-aided abnormality detection, and image and video compression. The study provides a comparative analysis of the approaches, experimental setups, performance, strengths, and limitations for each of these aspects. CONCLUSIONS: The analyzed image analysis techniques for capsule endoscopy have not yet overcome all current challenges, mainly due to the lack of datasets and the complex nature of the gastrointestinal tract.
Affiliation(s)
- Kuntesh Ketan Jani
- Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
- Rajeev Srivastava
- Computer Science and Engineering Department, Indian Institute of Technology (Banaras Hindu University) Varanasi, Varanasi, Uttar Pradesh, India
9. Transfer learning for informative-frame selection in laryngoscopic videos through learned features. Med Biol Eng Comput 2020;58:1225-1238. DOI: 10.1007/s11517-020-02127-7.
10. Ashour AS, Dey N, Mohamed WS, Tromp JG, Sherratt RS, Shi F, Moraru L. Colored Video Analysis in Wireless Capsule Endoscopy: A Survey of State-of-the-Art. Curr Med Imaging 2020;16:1074-1084. PMID: 32107996; DOI: 10.2174/1573405616666200124140915.
Abstract
Wireless capsule endoscopy (WCE) is a highly promising technology for diagnosing gastrointestinal (GI) tract abnormalities. However, low image resolution and low frame rates are challenging issues in WCE, and the relevant frames containing the features of interest for an accurate diagnosis constitute only about 1% of the complete video. For these reasons, analyzing WCE videos is still a time-consuming and laborious examination for gastroenterologists, which reduces the usability of WCE systems and creates an urgent need to speed up and automate WCE video processing for GI tract examinations. The present work therefore introduces WCE technology, including the structure of WCE systems, with a focus on the medical endoscopy video-capture process using image sensors. It also discusses the significant characteristics of the different GI tract regions for effective feature extraction, and reports video approaches for bleeding and lesion detection in WCE video, along with computer-aided diagnosis systems in different applications that support the gastroenterologist in WCE video analysis. Reduction of WCE video review time through image enhancement is also discussed, together with the challenges and future perspectives, including the trend toward deep learning models for feature learning, polyp recognition, and classification as a new opportunity for researchers developing future WCE video analysis techniques.
Affiliation(s)
- Amira S Ashour
- Department of Electronics and Electrical Communications Engineering, Faculty of Engineering, Tanta University, Tanta, 31527, Egypt
- Nilanjan Dey
- Department of Information Technology, Techno India College of Technology, West Bengal, 740000, India
- Waleed S Mohamed
- Department of Internal Medicine, Faculty of Medicine, Tanta University, Tanta, 31527, Egypt
- Jolanda G Tromp
- Computer Science Department, Center for Visualization and Simulation, Duy Tan University, Da Nang, Vietnam
- R Simon Sherratt
- Department of Biomedical Engineering, University of Reading, Reading, Berkshire, United Kingdom
- Fuqian Shi
- Rutgers Cancer Institute of New Jersey, Rutgers University, New Brunswick, New Jersey, 08903, United States
- Luminița Moraru
- Faculty of Sciences and Environment, Dunarea de Jos University of Galati, Galati, Romania
11. Rasti P, Wolf C, Dorez H, Sablong R, Moussata D, Samiei S, Rousseau D. Machine Learning-Based Classification of the Health State of Mice Colon in Cancer Study from Confocal Laser Endomicroscopy. Sci Rep 2019;9:20010. PMID: 31882817; PMCID: PMC6934609; DOI: 10.1038/s41598-019-56583-9.
Abstract
In this article, we address the problem of classifying the health state of the colon wall in mice, possibly injured by cancer, using machine learning approaches. This problem is essential for translational research on cancer and is a priori challenging, since the amount of data is usually limited in preclinical studies for practical and ethical reasons. Three tissue states are considered: cancerous, healthy, and inflammatory. Fully automated machine learning methods are proposed, including deep learning, transfer learning, and shallow learning with SVMs. These methods address different training strategies corresponding to clinical questions, such as automatic prediction of the clinical state on unseen data using a pre-trained model or, in an alternative setting, real-time estimation of the clinical state of individual tissue samples during the examination. Experimental results show a best correct recognition rate of 99.93% for the second strategy, and a rate of 98.49% for the more difficult first case.
Affiliation(s)
- Pejman Rasti
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), UMR INRA IRHS, Université d'Angers, Angers, 49000, France
- Christian Wolf
- INSA-Lyon, INRIA, LIRIS, CITI, CNRS, Villeurbanne, France
- Hugo Dorez
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, 69621, France
- Raphael Sablong
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, 69621, France
- Driffa Moussata
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, 69621, France
- Salma Samiei
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), UMR INRA IRHS, Université d'Angers, Angers, 49000, France
- David Rousseau
- Laboratoire Angevin de Recherche en Ingénierie des Systèmes (LARIS), UMR INRA IRHS, Université d'Angers, Angers, 49000, France
12.
Abstract
Bronchoscopy enables many minimally invasive chest procedures for diseases such as lung cancer and asthma. Guided by the bronchoscope's video stream, a physician can navigate the complex three-dimensional (3-D) airway tree to collect tissue samples or administer a disease treatment. Unfortunately, physicians currently discard procedural video because of the overwhelming amount of data generated. Hence, they must rely on memory and anecdotal snapshots to document a procedure. We propose a robust automatic method for summarizing an endobronchial video stream. Inspired by the multimedia concept of the video summary and by research in other endoscopy domains, our method consists of three main steps: 1) shot segmentation, 2) motion analysis, and 3) keyframe selection. Overall, the method derives a true hierarchical decomposition, consisting of a shot set and constituent keyframe set, for a given procedural video. No other method to our knowledge gives such a structured summary for the raw, unscripted, unedited videos arising in endoscopy. Results show that our method covers the observed endobronchial regions more efficiently than other keyframe-selection approaches and is robust to parameter variations. Over a wide range of video sequences, our method required on average only 6.5% of the available video frames to achieve a video coverage of 92.7%. We also demonstrate how the derived video summary facilitates direct fusion with a patient's 3-D chest computed-tomography scan in a system under development, thereby enabling efficient video browsing and retrieval through the complex airway tree.
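The coverage-versus-frame-budget trade-off can be sketched as greedy set cover: repeatedly pick the frame that adds the most not-yet-covered airway regions until the target coverage is reached. This is an illustrative sketch under the assumption that each frame's visible regions are known as a set of region ids; it is not the authors' shot-based method.

```python
def select_keyframes(frame_regions, target_coverage=0.927):
    """Greedy keyframe selection: frame_regions[i] is the set of region ids
    visible in frame i. Keep picking the frame that adds the most uncovered
    regions until target_coverage of all observed regions is reached."""
    all_regions = set().union(*frame_regions)
    covered, keyframes = set(), []
    while len(covered) / len(all_regions) < target_coverage:
        candidates = [i for i in range(len(frame_regions)) if i not in keyframes]
        if not candidates:
            break
        best = max(candidates, key=lambda i: len(frame_regions[i] - covered))
        if not frame_regions[best] - covered:
            break  # no remaining frame adds new coverage
        keyframes.append(best)
        covered |= frame_regions[best]
    return keyframes
```

Because overlapping frames add little new coverage, a small fraction of frames typically suffices, mirroring the 6.5%-of-frames figure reported above in spirit.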
13. Akbari M, Mohrekesh M, Rafiei S, Soroushmehr SMR, Karimi N, Samavi S, Najarian K. Classification of Informative Frames in Colonoscopy Videos Using Convolutional Neural Networks with Binarized Weights. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:65-68. PMID: 30440342; DOI: 10.1109/embc.2018.8512226.
Abstract
Colorectal cancer is one of the most common cancers in the United States. Polyps are one of the major precursors of colonic cancer, and early detection of polyps increases the chance of successful treatment. In this paper, we propose a novel classification of informative frames based on a convolutional neural network (CNN) with binarized weights. The proposed CNN is trained on colonoscopy frames along with their frame labels. Binarized weights and kernels reduce the size of the CNN and make it suitable for implementation in medical hardware. We evaluate the proposed method on the ASU-Mayo Clinic colonoscopy video database, which contains colonoscopy videos from different patients. The method reaches a Dice score of 71.20% and an accuracy of more than 90% on this dataset.
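Weight binarization of the kind described here is commonly done by keeping only the sign of each weight plus one real-valued scaling factor per filter, as in XNOR-Net-style schemes. The sketch below shows that idea on a flat weight vector; the paper's exact binarization scheme may differ.

```python
def binarize_weights(weights):
    """Binarize a weight vector to {-1.0, +1.0} with a scaling factor alpha
    equal to the mean absolute weight (XNOR-Net-style; illustrative)."""
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1.0 if w >= 0 else -1.0 for w in weights]
    return alpha, signs

def approx_weights(alpha, signs):
    """Reconstruct the low-precision approximation alpha * sign(w)."""
    return [alpha * s for s in signs]
```

Each weight then costs a single bit plus a shared scalar, which is what makes such networks attractive for embedded medical hardware.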
14. Moccia S, Vanone GO, De Momi E, Laborai A, Guastini L, Peretti G, Mattos LS. Learning-based classification of informative laryngoscopic frames. Comput Methods Programs Biomed 2018;158:21-30. PMID: 29544787; DOI: 10.1016/j.cmpb.2018.01.030.
Abstract
BACKGROUND AND OBJECTIVE: Early-stage diagnosis of laryngeal cancer is of primary importance for reducing patient morbidity. Narrow-band imaging (NBI) endoscopy is commonly used for screening purposes, reducing the risks linked to a biopsy but at the cost of some drawbacks, such as the large amount of data to review to make the diagnosis. This paper presents a strategy for automatic selection of informative endoscopic video frames, which can reduce the amount of data to process and potentially increase diagnostic performance. METHODS: A new method to classify NBI endoscopic frames based on intensity, keypoint, and spatial image content features is proposed. Support vector machines with the radial basis function kernel and the one-versus-one scheme classify frames as informative, blurred, containing saliva or specular reflections, or underexposed. RESULTS: When tested on a balanced set of 720 images from 18 different laryngoscopic videos, a classification recall of 91% was achieved for informative frames, significantly outperforming three state-of-the-art methods (Wilcoxon signed-rank test, significance level = 0.05). CONCLUSIONS: Given its high performance in identifying informative frames, the approach is a valuable tool for informative-frame selection, potentially applicable in different fields, such as computer-assisted diagnosis and endoscopic view expansion.
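The two classifier ingredients named above can be sketched directly: the RBF kernel that compares two frame feature vectors, and one-versus-one voting, where each pairwise binary classifier votes for a class and the class with the most votes wins. The `gamma` value and class names below are illustrative assumptions.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel between two feature vectors:
    exp(-gamma * ||x - y||^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def one_vs_one_vote(pairwise_decisions):
    """Combine one-vs-one binary classifiers: pairwise_decisions maps each
    (class_a, class_b) pair to the class that pair's classifier predicted;
    the class collecting the most votes wins."""
    votes = {}
    for winner in pairwise_decisions.values():
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)
```

With K frame classes, K*(K-1)/2 pairwise SVMs are trained, and each test frame is labeled by this majority vote.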
Affiliation(s)
- Sara Moccia
- Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Milan, Italy; Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy.
- Gabriele O Vanone
- Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Milan, Italy
- Elena De Momi
- Department of Electronics, Information, and Bioengineering, Politecnico di Milano, Milan, Italy
- Andrea Laborai
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Genoa, Genoa, Italy
- Luca Guastini
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Genoa, Genoa, Italy
- Giorgio Peretti
- Department of Otorhinolaryngology, Head and Neck Surgery, University of Genoa, Genoa, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
|
15
|
Yung DE, Rondonotti E, Koulaouzidis A. Review: capsule colonoscopy-a concise clinical overview of current status. ANNALS OF TRANSLATIONAL MEDICINE 2016; 4:398. [PMID: 27867950 DOI: 10.21037/atm.2016.10.71] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Colon capsule endoscopy (CCE) was first introduced in 2007. Currently, the main clinical indications for CCE are completion of incomplete colonoscopy, polyp detection, and investigation of inflammatory bowel disease (IBD). Although conventional colonoscopy is the gold standard in bowel cancer screening, incomplete colonoscopy remains a problem, as lesions are missed. CCE compares favourably to computed tomography colonography (CTC) in adenoma detection and has therefore been proposed as a method for completing colonoscopy. However, the data on CCE remain sparse, and current evidence does not show its superiority over CTC or conventional colonoscopy in bowel cancer screening. CCE also seems to show good correlation with conventional colonoscopy when used to evaluate IBD, but there are few published studies at present. Other significant limitations include the need for aggressive bowel preparation and the labour-intensiveness of CCE reading. Therefore, much further software and hardware development is required for CCE to fulfil its potential as a minimally invasive and reliable method of colonoscopy.
Affiliation(s)
- Diana E Yung
- Endoscopy Unit, the Royal Infirmary of Edinburgh, Edinburgh, UK
|
16
|
Gadermayr M, Uhl A, Vécsei A. Fully automated decision support systems for celiac disease diagnosis. Ing Rech Biomed 2016. [DOI: 10.1016/j.irbm.2015.09.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
17
|
Keuchel M, Kurniawan N, Baltes P, Bandorski D, Koulaouzidis A. Quantitative measurements in capsule endoscopy. Comput Biol Med 2015; 65:333-47. [PMID: 26299419 DOI: 10.1016/j.compbiomed.2015.07.016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2015] [Revised: 07/16/2015] [Accepted: 07/17/2015] [Indexed: 12/14/2022]
Abstract
This review summarizes several approaches to quantitative measurement in capsule endoscopy. Video capsule endoscopy (VCE) typically provides wireless imaging of the small bowel. Currently, a variety of quantitative measurements are implemented in commercially available hardware/software; the majority are proprietary and hence undisclosed algorithms. Measurement of the amount of luminal contamination allows scores to be calculated from whole VCE studies. Other scores express the severity of small bowel lesions in Crohn's disease or the degree of villous atrophy in celiac disease. Image processing with numerous algorithms for textural and color feature extraction is a further research focus for automated image analysis. These tools aim to select single images with relevant lesions such as blood, ulcers, polyps, and tumors, or to omit images showing only luminal contamination. Analysis of motility patterns, size measurement, and determination of capsule localization are additional topics. Non-visual wireless capsules transmitting data acquired with specific sensors from the gastrointestinal (GI) tract are available for clinical routine. This includes pH measurement in the esophagus for the diagnosis of acid gastro-esophageal reflux. A wireless motility capsule provides GI motility analysis on the basis of pH, pressure, and temperature measurement. Electromagnetic tracking of another motility capsule allows visualization of motility. Measurement of substances by GI capsules is of great interest but still at an early stage of development.
Affiliation(s)
- M Keuchel
- Clinic for Internal Medicine, Bethesda Krankenhaus Bergedorf, Glindersweg 80, 21029 Hamburg, Germany.
- N Kurniawan
- Clinic for Internal Medicine, Bethesda Krankenhaus Bergedorf, Glindersweg 80, 21029 Hamburg, Germany
- P Baltes
- Clinic for Internal Medicine, Bethesda Krankenhaus Bergedorf, Glindersweg 80, 21029 Hamburg, Germany
|
18
|
Ben Ismail MM, Bchir O. Endoscopy video summarisation using novel relational motion histogram descriptor and semi-supervised clustering. J EXP THEOR ARTIF IN 2015. [DOI: 10.1080/0952813x.2015.1020623] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
19
|
A GPU Accelerated Algorithm for Blood Detection in Wireless Capsule Endoscopy Images. LECTURE NOTES IN COMPUTATIONAL VISION AND BIOMECHANICS 2015. [DOI: 10.1007/978-3-319-13407-9_4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
20
|
Video summarization based tele-endoscopy: a service to efficiently manage visual data generated during wireless capsule endoscopy procedure. J Med Syst 2014; 38:109. [PMID: 25037715 DOI: 10.1007/s10916-014-0109-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2014] [Accepted: 07/07/2014] [Indexed: 01/17/2023]
Abstract
Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use. More importantly, WCE combined with mobile computing ensures rapid transmission of diagnostic data to hospitals and enables off-site senior gastroenterologists to offer timely decision-making support. However, during the WCE process, video data are produced in huge amounts, but only a limited portion is actually useful for diagnosis. Sharing and analysing this video data becomes a challenging task due to constraints such as limited memory, energy, and communication capability. In order to facilitate efficient WCE data collection and browsing, we present a video summarization-based tele-endoscopy service that estimates the semantically relevant video frames from the perspective of gastroenterologists. For this purpose, image moments, curvature, and multi-scale contrast are computed and fused to obtain the saliency map of each frame, which is then used to select keyframes. The proposed tele-endoscopy service selects keyframes based on their relevance to the disease diagnosis. This ensures that diagnostically relevant frames are sent to the gastroenterologist instead of all the data, saving transmission costs and bandwidth. The proposed framework also saves storage costs as well as the precious time doctors spend browsing a patient's information. The qualitative and quantitative results are encouraging and show that the proposed service provides video keyframes to gastroenterologists without discarding important information.
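The fuse-then-select step described above can be sketched as follows; equal-weight averaging of the three cues and a fixed threshold are assumptions for illustration, since the paper's exact fusion weights are not reproduced here:

```python
def select_keyframes(moment_s, curvature_s, contrast_s, threshold):
    # Fuse three per-frame saliency cues (image moments, curvature,
    # multi-scale contrast) by simple averaging, then keep the indices of
    # frames whose fused saliency exceeds the threshold as keyframes.
    fused = [(a + b + c) / 3
             for a, b, c in zip(moment_s, curvature_s, contrast_s)]
    return [i for i, s in enumerate(fused) if s > threshold]
```

Only the selected indices would then be transmitted to the gastroenterologist, which is where the bandwidth saving comes from.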
|
21
|
Inter-operative trajectory registration for endoluminal video synchronization: application to biopsy site re-localization. ACTA ACUST UNITED AC 2014. [PMID: 24505688 DOI: 10.1007/978-3-642-40811-3_47] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
Abstract
The screening of oesophageal adenocarcinoma involves obtaining biopsies at different regions along the oesophagus. The inter-operative localization and tracking of these biopsy sites poses a significant challenge for providing targeted treatments. This paper presents a novel framework for providing guided navigation to the gastro-intestinal specialist for accurate re-positioning of the endoscope at previously targeted sites. First, we explain our approach to applying electromagnetic tracking to achieve this objective. Then, we show on three in-vivo porcine interventions that our system can provide accurate guidance information, which was qualitatively evaluated by five experts.
|
22
|
Gadermayr M, Uhl A, Vécsei A. Quality Based Information Fusion in Fully Automatized Celiac Disease Diagnosis. LECTURE NOTES IN COMPUTER SCIENCE 2014. [DOI: 10.1007/978-3-319-11752-2_55] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|
23
|
Figueiredo IN, Kumar S, Leal C, Figueiredo PN. Computer-assisted bleeding detection in wireless capsule endoscopy images. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING-IMAGING AND VISUALIZATION 2013. [DOI: 10.1080/21681163.2013.796164] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
24
|
Seguí S, Drozdzal M, Vilariño F, Malagelada C, Azpiroz F, Radeva P, Vitrià J. Categorization and segmentation of intestinal content frames for wireless capsule endoscopy. IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE : A PUBLICATION OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY 2012; 16:1341-1352. [PMID: 24218705 DOI: 10.1109/titb.2012.2221472] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Wireless capsule endoscopy (WCE) is a device that allows direct visualization of the gastrointestinal tract with minimal discomfort for the patient, but at the price of a large amount of screening time. In order to reduce this time, several works have proposed automatically removing all frames showing intestinal content. These methods label frames as {intestinal content, clear} without discriminating between types of content (with different physiological meanings) or the portion of the image covered. In addition, since the presence of intestinal content has been identified as an indicator of intestinal motility, its accurate quantification has potential clinical relevance. In this paper, we present a method for the robust detection and segmentation of intestinal content in WCE images, together with its further discrimination between turbid liquid and bubbles. Our proposal is a twofold system. First, frames presenting intestinal content are detected by a support vector machine classifier using color and textural information. Second, intestinal content frames are segmented into {turbid, bubbles, clear} regions. We show a detailed validation using a large dataset. Our system outperforms previous methods and, for the first time, discriminates turbid liquid from bubbles.
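As a rough illustration of the kind of colour and textural information fed to such a per-frame classifier (a toy descriptor sketched here, not the authors' actual feature set):

```python
def frame_features(pixels):
    # pixels: list of (r, g, b) tuples for one frame. Returns a small
    # colour/texture descriptor: the mean of each colour channel plus the
    # variance of the grey-level image, as a stand-in for the colour and
    # textural features an SVM would consume.
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = [(r + g + b) / 3 for r, g, b in pixels]
    mu = sum(grey) / n
    var = sum((g - mu) ** 2 for g in grey) / n
    return means + [var]
```

In practice such a vector would be computed per frame (or per region) and passed to the trained SVM to decide between intestinal-content and clear frames.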
|
25
|
Ciaccio EJ, Tennyson CA, Bhagat G, Lewis SK, Green PH. Quantitative estimates of motility from videocapsule endoscopy are useful to discern celiac patients from controls. Dig Dis Sci 2012; 57:2936-43. [PMID: 22644741 DOI: 10.1007/s10620-012-2225-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
BACKGROUND Prior work has shown that videocapsule endoscopy image features are a useful tool for quantitatively distinguishing the intestinal mucosal surface of untreated celiac patients from that of controls. The use of dynamic estimates of wall motility may further help to improve classification. METHODS Videocapsule endoscopy clips (200 frames each, 2 frames/s, 576 × 576 pixels/frame) were acquired at five small intestinal locations in 11 untreated celiac patients (celiacs) and ten controls. Color images were converted to grayscale and analyzed frame-by-frame. Variations in the position and width of the center of the small intestinal lumen were quantitatively estimated, using the darkest grayscale pixels as an estimate of the lumen center. Over 200 frames, the standard deviation of the lumen center x-y position and the mean and standard deviation of the lumen center width were used as dynamic estimates of wall motility. These parameters were plotted in three-dimensional space, and the best discriminant function was used to classify celiacs versus controls at each of the following five locations: (1) duodenal bulb, (2) distal duodenum, (3) jejunum, (4) ileum, and (5) distal ileum. RESULTS The overall sensitivity for the classification of celiacs versus controls at all five locations was 98.2%, while the specificity was 96.0%. From location 1 to 5, there was a tendency for the frame-to-frame variability of the lumen center width to diminish, by 7.6% in celiacs (r² = 0.4) and 9.7% in controls (r² = 0.7). CONCLUSIONS In addition to examining the mucosal surface, videocapsule endoscopy can assess small bowel intestinal motility and aid in distinguishing celiac patients from controls.
Affiliation(s)
- Edward J Ciaccio
- Department of Medicine, Celiac Disease Center, Columbia University, Harkness Pavilion 804, 180 Fort Washington Avenue, New York, NY 10032, USA.
|
26
|
Ciaccio EJ, Tennyson CA, Bhagat G, Lewis SK, Green PH. Quantitative estimates of motility from videocapsule endoscopy are useful to discern celiac patients from controls. Dig Dis Sci 2012; 57:2936-43. [PMID: 22644741 DOI: 10.1007/s10620-012-2225-1] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/27/2011] [Accepted: 04/30/2012] [Indexed: 12/16/2022]
Abstract
BACKGROUND Prior work has shown that videocapsule endoscopy image features are a useful tool for quantitatively distinguishing the intestinal mucosal surface of untreated celiac patients from that of controls. The use of dynamic estimates of wall motility may further help to improve classification. METHODS Videocapsule endoscopy clips (200 frames each, 2 frames/s, 576 × 576 pixels/frame) were acquired at five small intestinal locations in 11 untreated celiac patients (celiacs) and ten controls. Color images were converted to grayscale and analyzed frame-by-frame. Variations in the position and width of the center of the small intestinal lumen were quantitatively estimated, using the darkest grayscale pixels as an estimate of the lumen center. Over 200 frames, the standard deviation of the lumen center x-y position and the mean and standard deviation of the lumen center width were used as dynamic estimates of wall motility. These parameters were plotted in three-dimensional space, and the best discriminant function was used to classify celiacs versus controls at each of the following five locations: (1) duodenal bulb, (2) distal duodenum, (3) jejunum, (4) ileum, and (5) distal ileum. RESULTS The overall sensitivity for the classification of celiacs versus controls at all five locations was 98.2%, while the specificity was 96.0%. From location 1 to 5, there was a tendency for the frame-to-frame variability of the lumen center width to diminish, by 7.6% in celiacs (r² = 0.4) and 9.7% in controls (r² = 0.7). CONCLUSIONS In addition to examining the mucosal surface, videocapsule endoscopy can assess small bowel intestinal motility and aid in distinguishing celiac patients from controls.
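The dynamic motility estimate described above, the standard deviation of the per-frame lumen-centre position over a clip, amounts to the following sketch (the lumen-centre estimation itself, from the darkest pixels, is assumed done upstream):

```python
import math

def position_variability(centers):
    # centers: per-frame (x, y) estimates of the lumen centre over a clip.
    # Returns the population standard deviation of x and of y, used as
    # dynamic wall-motility features in the discriminant analysis.
    n = len(centers)
    def std(values):
        mean = sum(values) / n
        return math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return std([c[0] for c in centers]), std([c[1] for c in centers])
```

A clip whose lumen centre oscillates strongly frame-to-frame yields large standard deviations, which is the signal the classifier exploits.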
Affiliation(s)
- Edward J Ciaccio
- Department of Medicine, Celiac Disease Center, Columbia University, Harkness Pavilion 804, 180 Fort Washington Avenue, New York, NY 10032, USA.
|
27
|
Atasoy S, Mateus D, Meining A, Yang GZ, Navab N. Endoscopic video manifolds for targeted optical biopsy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:637-53. [PMID: 22057050 DOI: 10.1109/tmi.2011.2174252] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Gastro-intestinal (GI) endoscopy is a widely used clinical procedure for screening and surveillance of digestive tract diseases ranging from Barrett's oesophagus to oesophageal cancer. The current surveillance protocol consists of periodic endoscopic examinations performed at 3-4 month intervals, including the expert's visual assessment and biopsies taken from suspicious tissue regions. The recent development of a new imaging technology, called probe-based confocal laser endomicroscopy (pCLE), has enabled the acquisition of in vivo optical biopsies without removing any tissue sample. Besides its several advantages, i.e., noninvasiveness and real-time in vivo feedback, the optical biopsy poses a new challenge for the endoscopic expert: because of its noninvasive nature, an optical biopsy does not leave any scar on the tissue, so recognition of previous optical biopsy sites in surveillance endoscopy becomes very challenging. In this work, we introduce a clustering and classification framework to facilitate retargeting previous optical biopsy sites in surveillance upper-GI endoscopies. A new representation of endoscopic videos based on manifold learning, "endoscopic video manifolds" (EVMs), is proposed. The low-dimensional EVM representation is adapted to facilitate two different clustering tasks, i.e., clustering of informative frames and of patient-specific endoscopic segments, simply by changing the similarity measure. Each step of the proposed framework is validated on three in vivo patient datasets containing 1834, 3445, and 1546 frames, corresponding to endoscopic videos of 73.36, 137.80, and 61.84 s, respectively. Improvements achieved by the introduced EVM representation are demonstrated by quantitative analysis in comparison to the original image representation and principal component analysis. Final experiments evaluating the complete framework demonstrate the feasibility of the proposed method as a promising step toward assisting the endoscopic expert in retargeting optical biopsy sites.
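Manifold learning of the kind used for EVMs starts from an inter-frame similarity measure; a typical first building block is a nearest-neighbour graph over frames, sketched below (the paper's specific similarity measures are not reproduced, only the graph-construction step common to Laplacian-eigenmaps-style methods):

```python
def knn_graph(similarity, k):
    # similarity: n x n matrix where similarity[i][j] is high when frames
    # i and j look alike. Link each frame to its k most similar neighbours,
    # the usual precursor to computing a graph-Laplacian embedding.
    n = len(similarity)
    return {
        i: sorted((j for j in range(n) if j != i),
                  key=lambda j: similarity[i][j], reverse=True)[:k]
        for i in range(n)
    }
```

Swapping the similarity measure changes the graph, and hence the resulting manifold, which mirrors how the EVM representation is adapted to different clustering tasks.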
Affiliation(s)
- Selen Atasoy
- Chair for Computer Aided Medical Procedures, Technische Universität München, München, Germany.
|
28
|
Prasath VBS, Figueiredo IN, Figueiredo PN, Palaniappan K. Mucosal region detection and 3D reconstruction in wireless capsule endoscopy videos using active contours. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2012; 2012:4014-4017. [PMID: 23366808 DOI: 10.1109/embc.2012.6346847] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Wireless capsule endoscopy (WCE) provides an inner view of the human digestive system. The inner, tubular structure of the intestinal tract consists of two major regions: the lumen, the intermediate region where the capsule moves, and the mucosa, the membrane lining the lumen cavities. We study the use of the Split Bregman version of the extended active contour model of Chan and Vese for segmenting mucosal regions in WCE videos. From this segmentation we obtain a 3D reconstruction of the mucosal tissue using a near-source perspective shape-from-shading (SfS) technique. Numerical results indicate that the active-contour-based segmentation provides better segmentations than previous methods and in turn gives better 3D reconstructions of mucosal regions.
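At the core of the Chan-Vese model referenced above is an alternation between evolving the contour and updating the mean intensity inside and outside it; the mean-update half-step alone can be sketched as follows (a flat pixel list is assumed for brevity, not the authors' implementation):

```python
def chan_vese_means(intensities, inside_mask):
    # One half of a Chan-Vese iteration: given the current partition
    # (inside_mask[i] is True for pixels inside the contour), compute the
    # region means c1 (inside) and c2 (outside) that minimise the fitting
    # energy sum of |I - c1|^2 inside plus |I - c2|^2 outside.
    inside = [v for v, m in zip(intensities, inside_mask) if m]
    outside = [v for v, m in zip(intensities, inside_mask) if not m]
    return sum(inside) / len(inside), sum(outside) / len(outside)
```

The Split Bregman variant speeds up the other half-step, the contour update, by solving the underlying convex relaxation with split variables rather than evolving a level set directly.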
Affiliation(s)
- V B Surya Prasath
- Department of Computer Science, University of Missouri-Columbia, Columbia, MO 65211, USA.
|
29
|
Drozdzal M, Seguí S, Malagelada C, Azpiroz F, Vitrià J, Radeva P. Interactive Labeling of WCE Images. PATTERN RECOGNITION AND IMAGE ANALYSIS 2011. [DOI: 10.1007/978-3-642-21257-4_18] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
|
30
|
Abstract
Postprocedural analysis of gastrointestinal (GI) endoscopic videos is a difficult task because the videos often suffer from a large number of poor-quality frames due to motion or out-of-focus blur, specular highlights, and artefacts caused by turbid fluid inside the GI tract. Clinically, each frame of the video is examined individually by the endoscopic expert due to the lack of a suitable visualisation technique. In this work, we introduce a low-dimensional representation of endoscopic videos based on a manifold learning approach. The introduced endoscopic video manifolds (EVMs) enable the clustering of poor-quality frames and the grouping of different segments of the GI endoscopic video in an unsupervised manner to facilitate subsequent visual assessment. We present two novel inter-frame similarity measures for manifold learning to create structured manifolds from complex endoscopic videos. Our experiments demonstrate that the proposed method yields high precision and recall for uninformative frame detection (90.91% and 82.90%, respectively) and results in well-structured manifolds for scene clustering.
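The reported precision and recall for uninformative-frame detection follow the standard definitions, computable directly from the sets of predicted and ground-truth frame indices:

```python
def precision_recall(predicted, actual):
    # predicted, actual: sets of frame indices labelled uninformative by
    # the method and by the ground truth, respectively.
    # precision = TP / (TP + FP); recall = TP / (TP + FN).
    tp = len(predicted & actual)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall
```

High precision here means few informative frames are wrongly discarded, which matters clinically since discarded frames are never reviewed.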
|