1
Chen S, Zhang Z. A Semi-Automatic Magnetic Resonance Imaging Annotation Algorithm Based on Semi-Weakly Supervised Learning. Sensors (Basel) 2024; 24:3893. [PMID: 38931677] [PMCID: PMC11207229] [DOI: 10.3390/s24123893]
Abstract
The annotation of magnetic resonance imaging (MRI) images plays an important role in deep learning-based MRI segmentation tasks. Semi-automatic annotation algorithms help improve the efficiency and reduce the difficulty of MRI image annotation. However, existing deep learning-based semi-automatic annotation algorithms pre-annotate poorly when segmentation labels are scarce. In this paper, we propose a semi-automatic MRI annotation algorithm based on semi-weakly supervised learning. To achieve better pre-annotation with few segmentation labels, we introduce semi-supervised and weakly supervised learning and propose a semi-weakly supervised segmentation algorithm based on sparse labels. In addition, to increase the contribution of each individual segmentation label to the performance of the pre-annotation model, we design an iterative annotation strategy based on active learning. Experiments on public MRI datasets show that the proposed algorithm matches the pre-annotation performance of a fully supervised algorithm while using far fewer segmentation labels, which demonstrates its effectiveness.
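A useful mental model for the iterative, active-learning-driven annotation strategy described above is a loop in which the current model pre-annotates, a human corrects only the most uncertain cases, and the model is retrained. The sketch below is illustrative only, not the authors' implementation; the stub functions (train, uncertainty, predict, human_correct) are hypothetical placeholders.

```python
import random

def train(model, labeled):    # placeholder: retrain the segmenter on the labeled pool
    return model

def uncertainty(model, x):    # placeholder: e.g. mean pixel-wise entropy of predictions
    return random.random()

def predict(model, x):        # placeholder: produce a pre-annotation mask
    return "mask"

def human_correct(mask):      # placeholder: annotator edits the pre-annotation
    return mask

def active_annotation_loop(model, labeled, unlabeled, budget, k=10):
    """Each round, route the k most uncertain cases to the annotator,
    add the corrected masks to the labeled pool, and retrain."""
    while budget > 0 and unlabeled:
        model = train(model, labeled)
        ranked = sorted(unlabeled, key=lambda x: uncertainty(model, x), reverse=True)
        for x in ranked[:k]:
            if budget == 0:
                break
            labeled.append((x, human_correct(predict(model, x))))
            unlabeled.remove(x)
            budget -= 1
    return model
```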
Affiliation(s)
- Shaolong Chen
- School of Sino-German Intelligent Manufacturing, Shenzhen City Polytechnic, Shenzhen 518000, China;
- School of Electronics and Communication Engineering, Sun Yat-Sen University, Shenzhen 518000, China
- Zhiyong Zhang
- School of Electronics and Communication Engineering, Sun Yat-Sen University, Shenzhen 518000, China
2
Sahafi A, Koulaouzidis A, Lalinia M. Polypoid Lesion Segmentation Using YOLO-V8 Network in Wireless Video Capsule Endoscopy Images. Diagnostics (Basel) 2024; 14:474. [PMID: 38472946] [DOI: 10.3390/diagnostics14050474]
Abstract
Gastrointestinal (GI) tract disorders are a significant public health issue. They are becoming more common and can cause serious health problems and high healthcare costs. Small bowel tumours (SBTs) and colorectal cancer (CRC) are both becoming more prevalent, especially among younger adults. Early detection and removal of polyps (precursors of malignancy) is essential for prevention. Wireless Capsule Endoscopy (WCE) is a procedure that utilises swallowable camera devices that capture images of the GI tract. Because WCE generates a large number of images, automated polyp segmentation is crucial. This paper reviews computer-aided approaches to polyp detection using WCE imagery and evaluates them using a dataset of labelled anomalies and findings. The study focuses on YOLO-V8, an improved deep learning model, for polyp segmentation and finds that it performs better than existing methods, achieving high precision and recall. The present study underscores the potential of automated detection systems in improving GI polyp identification.
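For orientation, instance segmentation with a YOLOv8 model typically takes only a few lines with the ultralytics package; the checkpoint name, frame path, and confidence threshold below are illustrative assumptions, not the configuration used in the paper.

```python
from ultralytics import YOLO

# Illustrative: generic pretrained segmentation weights, not the
# WCE-polyp-trained model evaluated in the paper.
model = YOLO("yolov8n-seg.pt")

results = model.predict("wce_frame.jpg", conf=0.5)  # hypothetical frame path
for r in results:
    print(r.boxes.xyxy)              # bounding boxes of detected lesions
    if r.masks is not None:
        print(r.masks.data.shape)    # (num_instances, H, W) segmentation masks
```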
Affiliation(s)
- Ali Sahafi
- Department of Mechanical and Electrical Engineering, Digital and High-Frequency Electronics Section, University of Southern Denmark, 5230 Odense, Denmark
- Anastasios Koulaouzidis
- Surgical Research Unit, Odense University Hospital, 5000 Svendborg, Denmark
- Department of Clinical Research, University of Southern Denmark, 5230 Odense, Denmark
- Department of Medicine, OUH Svendborg Sygehus, 5700 Svendborg, Denmark
- Department of Social Medicine and Public Health, Pomeranian Medical University, 70204 Szczecin, Poland
- Mehrshad Lalinia
- Department of Mechanical and Electrical Engineering, Digital and High-Frequency Electronics Section, University of Southern Denmark, 5230 Odense, Denmark
3
Müller L, Tibyampansha D, Mildenberger P, Panholzer T, Jungmann F, Halfmann MC. Convolutional neural network-based kidney volume estimation from low-dose unenhanced computed tomography scans. BMC Med Imaging 2023; 23:187. [PMID: 37968580] [PMCID: PMC10648730] [DOI: 10.1186/s12880-023-01142-y]
Abstract
PURPOSE Kidney volume is important in the management of renal diseases. Unfortunately, the currently available, semi-automated kidney volume determination is time-consuming and prone to errors. Recent advances in its automation are promising but mostly require contrast-enhanced computed tomography (CT) scans. This study aimed at establishing an automated estimation of kidney volume in non-contrast, low-dose CT scans of patients with suspected urolithiasis. METHODS The kidney segmentation process was automated with 2D convolutional neural network (CNN) models trained on manually segmented 2D transverse images extracted from low-dose, unenhanced CT scans of 210 patients. The models' segmentation accuracy was assessed using the Dice Similarity Coefficient (DSC), which measures overlap with manually generated masks, on a set of images not used in training. Next, the models were applied to 22 previously unseen cases to segment kidney regions. The volume of each kidney was calculated as the product of the number of voxels in each segmented mask and the volume of a single voxel. Kidney volume results were then validated against results obtained semi-automatically by radiologists. RESULTS The CNN-enabled kidney volume estimation took a mean of 32 s for both kidneys in a CT scan with an average of 1026 slices. The DSC was 0.91 for the left and 0.86 for the right kidney. Inter-rater comparison between the CNN-enabled and semi-automated volume estimations yielded ICCs of 0.92 (left) and 0.89 (right) for consistency, and 0.93 (left) and 0.89 (right) for absolute agreement. CONCLUSION In our work, we demonstrated that CNN-enabled kidney volume estimation is feasible and highly reproducible in low-dose, non-enhanced CT scans. Automatic segmentation can thereby quantitatively enhance radiological reports.
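The two quantities at the heart of this evaluation are short to write down. A minimal sketch (not the authors' pipeline), assuming binary NumPy masks and a voxel-spacing triple taken from the CT header; the spacing values shown are illustrative:

```python
import numpy as np

def dice(pred, ref):
    """Dice Similarity Coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def kidney_volume_ml(mask, spacing_mm=(0.8, 0.8, 1.0)):
    """Volume as (number of segmented voxels) x (volume of one voxel)."""
    voxel_mm3 = float(np.prod(spacing_mm))   # spacing values are illustrative
    return mask.sum() * voxel_mm3 / 1000.0   # mm^3 -> mL
```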
Affiliation(s)
- Lukas Müller
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131 Mainz, Germany
- Dativa Tibyampansha
- Institute of Medical Biostatistics, Epidemiology and Informatics, University Medical Center of the Johannes Gutenberg University Mainz, Obere Zahlbacher Str. 69, 55131 Mainz, Germany
- Peter Mildenberger
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131 Mainz, Germany
- Torsten Panholzer
- Institute of Medical Biostatistics, Epidemiology and Informatics, University Medical Center of the Johannes Gutenberg University Mainz, Obere Zahlbacher Str. 69, 55131 Mainz, Germany
- Florian Jungmann
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131 Mainz, Germany
- Moritz C Halfmann
- Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Langenbeckstr. 1, 55131 Mainz, Germany
4
Basalamah S, Felemban E, Khan SD, Naseer A, Rehman FU. Deep Learning Framework for Congestion Detection at Public Places via Learning from Synthetic Data. Journal of King Saud University - Computer and Information Sciences 2022. [DOI: 10.1016/j.jksuci.2022.11.005]
5
Magherini R, Mussi E, Volpe Y, Furferi R, Buonamici F, Servi M. Machine Learning for Renal Pathologies: An Updated Survey. Sensors (Basel) 2022; 22:4989. [PMID: 35808481] [PMCID: PMC9269842] [DOI: 10.3390/s22134989]
Abstract
Within the literature on modern machine learning techniques applied to the medical field, there is growing interest in applications to nephrology, especially the study of renal pathologies: these are very common and widespread in our society, afflict a high percentage of the population, and lead to various complications, up to and including death in some cases. For these reasons, the authors collected, using one of the major bibliographic databases, and analyzed the studies carried out up to February 2022 on the use of machine learning techniques in nephrology, grouping them by the pathology addressed: renal masses, acute kidney injury, chronic kidney disease, kidney stones, glomerular disease, kidney transplantation, and other, less widespread conditions. Of a total of 224 studies, 59 met the inclusion and exclusion criteria of this review, considering the method used and the type of data available. The study shows a growing trend in, and interest toward, machine learning applications in nephrology, which are becoming an additional tool that can enable physicians to make more accurate and faster diagnoses. A major limitation remains: the difficulty of creating public databases that the scientific community can use to corroborate results and make a positive contribution in this area.
6
Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. Multimedia Tools and Applications 2022; 81:25877-25911. [PMID: 35350630] [PMCID: PMC8948453] [DOI: 10.1007/s11042-022-12100-1]
Abstract
Medical imaging refers to several different technologies used to view the human body in order to diagnose, monitor, or treat medical conditions. Significant expertise is required to efficiently and correctly interpret the images generated by each of these technologies, which include radiography, ultrasound, and magnetic resonance imaging, among others. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of their graphical user interfaces (GUIs) and supporting instruments. The main contribution of this study is an intensive review of popular annotation tools and a demonstration of their successful use in annotating medical imaging datasets, to guide researchers in this area.
Affiliation(s)
- Manar Aljabri
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, University of Miami Miller School of Medicine, Miami, FL, USA
7
Improving Robot Perception Skills Using a Fast Image-Labelling Method with Minimal Human Intervention. Applied Sciences (Basel) 2022; 12:1557. [DOI: 10.3390/app12031557]
Abstract
Robot perception skills contribute to natural interfaces that enhance human–robot interaction, and they can be notably improved by using convolutional neural networks. To train a convolutional neural network, labelling is the crucial first stage, in which image objects are marked with rectangles or masks. Many image-labelling tools exist, but all require human interaction to achieve good results; manual labelling with rectangles or masks is labor-intensive, tedious work that can take months to complete. This paper proposes a fast method to create labelled images with minimal human intervention, tested on a robot perception task. Images of objects taken against specific backgrounds are quickly and accurately labelled with rectangles or masks. In a second step, the detected objects can be composited onto different backgrounds to improve the training value of the image set. Experimental results show the effectiveness of the method on an example of human–robot interaction using hand fingers: the labelling method generates a database for training convolutional networks to detect hand fingers with minimal labelling work. The method can be applied to new image sets or used to add new samples to existing labelled image sets of any application, noticeably improving the labelling process and reducing the time required to start training a convolutional neural network model.
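The core trick (objects photographed against a known background can be masked by simple differencing, then composited onto new backgrounds) can be sketched in a few lines of OpenCV. This is a plausible reading of the approach, not the paper's exact method; the threshold and kernel size are assumptions.

```python
import cv2
import numpy as np

def auto_label(img_bgr, bg_bgr, thresh=30):
    """Mask an object shot against a known background by thresholding
    the per-pixel difference, then derive a rectangle label."""
    diff = cv2.absdiff(img_bgr, bg_bgr).max(axis=2)
    mask = (diff > thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    x, y, w, h = cv2.boundingRect(mask)      # rectangle annotation
    return mask, (x, y, w, h)

def composite(img_bgr, mask, new_bg_bgr):
    """Paste the masked object onto a different background to grow the set."""
    keep = (mask > 0)[..., None]
    return np.where(keep, img_bgr, new_bg_bgr)
```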
8
Korzynska A, Roszkowiak L, Zak J, Siemion K. A review of current systems for annotation of cell and tissue images in digital pathology. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.04.012]
9
Beardslee LA, Banis GE, Chu S, Liu S, Chapin AA, Stine JM, Pasricha PJ, Ghodssi R. Ingestible Sensors and Sensing Systems for Minimally Invasive Diagnosis and Monitoring: The Next Frontier in Minimally Invasive Screening. ACS Sens 2020; 5:891-910. [PMID: 32157868] [DOI: 10.1021/acssensors.9b02263]
Abstract
Ingestible electronic systems that are capable of embedded sensing, particularly within the gastrointestinal (GI) tract and its accessory organs, have the potential to screen for diseases that are difficult if not impossible to detect at an early stage using other means. Furthermore, these devices have the potential to (1) reduce labor and facility costs for a variety of procedures, (2) promote research for discovering new biomarker targets for associated pathologies, (3) promote the development of autonomous or semiautonomous diagnostic aids for consumers, and (4) provide a foundation for epithelially targeted therapeutic interventions. These technological advances have the potential to make disease surveillance and treatment far more effective for a variety of conditions, allowing patients to lead longer and more productive lives. This review will examine the conventional techniques, as well as ingestible sensors and sensing systems that are currently under development for use in disease screening and diagnosis for GI disorders. Design considerations, fabrication, and applications will be discussed.
Affiliation(s)
- Luke A. Beardslee
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, United States
- George E. Banis
- Fischell Department of Bioengineering, University of Maryland, College Park, Maryland 20742, United States
- Sangwook Chu
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, United States
- Sanwei Liu
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, United States
- Ashley A. Chapin
- Fischell Department of Bioengineering, University of Maryland, College Park, Maryland 20742, United States
- Justin M. Stine
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, United States
- Pankaj Jay Pasricha
- Department of Medicine, Johns Hopkins University, Baltimore, Maryland 21205, United States
- Reza Ghodssi
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, United States
- Fischell Department of Bioengineering, University of Maryland, College Park, Maryland 20742, United States
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, United States
10
GTCreator: a flexible annotation tool for image-based datasets. Int J Comput Assist Radiol Surg 2018; 14:191-201. [PMID: 30255462] [DOI: 10.1007/s11548-018-1864-x]
Abstract
PURPOSE Methodology evaluation for decision support systems for health is a time-consuming task. To assess the performance of polyp detection methods in colonoscopy videos, clinicians have to deal with the annotation of thousands of images. Existing tools could be improved in terms of flexibility and ease of use. METHODS We introduce GTCreator, a flexible annotation tool for providing image and text annotations to image-based datasets. It keeps the main basic functionalities of other similar tools while extending other capabilities, such as allowing multiple annotators to work simultaneously on the same task, enhanced dataset browsing, and easy annotation transfer, aiming to speed up annotation in large datasets. RESULTS The comparison with other similar tools shows that GTCreator allows fast and precise annotation of image datasets, and it is the only one offering full annotation editing and browsing capabilities. CONCLUSION Our proposed annotation tool has proven efficient for large image dataset annotation, and it shows potential for use in other stages of method evaluation, such as experimental setup or results analysis.
11
Weakly supervised multilabel classification for semantic interpretation of endoscopy video frames. Evolving Systems 2018. [DOI: 10.1007/s12530-018-9236-x]
12
Vasilakakis MD, Iakovidis DK, Spyrou E, Chatzis D, Koulaouzidis A. Beyond Lesion Detection: Towards Semantic Interpretation of Endoscopy Videos. 2017. [DOI: 10.1007/978-3-319-65172-9_32]
13
Ullah H, Uzair M, Ullah M, Khan A, Ahmad A, Khan W. Density independent hydrodynamics model for crowd coherency detection. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.02.023]
14
Koulaouzidis A, Iakovidis DK, Yung DE, Rondonotti E, Kopylov U, Plevris JN, Toth E, Eliakim A, Wurm Johansson G, Marlicz W, Mavrogenis G, Nemeth A, Thorlacius H, Tontini GE. KID Project: an internet-based digital video atlas of capsule endoscopy for research purposes. Endosc Int Open 2017; 5:E477-E483. [PMID: 28580415] [PMCID: PMC5452962] [DOI: 10.1055/s-0043-105488]
Abstract
BACKGROUND AND AIMS Capsule endoscopy (CE) has revolutionized small-bowel (SB) investigation. Computational methods can enhance diagnostic yield (DY); however, incorporating machine learning algorithms (MLAs) into CE reading is difficult, as large amounts of image annotations are required for training. Current databases lack graphic annotations of pathologies and cannot be used. A novel database, KID, aims to provide a reference for research and development of medical decision support systems (MDSS) for CE. METHODS Open-source software was used for the KID database. Clinicians contribute anonymized, annotated CE images and videos. Graphic annotations are supported by an open-access annotation tool (Ratsnake). We detail an experiment based on the KID database, examining differences in SB lesion measurement between human readers and an MLA. The Jaccard Index (JI) was used to evaluate the similarity between annotations by the MLA and human readers. RESULTS The MLA performed best in measuring lymphangiectasias, with a JI of 81 ± 6 %. The other lesion types were: angioectasias (JI 64 ± 11 %), aphthae (JI 64 ± 8 %), chylous cysts (JI 70 ± 14 %), polypoid lesions (JI 75 ± 21 %), and ulcers (JI 56 ± 9 %). CONCLUSION An MLA can perform as well as human readers in the measurement of SB angioectasias in white light (WL). Automated lesion measurement is therefore feasible. KID is currently the only open-source CE database developed specifically to aid the development of MDSS. Our experiment demonstrates this potential.
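The Jaccard Index used for the reader-versus-algorithm comparison is simply intersection over union of the two annotation masks; a minimal NumPy sketch (illustrative, not the KID project's evaluation code):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard Index (intersection over union) of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```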
Affiliation(s)
- Anastasios Koulaouzidis
- Centre for Liver and Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK. Corresponding author: Anastasios Koulaouzidis, MD, FECG, FACG, FASGE, The Royal Infirmary of Edinburgh, Endoscopy Unit, 51 Little France Crescent, Edinburgh EH16 4SA, UK
- Dimitris K. Iakovidis
- University of Thessaly, Department of Computer Science and Biomedical Informatics, Volos, Thessaly, Greece
- Diana E. Yung
- Centre for Liver and Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK
- Uri Kopylov
- Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- John N. Plevris
- Centre for Liver and Digestive Disorders, The Royal Infirmary of Edinburgh, Edinburgh, UK
- Ervin Toth
- Department of Gastroenterology, Skåne University Hospital, Lund University, Malmö, Sweden
- Abraham Eliakim
- Department of Gastroenterology, Sheba Medical Center, Tel Hashomer, and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Wojciech Marlicz
- Department of Gastroenterology, Pomeranian Medical University, Szczecin, Poland
- Artur Nemeth
- Department of Gastroenterology, Skåne University Hospital, Lund University, Malmö, Sweden
- Gian Eugenio Tontini
- Gastroenterology and Digestive Endoscopy Unit, IRCCS Policlinico San Donato, Milan, Italy
15
Deeba F, Mohammed SK, Bui FM, Wahid KA. Efficacy Evaluation of SAVE for the Diagnosis of Superficial Neoplastic Lesion. IEEE J Transl Eng Health Med 2017; 5:1800312. [PMID: 28560120] [PMCID: PMC5444410] [DOI: 10.1109/jtehm.2017.2691339]
Abstract
The detection of non-polypoid superficial neoplastic lesions using the current standard of white light endoscopy surveillance and random biopsy is associated with a high miss rate. The subtle mucosal changes caused by flat and depressed neoplasms often go undetected and do not qualify for further investigation, e.g., biopsy and resection, thus increasing the risk of cancer advancement. This paper presents a screening tool named the saliency-aided visual enhancement (SAVE) method, with the objective of highlighting abnormalities in endoscopic images to detect early lesions. SAVE is a hybrid system combining image enhancement and saliency detection. The method provides both qualitative enhancement and a quantitative suspicion index for endoscopic image regions. A study was performed to evaluate the efficacy of SAVE in localizing superficial neoplastic lesions. An average overlap index >0.7 indicated that SAVE successfully localized the lesion areas. The area under the receiver-operating characteristic curve obtained for SAVE was 94.91%. A very high sensitivity (100%) was achieved with moderate specificity (65.45%). Visual inspection showed performance comparable to chromoendoscopy in highlighting mucosal irregularities. This paper suggests that SAVE could be a potential screening tool that substitutes for the burdensome chromoendoscopy technique. The SAVE method, as a simple, easy-to-use, highly sensitive, and consistent red-flag technology, will be useful for the early detection of neoplasms in clinical applications.
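As a toy illustration of a "quantitative suspicion index", one can score each image region by its mean color saliency; the Achanta-style saliency below (distance of each pixel's Lab colour from the image mean) is a generic stand-in, not the SAVE method itself.

```python
import cv2
import numpy as np

def suspicion_index(img_bgr, region_mask):
    """Mean colour saliency inside a region, as a crude suspicion score."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    sal = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=2)
    sal /= sal.max() + 1e-8                  # normalise to [0, 1]
    return float(sal[region_mask > 0].mean())
```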
16
Iakovidis DK, Chatzis D, Chrysanthopoulos P, Koulaouzidis A. Blood detection in wireless capsule endoscope images based on salient superpixels. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:731-4. [PMID: 26736366] [DOI: 10.1109/embc.2015.7318466]
17
Iakovidis DK, Koulaouzidis A. Software for enhanced video capsule endoscopy: challenges for essential progress. Nat Rev Gastroenterol Hepatol 2015; 12:172-86. [PMID: 25688052] [DOI: 10.1038/nrgastro.2015.13]
Abstract
Video capsule endoscopy (VCE) has revolutionized the diagnostic work-up in the field of small bowel diseases. Furthermore, VCE has the potential to become the leading screening technique for the entire gastrointestinal tract. Computational methods that can be implemented in software can enhance the diagnostic yield of VCE both in terms of efficiency and diagnostic accuracy. Since the appearance of the first capsule endoscope in clinical practice in 2001, information technology (IT) research groups have proposed a variety of such methods, including algorithms for detecting haemorrhage and lesions, reducing the reviewing time, localizing the capsule or lesion, assessing intestinal motility, enhancing the video quality and managing the data. Even though research is prolific (as measured by publication activity), the progress made during the past 5 years can only be considered marginal with respect to clinically significant outcomes. One thing is clear: parallel pathways of medical and IT scientists exist, each publishing in their own area, but where do these research pathways meet? Could the proposed IT plans have any clinical effect, and do clinicians really understand the limitations of VCE software? In this Review, we present an in-depth critical analysis that aims to inspire and align the agendas of the two scientific groups.
Affiliation(s)
- Dimitris K Iakovidis
- Department of Computer Engineering, Technological Educational Institute of Central Greece, 3rd Km Old National Road Lamia-Athens, Lamia PC 35 100, Greece
- Anastasios Koulaouzidis
- The Royal Infirmary of Edinburgh, Endoscopy Unit, 51 Little France Crescent, Old Dalkeith Road, Edinburgh EH16 4SA, UK
18
Computer-assisted segmentation of videocapsule images using alpha-divergence-based active contour in the framework of intestinal pathologies detection. Int J Biomed Imaging 2014; 2014:428583. [PMID: 25587264] [PMCID: PMC4281406] [DOI: 10.1155/2014/428583]
Abstract
Visualization of the entire length of the gastrointestinal tract through natural orifices is a challenge for endoscopists. Videoendoscopy is currently the “gold standard” technique for diagnosis of different pathologies of the intestinal tract. Wireless capsule endoscopy (WCE) was developed in the 1990s as an alternative to videoendoscopy, allowing direct examination of the gastrointestinal tract without any need for sedation. Nevertheless, the specialist's systematic review of the 50,000 (small bowel) to 150,000 (colon) images of a complete WCE acquisition remains time-consuming and challenging due to the poor quality of WCE images. In this paper, a semiautomatic segmentation method for the analysis of WCE images is proposed. Based on active-contour segmentation, the proposed method introduces alpha-divergences, a flexible statistical similarity measure that provides real flexibility across different types of gastrointestinal pathologies. Segmentation results using the proposed approach are shown on different types of real-case examinations, from (multi)polyp segmentation to radiation enteritis delineation.
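For reference, the alpha-divergence family that drives the active-contour energy can be written, in one common parameterization for discrete distributions p and q, as D_alpha(p||q) = (sum_i p_i^alpha * q_i^(1-alpha) - 1) / (alpha * (alpha - 1)), which tends to KL(p||q) as alpha -> 1 and to KL(q||p) as alpha -> 0. A standalone sketch of just the divergence (the segmentation energy around it is not reproduced here, and this parameterization may differ from the paper's):

```python
import numpy as np

def alpha_divergence(p, q, alpha=0.5, eps=1e-12):
    """Alpha-divergence between two discrete distributions (one common
    parameterization); alpha tunes sensitivity between the two KL limits."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return (np.sum(p**alpha * q**(1.0 - alpha)) - 1.0) / (alpha * (alpha - 1.0))
```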
19
Automatic lesion detection in capsule endoscopy based on color saliency: closer to an essential adjunct for reviewing software. Gastrointest Endosc 2014; 80:877-83. [PMID: 25088924] [DOI: 10.1016/j.gie.2014.06.026]
Abstract
BACKGROUND The advent of wireless capsule endoscopy (WCE) has revolutionized the diagnostic approach to small-bowel disease. However, the task of reviewing WCE video sequences is laborious and time-consuming; software tools offering automated video analysis would enable a timelier and potentially a more accurate diagnosis. OBJECTIVE To assess the validity of innovative, automatic lesion-detection software in WCE. DESIGN/INTERVENTION A color feature-based pattern recognition methodology was devised and applied to the aforementioned image group. SETTING This study was performed at the Royal Infirmary of Edinburgh, United Kingdom, and the Technological Educational Institute of Central Greece, Lamia, Greece. MATERIALS A total of 137 deidentified WCE single images, 77 showing pathology and 60 normal images. RESULTS The proposed methodology, unlike state-of-the-art approaches, is capable of detecting several different types of lesions. The average performance, in terms of the area under the receiver-operating characteristic curve, reached 89.2 ± 0.9%. The best average performance was obtained for angiectasias (97.5 ± 2.4%) and nodular lymphangiectasias (96.3 ± 3.6%). LIMITATIONS Single expert for annotation of pathologies, single type of WCE model, use of single images instead of entire WCE videos. CONCLUSION A simple, yet effective, approach allowing automatic detection of all types of abnormalities in capsule endoscopy is presented. Based on color pattern recognition, it outperforms previous state-of-the-art approaches. Moreover, it is robust in the presence of luminal contents and is capable of detecting even very small lesions.
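The reported area-under-ROC evaluation corresponds to a one-liner given per-image scores and labels; a sketch assuming scikit-learn, with random numbers standing in for the detector's per-image saliency scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([1] * 77 + [0] * 60)   # 77 pathology and 60 normal images, as in the study
scores = rng.random(137)                 # placeholder for per-image detector scores
print(f"AUC: {roc_auc_score(labels, scores):.3f}")
```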