1. Huang C, Shi Y, Zhang B, Lyu K. Uncertainty-aware prototypical learning for anomaly detection in medical images. Neural Netw 2024;175:106284. [PMID: 38593560] [DOI: 10.1016/j.neunet.2024.106284]
Abstract
Anomalous object detection (AOD) in medical images aims to recognize anomalous lesions and is crucial for the early clinical diagnosis of various cancers. It is a difficult task for two reasons: (1) the diversity of anomalous lesions and (2) the ambiguity of the boundary between anomalous lesions and their normal surroundings. Unlike existing single-modality AOD models based on deterministic mapping, we constructed a probabilistic and deterministic AOD model. Specifically, we designed an uncertainty-aware prototype learning framework, which considers the diversity and ambiguity of anomalous lesions. A prototypical learning transformer (Pformer) is established to extract and store the prototype features of different anomalous lesions. Moreover, a Bayesian neural uncertainty quantizer, a probabilistic model, is designed to model the distributions over the outputs of the model and measure the uncertainty of its detection result for each pixel. Essentially, the uncertainty of the model's anomaly detection result for a pixel reflects the anomalous ambiguity of that pixel. Furthermore, an uncertainty-guided reasoning transformer (Uformer) is devised to exploit this anomalous ambiguity, encouraging the proposed model to focus on pixels with high uncertainty. Notably, the prototypical representations stored in Pformer are also utilized in anomaly reasoning, which enables the model to perceive the diversity of anomalous objects. Extensive experiments on five benchmark datasets demonstrate the superiority of the proposed method. The source code will be available at github.com/umchaohuang/UPformer.
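
The per-pixel uncertainty idea mentioned above can be illustrated with a generic Monte Carlo dropout sketch in Python. This is not the paper's Bayesian neural uncertainty quantizer or its Pformer/Uformer architecture; the toy network, dropout rate, and number of stochastic passes are placeholder assumptions used only to show how per-pixel uncertainty maps are commonly obtained.

```python
import torch
import torch.nn as nn

# Toy segmentation network (placeholder, not the paper's model).
seg_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.2),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),
)

def mc_uncertainty(net, image, n_samples=20):
    """Per-pixel mean prediction and variance from stochastic forward passes."""
    net.train()                                # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([net(image) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)

mean_map, uncertainty_map = mc_uncertainty(seg_net, torch.rand(1, 3, 128, 128))
```
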
Affiliation(s)
- Chao Huang: PAMI Research Group, Department of Computer and Information Science, University of Macau, Taipa, 519000, Macao Special Administrative Region of China; Shenzhen Campus of Sun Yat-sen University, School of Cyber Science and Technology, Shenzhen, 518107, China
- Yushu Shi: Shenzhen Campus of Sun Yat-sen University, School of Cyber Science and Technology, Shenzhen, 518107, China
- Bob Zhang: PAMI Research Group, Department of Computer and Information Science, University of Macau, Taipa, 519000, Macao Special Administrative Region of China
- Ke Lyu: School of Engineering Sciences, University of the Chinese Academy of Sciences, Beijing, 100049, China; Pengcheng Laboratory, Shenzhen, 518055, China

2. Wang H, Hu T, Zhang Y, Zhang H, Qi Y, Wang L, Ma J, Du M. Unveiling camouflaged and partially occluded colorectal polyps: Introducing CPSNet for accurate colon polyp segmentation. Comput Biol Med 2024;171:108186. [PMID: 38394804] [DOI: 10.1016/j.compbiomed.2024.108186]
Abstract
BACKGROUND: Segmenting colorectal polyps presents a significant challenge due to the diverse variations in their size, shape, texture, and intricate backgrounds. Particularly demanding are the so-called "camouflaged" polyps, which are partially concealed by surrounding tissues or fluids, adding complexity to their detection.
METHODS: We present CPSNet, an innovative model designed for camouflaged polyp segmentation. CPSNet incorporates three key modules: the Deep Multi-Scale-Feature Fusion Module, the Camouflaged Object Detection Module, and the Multi-Scale Feature Enhancement Module. These modules work collaboratively to improve the segmentation process, enhancing both robustness and accuracy.
RESULTS: Our experiments confirm the effectiveness of CPSNet. When compared to state-of-the-art methods in colon polyp segmentation, CPSNet consistently outperforms the competition. Particularly noteworthy is its performance on the ETIS-LaribPolypDB dataset, where CPSNet achieved a remarkable 2.3% increase in the Dice coefficient compared to the Polyp-PVT model.
CONCLUSION: In summary, CPSNet marks a significant advancement in the field of colorectal polyp segmentation. Its innovative approach, encompassing multi-scale feature fusion, camouflaged object detection, and feature enhancement, holds considerable promise for clinical applications.
Affiliation(s)
- Huafeng Wang: School of Information Technology, North China University of Technology, Beijing 100041, China
- Tianyu Hu: School of Information Technology, North China University of Technology, Beijing 100041, China
- Yanan Zhang: School of Information Technology, North China University of Technology, Beijing 100041, China
- Haodu Zhang: School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510335, China
- Yong Qi: School of Information Technology, North China University of Technology, Beijing 100041, China
- Longzhen Wang: Department of Gastroenterology, Second People's Hospital, Changzhi, Shanxi 046000, China
- Jianhua Ma: School of Biomedical Engineering, Southern Medical University, Guangzhou 510335, China
- Minghua Du: Department of Emergency, PLA General Hospital, Beijing 100853, China

3. Zhu S, Gao J, Liu L, Yin M, Lin J, Xu C, Xu C, Zhu J. Public Imaging Datasets of Gastrointestinal Endoscopy for Artificial Intelligence: a Review. J Digit Imaging 2023;36:2578-2601. [PMID: 37735308] [PMCID: PMC10584770] [DOI: 10.1007/s10278-023-00844-7]
Abstract
With the advances in endoscopic technologies and artificial intelligence, a large number of endoscopic imaging datasets have been made public to researchers around the world. This study aims to review and introduce these datasets. An extensive literature search was conducted to identify appropriate datasets in PubMed, and other targeted searches were conducted in GitHub, Kaggle, and Simula to identify datasets directly. We provided a brief introduction to each dataset and evaluated the characteristics of the datasets included. Moreover, two national datasets in progress were discussed. A total of 40 datasets of endoscopic images were included, of which 34 were accessible for use. Basic and detailed information on each dataset was reported. Of all the datasets, 16 focus on polyps, and 6 focus on small bowel lesions. Most datasets (n = 16) were constructed by colonoscopy only, followed by normal gastrointestinal endoscopy and capsule endoscopy (n = 9). This review may facilitate the usage of public dataset resources in endoscopic research.
Affiliation(s)
- Shiqi Zhu, Jingwen Gao, Lu Liu, Minyue Yin, Jiaxi Lin, Chang Xu, Chunfang Xu, Jinzhou Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, Jiangsu, 215000, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215000, China

4. Jin Q, Hou H, Zhang G, Li Z. FEGNet: A Feedback Enhancement Gate Network for Automatic Polyp Segmentation. IEEE J Biomed Health Inform 2023;27:3420-3430. [PMID: 37126617] [DOI: 10.1109/jbhi.2023.3272168]
Abstract
Regular colonoscopy is an effective way to prevent colorectal cancer by detecting colorectal polyps. Automatic polyp segmentation significantly aids clinicians in precisely locating polyp areas for further diagnosis. However, polyp segmentation is a challenging problem, since polyps appear in a variety of shapes, sizes, and textures, and they tend to have ambiguous boundaries. In this paper, we propose a U-shaped model named Feedback Enhancement Gate Network (FEGNet) for accurate polyp segmentation to overcome these difficulties. Specifically, for the high-level features, we design a novel Recurrent Gate Module (RGM) based on the feedback mechanism, which can refine attention maps without any additional parameters. RGM consists of a Feature Aggregation Attention Gate (FAAG) and a Multi-Scale Module (MSM). FAAG can aggregate context and feedback information, and MSM is applied for capturing multi-scale information, which is critical for the segmentation task. In addition, we propose a straightforward but effective edge extraction module to detect the boundaries of polyps for low-level features, which is used to guide the training of early features. In our experiments, quantitative and qualitative evaluations show that the proposed FEGNet achieves the best results in polyp segmentation compared to other state-of-the-art models on five colonoscopy datasets.

5. Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023;97:184-199.e16. [PMID: 36084720] [DOI: 10.1016/j.gie.2022.08.043]
Abstract
BACKGROUND AND AIMS: Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the available number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy.
METHODS: A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess the accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging.
RESULTS: We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases.
CONCLUSIONS: This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen, Karlijn J Nass, Jasper L A Vleugels, Paul Fockens, Evelien Dekker: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel: Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands

6. Nisha JS, Gopi VP. Colorectal polyp detection in colonoscopy videos using image enhancement and discrete orthonormal Stockwell transform. Sādhanā 2022;47:234. [DOI: 10.1007/s12046-022-01970-8]

7. Nisha JS, Gopi VP, Palanisamy P. Colorectal polyp detection using image enhancement and Scaled YOLOv4 algorithm. Biomedical Engineering: Applications, Basis and Communications 2022;34. [DOI: 10.4015/s1016237222500260]
Abstract
Colorectal cancer (CRC) is a common cause of cancer-related death globally and is now the third leading cause of cancer-related mortality worldwide. As the number of instances of colorectal polyps rises, it is more important than ever to identify and diagnose them early. Object detection models have recently become popular for extracting highly representative features. Colonoscopy is shown to be a useful diagnostic procedure for examining anomalies in the lower half of the digestive system. This research presents a novel image-enhancing approach followed by a Scaled YOLOv4 network for the early diagnosis of polyps, lowering the high risk of CRC therapy. The proposed network is trained using the CVC ClinicDB database, while the CVC ColonDB and ETIS-Larib databases are used for testing. On the CVC ColonDB database, the performance metrics are precision (95.13%), recall (74.92%), F1-score (83.19%), and F2-score (89.89%). On the ETIS-Larib database, the performance metrics are precision (94.30%), recall (77.30%), F1-score (84.90%), and F2-score (80.20%). On both databases, the proposed methodology outperforms existing methods in terms of F1-score, F2-score, and precision. The proposed YOLO object detection model provides an accurate polyp detection strategy for real-time applications.
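
For reference, the F1 and F2 values quoted above are instances of the F-beta score (beta = 1 and beta = 2, with F2 weighting recall more heavily than precision). A minimal helper is sketched below with placeholder inputs rather than the paper's figures.

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-beta score; beta > 1 favours recall, beta < 1 favours precision."""
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)

# Example with placeholder values (not reproducing the reported results).
print(f_beta(0.90, 0.80, beta=1.0), f_beta(0.90, 0.80, beta=2.0))
```
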
Affiliation(s)
- J. S. Nisha, Varun P. Gopi, P. Palanisamy: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India

8. Double-Balanced Loss for Imbalanced Colorectal Lesion Classification. Comput Math Methods Med 2022;2022:1691075. [PMID: 35979050] [PMCID: PMC9377973] [DOI: 10.1155/2022/1691075]
Abstract
Colorectal cancer has a high incidence rate in countries around the world, and the survival rate of patients is improved by early detection. With the development of object detection technology based on deep learning, computer-aided diagnosis of colonoscopy medical images has become a reality, which can effectively reduce the occurrence of missed diagnosis and misdiagnosis. In medical image recognition, the assumption that training samples are independent and identically distributed (IID) is key to the high accuracy of deep learning. However, the classification of medical images is imbalanced in most cases. This paper proposes a new loss function, named the double-balanced loss function, for deep learning models, to mitigate the impact of dataset imbalance on classification accuracy. It introduces the effects of sample size and sample difficulty into the loss calculation and deals with both sample-size imbalance and sample-difficulty imbalance. It is combined with deep learning to build a medical diagnosis model for colorectal cancer. Experiments on three colorectal white-light endoscopic image datasets verify that the proposed double-balanced loss function performs better on the imbalanced classification problem of colorectal medical images.
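
A hedged PyTorch sketch of the general idea described above: weighting each sample's loss both by class frequency (sample-size imbalance) and by how hard the sample is (sample-difficulty imbalance, here a focal-style modulation). The abstract does not give the exact formulation, so this combination is an illustrative assumption rather than the paper's double-balanced loss.

```python
import torch
import torch.nn.functional as F

def size_and_difficulty_weighted_loss(logits, targets, class_counts, gamma=2.0):
    """Cross-entropy weighted by inverse class frequency and prediction difficulty."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                                    # confidence on the true class
    size_w = (class_counts.sum() / class_counts)[targets]   # inverse-frequency weight per sample
    size_w = size_w / size_w.mean()                         # normalise the weights
    return (size_w * (1.0 - p_t) ** gamma * ce).mean()      # down-weight easy samples

logits = torch.randn(16, 3)                                 # e.g. 3 colorectal lesion classes
targets = torch.randint(0, 3, (16,))
loss = size_and_difficulty_weighted_loss(logits, targets, torch.tensor([900.0, 80.0, 20.0]))
```
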
9. Nisha JS, Gopi VP, Palanisamy P. Classification of informative frames in colonoscopy video based on image enhancement and PHOG feature extraction. Biomedical Engineering: Applications, Basis and Communications 2022;34. [DOI: 10.4015/s1016237222500156]
Abstract
Colonoscopy allows doctors to check for abnormalities in the intestinal tract without any surgical operations. The major problem in the Computer-Aided Diagnosis (CAD) of colonoscopy images is the low-illumination condition of the images. This study aims to provide an image enhancement method together with feature extraction and classification techniques for detecting polyps in colonoscopy images. We propose a novel image enhancement method with a Pyramid Histogram of Oriented Gradients (PHOG) feature extractor to detect polyps in colonoscopy images. The approach is evaluated across different classifiers, such as Multi-Layer Perceptron (MLP), AdaBoost, Support Vector Machine (SVM), and Random Forest. The proposed method has been trained using the publicly available database CVC ClinicDB and tested on ETIS-Larib and CVC ColonDB. The proposed approach outperformed the existing state-of-the-art methods on both databases. The reliability of the classifiers' performance was examined by comparing their F1 score, precision, F2 score, recall, and accuracy. PHOG with the Random Forest classifier outperformed the existing methods, with a recall of 97.95%, precision of 98.46%, F1 score of 98.20%, F2 score of 98.00%, and accuracy of 98.21% on CVC-ColonDB. On the ETIS-Larib dataset it attained a recall of 96.83%, precision of 98.65%, F1 score of 97.73%, F2 score of 98.59%, and accuracy of 97.75%. We observed that the proposed image enhancement method with PHOG feature extraction and the Random Forest classifier can help doctors evaluate and analyze anomalies in colonoscopy data and make decisions quickly.
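
A rough Python sketch of a PHOG-style pipeline feeding a Random Forest, along the lines described above. True PHOG concatenates edge-orientation histograms over a spatial pyramid; here HOG descriptors computed from a resized image pyramid are used as a simplified stand-in, and the frames and labels are random placeholders, not the authors' data or settings.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier

def pyramid_hog(image, sizes=(64, 128, 256)):
    """Concatenate HOG descriptors computed at several pyramid scales."""
    feats = [hog(resize(image, (s, s)), orientations=8,
                 pixels_per_cell=(s // 4, s // 4), cells_per_block=(1, 1))
             for s in sizes]
    return np.concatenate(feats)

frames = [np.random.rand(256, 256) for _ in range(20)]   # grayscale frames (placeholder)
labels = np.random.randint(0, 2, 20)                     # 1 = polyp, 0 = normal
X = np.stack([pyramid_hog(f) for f in frames])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```
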
Affiliation(s)
- J. S. Nisha, Varun P. Gopi, P. Palanisamy: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India

10. Nisha J, Gopi VP, Palanisamy P. Automated colorectal polyp detection based on image enhancement and dual-path CNN architecture. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103465]

11. Nisha JS, Gopi VP, Palanisamy P. Automated polyp detection in colonoscopy videos using image enhancement and saliency detection algorithm. Biomedical Engineering: Applications, Basis and Communications 2022;34. [DOI: 10.4015/s1016237222500016]
Abstract
Colonoscopy has proven to be an effective diagnostic tool for examining anomalies in the lower half of the digestive system. This paper presents a Computer-Aided Detection (CAD) method for polyps in colonoscopy images that helps to diagnose the early stage of Colorectal Cancer (CRC). The proposed method consists primarily of image enhancement, followed by the creation of a saliency map, feature extraction using the Histogram of Oriented Gradients (HOG) feature extractor, and classification using a Support Vector Machine (SVM). We present an efficient image enhancement algorithm for highlighting clinically significant features in colonoscopy images. The proposed enhancement approach can improve the overall contrast and brightness by minimizing the effects of inconsistent illumination conditions. Detailed experiments have been conducted using the publicly available colonoscopy databases CVC ClinicDB, CVC ColonDB, and ETIS-Larib. The performance measures are precision (91.69%), recall (81.53%), F1-score (86.31%), and F2-score (89.45%) for the CVC ColonDB database, and precision (90.29%), recall (61.73%), F1-score (73.32%), and F2-score (82.64%) for the ETIS-Larib database. Comparison with state-of-the-art methods shows that the proposed approach surpasses existing ones in terms of precision, F1-score, and F2-score. The proposed enhancement with saliency-based selection significantly reduced the number of search windows, resulting in an efficient polyp detection algorithm.
Affiliation(s)
- J. S. Nisha, V. P. Gopi, P. Palanisamy: Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tamil Nadu, India

12. Wang S, Yin Y, Wang D, Lv Z, Wang Y, Jin Y. An interpretable deep neural network for colorectal polyp diagnosis under colonoscopy. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107568]

13. Yang X, Wei Q, Zhang C, Zhou K, Kong L, Jiang W. Colon Polyp Detection and Segmentation Based on Improved MRCNN. IEEE Trans Instrum Meas 2021;70:1-10. [DOI: 10.1109/tim.2020.3038011]

14. Qadir HA, Shin Y, Solhusvik J, Bergsland J, Aabakken L, Balasingham I. Toward real-time polyp detection using fully CNNs for 2D Gaussian shapes prediction. Med Image Anal 2020;68:101897. [PMID: 33260111] [DOI: 10.1016/j.media.2020.101897]
Abstract
To decrease the colon polyp miss rate during colonoscopy, a real-time detection system with high accuracy is needed. Recently, there have been many efforts to develop models for real-time polyp detection, but work is still required to develop real-time detection algorithms with reliable results. We use single-shot feed-forward fully convolutional neural networks (F-CNN) to develop an accurate real-time polyp detection system. F-CNNs are usually trained on binary masks for object segmentation. We propose the use of 2D Gaussian masks instead of binary masks to enable these models to detect different types of polyps more effectively and efficiently and to reduce the number of false positives. The experimental results showed that the proposed 2D Gaussian masks are efficient for detection of flat and small polyps with unclear boundaries between background and polyp parts. The masks provide a better training signal for discriminating polyps from polyp-like false positives. The proposed method achieved state-of-the-art results on two polyp datasets. On the ETIS-LARIB dataset we achieved 86.54% recall, 86.12% precision, and 86.33% F1-score, and on CVC-ColonDB we achieved 91% recall, 88.35% precision, and 89.65% F1-score.
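
The 2D Gaussian training targets described above can be sketched as follows; the bounding-box format and the sigma-to-box-size ratio are assumptions chosen for illustration, not the authors' exact settings.

```python
import numpy as np

def gaussian_mask(height, width, box):
    """Return a [0, 1] mask with a 2D Gaussian centred on a polyp bounding box."""
    x0, y0, x1, y1 = box                              # assumed (left, top, right, bottom)
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    sx = max((x1 - x0) / 4.0, 1.0)                    # spread roughly fills the box (assumption)
    sy = max((y1 - y0) / 4.0, 1.0)
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-(((xs - cx) ** 2) / (2 * sx ** 2) + ((ys - cy) ** 2) / (2 * sy ** 2)))

mask = gaussian_mask(256, 256, (80, 100, 150, 180))   # peaks at 1.0 at the box centre
```
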
Affiliation(s)
- Hemin Ali Qadir: Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway; OmniVision Technologies Norway AS, Oslo, Norway
- Younghak Shin: Department of Computer Engineering, Mokpo National University, Mokpo, Korea
- Lars Aabakken: Department of Transplantation Medicine, University of Oslo, Oslo, Norway
- Ilangko Balasingham: Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway

15. Feng D, Chen X, Zhou Z, Liu H, Wang Y, Bai L, Zhang S, Mou X. A Preliminary Study of Predicting Effectiveness of Anti-VEGF Injection Using OCT Images Based on Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:5428-5431. [PMID: 33019208] [DOI: 10.1109/embc44109.2020.9176743]
Abstract
Deep-learning-based radiomics has made great progress, for example in CNN-based diagnosis and U-Net-based segmentation. However, fewer studies have addressed the prediction of drug effectiveness with deep learning. Choroidal neovascularization (CNV) and cystoid macular edema (CME) are diseases that often lead to a sudden onset but progressive decline in central vision, and treatment using anti-vascular endothelial growth factor (anti-VEGF) may not be effective for some patients. Therefore, predicting the effectiveness of anti-VEGF for individual patients is important. With the development of Convolutional Neural Networks (CNNs) coupled with transfer learning, medical image classification has achieved great success. We used a method based on transfer learning to automatically predict the effectiveness of anti-VEGF from Optical Coherence Tomography (OCT) images acquired before medication. The method consists of image preprocessing, data augmentation, and CNN-based transfer learning, and the prediction AUC can exceed 0.8. We also compared the use of lesion-region images and full OCT images on this task. Experiments show that using full OCT images obtains better performance. Different deep neural networks such as AlexNet, VGG-16, GoogLeNet, and ResNet-50 were compared, and the modified ResNet-50 is more suitable for predicting the effectiveness of anti-VEGF. Clinical Relevance: This prediction model can estimate whether anti-VEGF is effective for patients with CNV or CME, which can help ophthalmologists make treatment plans.
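
A hedged sketch of the transfer-learning setup the abstract outlines: an ImageNet-pretrained ResNet-50 with its classification head replaced for a binary effective / not-effective prediction. Freezing the backbone and the two-class head are assumptions; the paper's exact modifications to ResNet-50 are not reproduced here.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-50 backbone.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():                     # freeze the pretrained backbone (assumption)
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # new trainable head: effective vs. not effective
```
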
16. Laiz P, Vitrià J, Wenzek H, Malagelada C, Azpiroz F, Seguí S. WCE polyp detection with triplet based embeddings. Comput Med Imaging Graph 2020;86:101794. [PMID: 33130417] [DOI: 10.1016/j.compmedimag.2020.101794]
Abstract
Wireless capsule endoscopy is a medical procedure used to visualize the entire gastrointestinal tract and to diagnose intestinal conditions, such as polyps or bleeding. Current analyses are performed by manually inspecting nearly every frame of the video, a tedious and error-prone task. Automatic image analysis methods can be used to reduce the time needed for physicians to evaluate a capsule endoscopy video; however, these methods are still in a research phase. In this paper we focus on computer-aided polyp detection in capsule endoscopy images. This is a challenging problem because of the diversity of polyp appearance, the imbalanced dataset structure, and the scarcity of data. We have developed a new polyp computer-aided decision system that combines a deep convolutional neural network and metric learning. The key point of the method is the use of the Triplet Loss function with the aim of improving feature extraction from the images when only a small dataset is available. The Triplet Loss function allows training robust detectors by forcing images from the same category to be represented by similar embedding vectors while ensuring that images from different categories are represented by dissimilar vectors. Empirical results show a meaningful increase in AUC values compared to state-of-the-art methods. Good performance is not the only requirement when considering the adoption of this technology in clinical practice; trust and explainability of decisions are as important as performance. With this purpose, we also provide a method to generate visual explanations of the outcome of our polyp detector. These explanations can be used to build a physician's trust in the system and also to convey information about the inner workings of the method to the designer for debugging purposes.
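
A minimal PyTorch sketch of the triplet-loss idea described above: embeddings of frames from the same class are pulled together while embeddings from different classes are pushed apart. The backbone, embedding size, margin, and the random tensors standing in for capsule-endoscopy frames are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

embedder = models.resnet18(weights=None)                 # placeholder backbone
embedder.fc = nn.Linear(embedder.fc.in_features, 128)    # 128-D embedding head
triplet_loss = nn.TripletMarginLoss(margin=0.2)

anchor = torch.randn(8, 3, 224, 224)      # polyp frames
positive = torch.randn(8, 3, 224, 224)    # other polyp frames (same class)
negative = torch.randn(8, 3, 224, 224)    # normal-mucosa frames

loss = triplet_loss(embedder(anchor), embedder(positive), embedder(negative))
loss.backward()
```
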
Affiliation(s)
- Pablo Laiz: Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain
- Jordi Vitrià: Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain
- Carolina Malagelada: Digestive System Research Unit, University Hospital Vall d'Hebron, Barcelona, Spain
- Fernando Azpiroz: Digestive System Research Unit, University Hospital Vall d'Hebron, Barcelona, Spain
- Santi Seguí: Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain

17. Rahim T, Usman MA, Shin SY. A survey on contemporary computer-aided tumor, polyp, and ulcer detection methods in wireless capsule endoscopy imaging. Comput Med Imaging Graph 2020;85:101767. [DOI: 10.1016/j.compmedimag.2020.101767]

18. Hwang M, Wang D, Kong XX, Wang Z, Li J, Jiang WC, Hwang KS, Ding K. An automated detection system for colonoscopy images using a dual encoder-decoder model. Comput Med Imaging Graph 2020;84:101763. [PMID: 32805673] [DOI: 10.1016/j.compmedimag.2020.101763]
Abstract
Conventional computer-aided detection systems (CADs) for colonoscopic images utilize shape, texture, or temporal information to detect polyps, so they have limited sensitivity and specificity. This study proposes a method to extract possible polyp features automatically using convolutional neural networks (CNNs). This work aims to build a lightweight dual encoder-decoder model for polyp detection in colonoscopy images. Although it has a relatively shallow structure, the proposed model is expected to perform similarly to methods with much deeper structures. The proposed CAD model consists of two sequential encoder-decoder networks made up of several CNN layers and fully connected layers. The front end of the model is a hetero-associator (also known as a hetero-encoder) that uses backpropagation learning to generate a set of reliably corrupted labeled images with a certain degree of similarity to a ground-truth image, which eliminates the need for the large amount of training data usually required for medical imaging tasks. This dual CNN architecture generates a set of noisy images that are similar to the labeled data to train its counterpart, the auto-associator (also known as an auto-encoder), in order to increase the successor's discriminative power in classification. The auto-encoder is also equipped with CNNs to simultaneously capture the features of the labeled images that contain noise. The proposed method uses features that are learned from open medical datasets and the dataset of Zhejiang University (ZJU), which contains around one thousand images. The performance of the proposed architecture is compared with a state-of-the-art detection model in terms of the Jaccard index, the DICE similarity score, and two other geometric measures. The improvements in the performance of the proposed model are attributed to the effective reduction of false positives in the auto-encoder and the generation of noisy candidate images by the hetero-encoder.
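
For reference, the Jaccard index and Dice similarity score used for evaluation above can be computed from binary segmentation masks as follows; the inputs are assumed to be 0/1 arrays and the example masks are placeholders.

```python
import numpy as np

def jaccard_and_dice(pred, target, eps=1e-7):
    """Overlap metrics for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps), 2 * inter / (pred.sum() + target.sum() + eps)

pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1     # predicted mask (placeholder)
truth = np.zeros((64, 64), dtype=np.uint8); truth[15:45, 15:45] = 1   # ground-truth mask (placeholder)
print(jaccard_and_dice(pred, truth))
```
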
Affiliation(s)
- Maxwell Hwang, Da Wang, Xiang-Xing Kong, Zhanhuai Wang, Jun Li, Kefeng Ding: Department of Colorectal Surgery, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China; Cancer Institute (Key Laboratory of Cancer Prevention and Intervention, China National Ministry of Education), Key Laboratory of Molecular Biology in Medical Sciences, Zhejiang Province, China; The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Wei-Cheng Jiang: Department of Electrical Engineering, Tunghai University, Taichung, Taiwan, China
- Kao-Shing Hwang: Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan, China

19. Poon CCY, Jiang Y, Zhang R, Lo WWY, Cheung MSH, Yu R, Zheng Y, Wong JCT, Liu Q, Wong SH, Mak TWC, Lau JYW. AI-doscopist: a real-time deep-learning-based algorithm for localising polyps in colonoscopy videos with edge computing devices. NPJ Digit Med 2020;3:73. [PMID: 32435701] [PMCID: PMC7235017] [DOI: 10.1038/s41746-020-0281-z]
Abstract
We have designed a deep-learning model, an "Artificial Intelligent Endoscopist (a.k.a. AI-doscopist)", to localise colonic neoplasia during colonoscopy. This study aims to evaluate the agreement between endoscopists and AI-doscopist for colorectal neoplasm localisation. AI-doscopist was pre-trained on 1.2 million non-medical images and fine-tuned on 291,090 colonoscopy and non-medical images. The colonoscopy images were obtained from six databases, where the colonoscopy images were classified into 13 categories and the polyps' locations were marked image-by-image by the smallest bounding boxes. Seven categories of non-medical images, which were believed to share some common features with colorectal polyps, were downloaded from an online search engine. Written informed consent was obtained from 144 patients who underwent colonoscopy, and their full colonoscopy videos were prospectively recorded for evaluation. A total of 128 suspicious lesions were resected or biopsied for histological confirmation. When evaluated image-by-image on the 144 full colonoscopies, the specificity of AI-doscopist was 93.3%. AI-doscopist was able to localise 124 out of 128 polyps (polyp-based sensitivity = 96.9%). Furthermore, after reviewing the suspected regions highlighted by AI-doscopist in a 102-patient cohort, an endoscopist had high confidence in recognizing four missed polyps in three patients who were not diagnosed with any lesion during their original colonoscopies. In summary, AI-doscopist can localise 96.9% of the polyps resected by the endoscopists. If AI-doscopist were to be used in real time, it could potentially assist endoscopists in detecting one more patient with a polyp in every 20-33 colonoscopies.
Affiliation(s)
- Carmen C. Y. Poon, Yuqi Jiang, Ruikai Zhang, Winnie W. Y. Lo, Ruoxi Yu: Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Maggie S. H. Cheung, James Y. W. Lau: Division of Vascular and General Surgery, Department of Surgery, Prince of Wales Hospital, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Yali Zheng: Division of Biomedical Engineering Research, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China; College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, People’s Republic of China
- John C. T. Wong, Sunny H. Wong: Division of Gastroenterology and Hepatology, Department of Medicine and Therapeutics, Institute of Digestive Disease, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China
- Qing Liu: Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou, People’s Republic of China
- Tony W. C. Mak: Division of Colorectal Surgery, Department of Surgery, The Chinese University of Hong Kong, Hong Kong SAR, People’s Republic of China

20. Delgado-Osuna JA, García-Martínez C, Gómez-Barbadillo J, Ventura S. Heuristics for interesting class association rule mining a colorectal cancer database. Inf Process Manag 2020. [DOI: 10.1016/j.ipm.2020.102207]

21. Maglogiannis I, Iliadis L, Pimenidis E. Overlap-Based Undersampling Method for Classification of Imbalanced Medical Datasets. IFIP Adv Inf Commun Technol 2020. [PMCID: PMC7256568] [DOI: 10.1007/978-3-030-49186-4_30]
Abstract
Early diagnosis of some life-threatening diseases such as cancer and heart disease is crucial for effective treatment. Supervised machine learning has proved to be a very useful tool for this purpose. Historical data of patients, including clinical and demographic information, are used for training learning algorithms. This builds predictive models that provide initial diagnoses. However, in the medical domain it is common to have the positive class under-represented in a dataset. In such a scenario, a typical learning algorithm tends to be biased towards the negative class, which is the majority class, and to misclassify positive cases. This is known as the class imbalance problem. In this paper, a framework for predictive diagnostics of diseases with imbalanced records is presented. To reduce the classification bias, we propose the use of an overlap-based undersampling method to improve the visibility of minority-class samples in the region where the two classes overlap. This is achieved by detecting and removing negative-class instances from the overlapping region, which improves class separability in the data space. Experimental results show high accuracy in the positive class, which is highly preferable in the medical domain, with good trade-offs between sensitivity and specificity. Results also show that the method often outperformed other state-of-the-art and well-established techniques.
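
A simplified Python sketch of the overlap-based undersampling idea described above: majority-class (negative) samples whose nearest neighbours include minority-class samples are treated as lying in the overlapping region and removed. The k-nearest-neighbour rule and the synthetic data are assumptions, not the paper's exact overlap-detection method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def remove_overlapping_negatives(X, y, k=5):
    """Drop negative (majority) samples whose neighbourhood contains minority samples."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                        # first neighbour is the point itself
    in_overlap = (y[idx[:, 1:]] == 1).any(axis=1)    # any minority-class neighbour?
    keep = ~((y == 0) & in_overlap)                  # remove only overlapping negatives
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (20, 2))])
y = np.array([0] * 200 + [1] * 20)                   # imbalanced toy dataset
X_clean, y_clean = remove_overlapping_negatives(X, y)
```
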
Affiliation(s)
- Lazaros Iliadis: Department of Civil Engineering, Lab of Mathematics and Informatics (ISCE), Democritus University of Thrace, Xanthi, Greece
- Elias Pimenidis: Department of Computer Science and Creative Technologies, University of the West of England, Bristol, UK

22. Deeba F, Bui FM, Wahid KA. Computer-aided polyp detection based on image enhancement and saliency-based selection. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.04.007]

23. Yamada M, Saito Y, Imaoka H, Saiko M, Yamada S, Kondo H, Takamaru H, Sakamoto T, Sese J, Kuchiba A, Shibata T, Hamamoto R. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci Rep 2019;9:14465. [PMID: 31594962] [PMCID: PMC6783454] [DOI: 10.1038/s41598-019-50567-5]
Abstract
Gaps in colonoscopy skills among endoscopists, primarily due to experience, have been identified, and solutions are critically needed. Hence, the development of a real-time, robust detection system for colorectal neoplasms is considered to significantly reduce the risk of missed lesions during colonoscopy. Here, we develop an artificial intelligence (AI) system that automatically detects early signs of colorectal cancer during colonoscopy; the AI system achieves a sensitivity of 97.3% (95% confidence interval [CI] = 95.9%-98.4%) and a specificity of 99.0% (95% CI = 98.6%-99.2%), and the area under the curve is 0.975 (95% CI = 0.964-0.986) in the validation set. Moreover, the sensitivities are 98.0% (95% CI = 96.6%-98.8%) in the polypoid subgroup and 93.7% (95% CI = 87.6%-96.9%) in the non-polypoid subgroup. To accelerate detection, tensor metrics in the trained model were decomposed, and the system can predict cancerous regions in 21.9 ms/image on average. These findings suggest that the system is sufficient to support endoscopists in the detection of non-polypoid lesions, which are frequently missed by optical colonoscopy. This AI system can alert endoscopists in real time to avoid missing abnormalities such as non-polypoid polyps during colonoscopy, improving the early detection of this disease.
Affiliation(s)
- Masayoshi Yamada: Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan; Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan
- Yutaka Saito: Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Hitoshi Imaoka: Biometrics Research Laboratories, NEC Corporation, Kanagawa, Japan
- Masahiro Saiko: Biometrics Research Laboratories, NEC Corporation, Kanagawa, Japan
- Shigemi Yamada: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan; Advanced Intelligence Project Center, RIKEN, Tokyo, Japan
- Hiroko Kondo: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan; Advanced Intelligence Project Center, RIKEN, Tokyo, Japan
- Taku Sakamoto: Endoscopy Division, National Cancer Center Hospital, Tokyo, Japan
- Jun Sese: Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
- Aya Kuchiba: Biostatistics Division, National Cancer Center, Tokyo, Japan
- Taro Shibata: Biostatistics Division, National Cancer Center, Tokyo, Japan
- Ryuji Hamamoto: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, Tokyo, Japan; Advanced Intelligence Project Center, RIKEN, Tokyo, Japan

24. Ellahham S, Ellahham N, Simsekler MCE. Application of Artificial Intelligence in the Health Care Safety Context: Opportunities and Challenges. Am J Med Qual 2019;35:341-348. [DOI: 10.1177/1062860619878515]
Abstract
There is a growing awareness that artificial intelligence (AI) has been used in the analysis of complicated and big data to provide outputs without human input in various health care contexts, such as bioinformatics, genomics, and image analysis. Although this technology can provide opportunities in diagnosis and treatment processes, there still may be challenges and pitfalls related to various safety concerns. To shed light on such opportunities and challenges, this article reviews AI in health care along with its implication for safety. To provide safer technology through AI, this study shows that safe design, safety reserves, safe fail, and procedural safeguards are key strategies, whereas cost, risk, and uncertainty should be identified for all potential technical systems. It is also suggested that clear guidance and protocols should be identified and shared with all stakeholders to develop and adopt safer AI applications in the health care context.
Affiliation(s)
- Samer Ellahham: Cleveland Clinic Abu Dhabi, Al Falah St, Abu Dhabi, UAE; Cleveland Clinic, Cleveland, OH
- Nour Ellahham: Cleveland Clinic Abu Dhabi, Al Falah St, Abu Dhabi, UAE

25. Viscaino M, Cheein FA. Machine learning for computer-aided polyp detection using wavelets and content-based image. Annu Int Conf IEEE Eng Med Biol Soc 2019;2019:961-965. [PMID: 31946053] [DOI: 10.1109/embc.2019.8857831]
Abstract
The continuous growth of machine learning techniques, the improvement of their capabilities, and the availability of data that are continuously collected, recorded, and updated can enhance the diagnosis stage by making it faster and more accurate than human diagnosis alone. In lower endoscopy procedures, most of the diagnosis relies on the capabilities and expertise of the physician. During medical training, physicians can benefit from the assistance of algorithms able to automatically detect polyps, thus enhancing their diagnosis. In this paper, we propose a machine learning approach trained to detect polyps in lower endoscopy recordings with high accuracy and sensitivity, with the recordings first processed using the wavelet transform for feature extraction. The proposed system is validated using available datasets. On a set of 1132 images, our system showed 97.9% accuracy in diagnosing polyps, around 10% higher than previously published approaches using techniques with low computational requirements. In addition, the false positive rate was 0.03. This encouraging result can also be extended to other diagnoses.
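
A rough sketch of the kind of wavelet-based feature extraction followed by classical machine-learning classification the abstract refers to; the wavelet family, sub-band statistics, classifier choice, and random placeholder images are all assumptions for illustration.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(gray_image, wavelet="db4", level=2):
    """Simple statistics of detail sub-bands from a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(gray_image, wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:                 # (cH, cV, cD) sub-bands at each level
        for band in detail:
            feats += [band.mean(), band.std(), np.abs(band).sum()]
    return np.array(feats)

frames = [np.random.rand(128, 128) for _ in range(30)]   # placeholder grayscale frames
labels = np.random.randint(0, 2, 30)                     # 1 = polyp, 0 = normal
X = np.stack([wavelet_features(f) for f in frames])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)
```
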
26. Analysis of Factors Affecting Real-Time Ridesharing Vehicle Crash Severity. Sustainability 2019. [DOI: 10.3390/su11123334]
Abstract
The popular real-time ridesharing service has promoted social and environmental sustainability in various ways. Meanwhile, it also brings some traffic safety concerns. This paper aims to analyze factors affecting real-time ridesharing vehicle crash severity based on the classification and regression tree (CART) model. Chicago police-reported crash data from January to December 2018 were collected. Crash severity in the original dataset is highly imbalanced: only 60 out of 2624 crashes are severe injury crashes. To fix the data imbalance problem, a hybrid data preprocessing approach which combines over- and under-sampling is applied. Model results indicate that, by resampling the crash data, the number of successfully predicted severe crashes increases from 0 to 40. In addition, the G-mean increases from 0% to 73%, and the AUC (area under the receiver operating characteristic curve) increases from 0.73 to 0.82. The classification tree reveals that the following variables are the primary indicators of real-time ridesharing vehicle crash severity: pedestrian/pedalcyclist involvement, number of passengers, weather condition, trafficway type, vehicle manufacture year, traffic control device, driver gender, lighting condition, vehicle type, driver age, and crash time. The current study could provide some valuable insights for the sustainable development of real-time ridesharing services and urban transportation.
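
A hedged sketch of the pipeline the abstract describes: hybrid over- and under-sampling of a heavily imbalanced dataset followed by a CART (decision-tree) classifier, evaluated with the G-mean and AUC. The specific resamplers (SMOTE plus random undersampling), the tree depth, and the synthetic data are assumptions standing in for the paper's exact procedure.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2624, 10))                        # placeholder crash features
y = np.zeros(2624, dtype=int); y[:60] = 1              # ~60 severe crashes out of 2624

# Hybrid resampling: over-sample the minority class, then under-sample the majority.
X_over, y_over = SMOTE(sampling_strategy=0.5, random_state=0).fit_resample(X, y)
X_res, y_res = RandomUnderSampler(sampling_strategy=1.0, random_state=0).fit_resample(X_over, y_over)
cart = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_res, y_res)

pred = cart.predict(X)
sens = recall_score(y, pred)                           # sensitivity on severe crashes
spec = recall_score(y, pred, pos_label=0)              # specificity on non-severe crashes
print("G-mean:", np.sqrt(sens * spec), "AUC:", roc_auc_score(y, cart.predict_proba(X)[:, 1]))
```
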
27. Region-Based Automated Localization of Colonoscopy and Wireless Capsule Endoscopy Polyps. Appl Sci (Basel) 2019. [DOI: 10.3390/app9122404]
Abstract
The early detection of polyps could help prevent colorectal cancer. The automated detection of polyps on the colon walls could reduce the number of false negatives that occur due to manual examination errors or polyps being hidden behind folds, and could also help doctors locate polyps in screening tests such as colonoscopy and wireless capsule endoscopy. Missing polyps may allow lesions to progress. In this paper, we propose a modified region-based convolutional neural network (R-CNN) that generates masks around polyps detected in still frames. The locations of the polyps in the image are marked, which assists doctors examining the polyps. The features from the polyp images are extracted using pre-trained ResNet-50 and ResNet-101 models through feature extraction and fine-tuning techniques. Various publicly available polyp datasets are analyzed with various pretrained weights. It is interesting to note that fine-tuning with balloon data (polyp-like natural images) improved the polyp detection rate. The optimum CNN models on colonoscopy datasets including CVC-ColonDB, CVC-PolypHD, and ETIS-Larib produced values (F1 score, F2 score) of (90.73, 91.27), (80.65, 79.11), and (76.43, 78.70), respectively. The best model on the wireless capsule endoscopy dataset gave a performance of (96.67, 96.10). The experimental results indicate better localization of polyps compared to recent traditional and deep learning methods.
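
A minimal torchvision sketch of region-based localization with a Mask R-CNN (ResNet-50 FPN backbone), in the spirit of the approach described above; the fine-tuning on polyp data, the balloon pre-training step, and the score threshold are not reproduced here, and the input frame is a random placeholder.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()   # COCO-pretrained weights (not polyp-tuned)
frame = [torch.rand(3, 480, 640)]                         # one still frame (placeholder)
with torch.no_grad():
    out = model(frame)[0]                                 # boxes, labels, scores, masks
candidate_boxes = out["boxes"][out["scores"] > 0.5]       # keep confident detections
```
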
Collapse
|
28
|
|
29
|
Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf 2019; 28:231-237. [PMID: 30636200 PMCID: PMC6560460 DOI: 10.1136/bmjqs-2018-008370] [Citation(s) in RCA: 348] [Impact Index Per Article: 58.0] [Reference Citation Analysis] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2018] [Revised: 11/23/2018] [Accepted: 12/06/2018] [Indexed: 02/06/2023]
Affiliation(s)
- Robert Challen
- EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter College of Engineering Mathematics and Physical Sciences, Exeter, UK .,Taunton and Somerset NHS Foundation Trust, Taunton, UK
| | - Joshua Denny
- Departments of Biomedical Informatics and Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Martin Pitt
- NIHR CLAHRC for the South West Peninsula, St Luke's Campus, University of Exeter Medical School, Exeter, UK
| | - Luke Gompels
- Taunton and Somerset NHS Foundation Trust, Taunton, UK
| | - Tom Edwards
- Taunton and Somerset NHS Foundation Trust, Taunton, UK
| | - Krasimira Tsaneva-Atanasova
- EPSRC Centre for Predictive Modelling in Healthcare, University of Exeter College of Engineering Mathematics and Physical Sciences, Exeter, UK
| |
Collapse
|
30
|
Mahmood F, Chen R, Durr NJ. Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2572-2581. [PMID: 29993538 DOI: 10.1109/tmi.2018.2842767] [Citation(s) in RCA: 107] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
To realize the full potential of deep learning for medical imaging, large annotated datasets are required for training. Such datasets are difficult to acquire due to privacy issues, lack of experts available for annotation, underrepresentation of rare conditions, and poor standardization. The lack of annotated data has been addressed in conventional vision applications using synthetic images refined via unsupervised adversarial training to look like real images. However, this approach is difficult to extend to general medical imaging because of the complex and diverse set of features found in real human tissues. We propose a novel framework that uses a reverse flow, where adversarial training is used to make real medical images more like synthetic images, and clinically-relevant features are preserved via self-regularization. These domain-adapted synthetic-like images can then be accurately interpreted by networks trained on large datasets of synthetic medical images. We implement this approach on the notoriously difficult task of depth-estimation from monocular endoscopy which has a variety of applications in colonoscopy, robotic surgery, and invasive endoscopic procedures. We train a depth estimator on a large data set of synthetic images generated using an accurate forward model of an endoscope and an anatomically-realistic colon. Our analysis demonstrates that the structural similarity of endoscopy depth estimation in a real pig colon predicted from a network trained solely on synthetic data improved by 78.7% by using reverse domain adaptation.
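The reverse-adaptation idea can be sketched as a standard adversarial loop with an added self-regularization term: a generator pushes real images toward the synthetic domain, a discriminator distinguishes the two, and an L1 term keeps the transformed image close to its input so clinically relevant content is preserved. The modules, losses, and weighting below are placeholders, not the paper's implementation.

# Conceptual sketch of reverse domain adaptation with self-regularization (PyTorch).
# `G` (generator) and `D` (discriminator) are placeholder nn.Module instances.
import torch
import torch.nn as nn

def adaptation_step(G, D, opt_G, opt_D, real_imgs, synthetic_imgs, lam=10.0):
    bce = nn.BCEWithLogitsLoss()

    # 1) Update the discriminator: synthetic images define the target domain.
    fake_syn = G(real_imgs).detach()
    d_real = D(synthetic_imgs)
    d_fake = D(fake_syn)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Update the generator: fool the discriminator while staying close to the
    #    input (the self-regularization term preserving image content).
    fake_syn = G(real_imgs)
    g_out = D(fake_syn)
    g_loss = bce(g_out, torch.ones_like(g_out)) + lam * nn.functional.l1_loss(fake_syn, real_imgs)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()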
Collapse
|
31
|
Zhang R, Zheng Y, Poon CC, Shen D, Lau JY. Polyp detection during colonoscopy using a regression-based convolutional neural network with a tracker. PATTERN RECOGNITION 2018; 83:209-219. [PMID: 31105338 PMCID: PMC6519928 DOI: 10.1016/j.patcog.2018.05.026] [Citation(s) in RCA: 75] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
A computer-aided detection (CAD) tool for locating and detecting polyps can help reduce the chance of missing polyps during colonoscopy. Nevertheless, state-of-the-art algorithms are either computationally complex or suffer from low sensitivity, making them unsuitable for use in a real clinical setting. In this paper, a novel regression-based Convolutional Neural Network (CNN) pipeline is presented for polyp detection during colonoscopy. The proposed pipeline was constructed in two parts: 1) to learn the spatial features of colorectal polyps, a fast object detection algorithm named ResYOLO was pre-trained with a large non-medical image database and further fine-tuned with colonoscopic images extracted from videos; and 2) temporal information was incorporated via a tracker named Efficient Convolution Operators (ECO) to refine the detection results given by ResYOLO. Evaluated on 17,574 frames extracted from 18 endoscopic videos of the AsuMayoDB, the proposed method detected frames with polyps with a precision of 88.6%, a recall of 71.6%, and a processing speed of 6.5 frames per second; i.e., the method can accurately locate polyps in more frames and at a faster speed than existing methods. In conclusion, the proposed method has great potential to assist endoscopists in tracking polyps during colonoscopy.
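Tracker-based refinement can be approximated, very loosely, by keeping only detections that reappear across neighboring frames. The IoU-matching sketch below is a generic stand-in for the ResYOLO + ECO pipeline, with thresholds chosen arbitrarily.

# Generic stand-in for tracker-based temporal refinement of per-frame detections.
def iou(a, b):
    ax1, ay1, ax2, ay2 = a; bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def temporally_filter(per_frame_boxes, min_iou=0.3, history=3):
    """per_frame_boxes: list (per frame) of lists of (x1, y1, x2, y2) boxes.
    The first frames only seed the history, so their detections are dropped."""
    kept, recent = [], []
    for boxes in per_frame_boxes:
        confirmed = [b for b in boxes
                     if any(iou(b, p) >= min_iou for prev in recent for p in prev)]
        kept.append(confirmed)
        recent = (recent + [boxes])[-history:]
    return kept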
Collapse
Affiliation(s)
- Ruikai Zhang
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
| | - Yali Zheng
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
| | - Carmen C.Y. Poon
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
- Corresponding author
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Corresponding author at: Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC, USA.
| | - James Y.W. Lau
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong
| |
Collapse
|
32
|
Liu D, Rao N, Mei X, Jiang H, Li Q, Luo C, Li Q, Zeng C, Zeng B, Gan T. Annotating Early Esophageal Cancers Based on Two Saliency Levels of Gastroscopic Images. J Med Syst 2018; 42:237. [PMID: 30327890 DOI: 10.1007/s10916-018-1063-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2018] [Accepted: 09/06/2018] [Indexed: 02/05/2023]
Abstract
Early diagnosis of esophageal cancer can greatly improve the survival rate of patients. At present, the lesion annotation of early esophageal cancers (EEC) in gastroscopic images is generally performed by medical personnel in the clinic. To reduce the effects of subjectivity and fatigue in manual annotation, computer-aided annotation is required. However, automated annotation of EEC lesions in images is a challenging task owing to the fine-grained variability in the appearance of EEC lesions. This study modifies the traditional EEC annotation framework and utilizes visual saliency information to develop a two-saliency-level-based lesion annotation (TSL-BLA) method for annotating EEC lesions in gastroscopic images. Unlike existing methods, the proposed framework strongly constrains false positive outputs and places additional emphasis on the annotation of small EEC lesions. A total of 871 gastroscopic images from 231 patients were used to validate TSL-BLA. Of these, 365 images contain 434 EEC lesions and 506 images contain no lesions; 101 small lesion regions were extracted from the 434 lesions to further validate the performance of TSL-BLA. The experimental results show that the mean detection rate and Dice similarity coefficient of TSL-BLA were 97.24% and 75.15%, respectively. Compared with other state-of-the-art methods, TSL-BLA shows better performance, and it is markedly superior when annotating small EEC lesions. It also produces fewer false positive outputs and runs quickly. Therefore, the proposed method has good application prospects in aiding clinical EEC diagnosis.
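As a loose illustration of two-level saliency (not the TSL-BLA algorithm itself), the sketch below computes a spectral-residual saliency map and thresholds it at a coarse and a fine quantile, giving broad candidate regions and small high-confidence cores; all parameters are assumptions.

# Generic two-level saliency thresholding sketch (spectral residual, NOT TSL-BLA).
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    spectrum = np.fft.fft2(gray.astype(float))
    log_amp = np.log1p(np.abs(spectrum))
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(spectrum)))) ** 2
    sal = gaussian_filter(sal, sigma=2.5)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-8)

def two_level_candidates(gray, coarse_q=0.90, fine_q=0.98):
    sal = spectral_residual_saliency(gray)
    coarse = sal >= np.quantile(sal, coarse_q)   # broad, low-confidence regions
    fine = sal >= np.quantile(sal, fine_q)       # small, high-confidence cores
    return coarse, fine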
Collapse
Affiliation(s)
- Dingyun Liu
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.,Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China.,Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
| | - Nini Rao
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China. .,Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China. .,Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China.
| | - Xinming Mei
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.,Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China.,Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China.,Institute of Electronic and Information Engineering of UESTC in Guangdong, Dongguan, China
| | - Hongxiu Jiang
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.,Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China.,Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
| | - Quanchi Li
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.,Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China.,Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
| | - ChengSi Luo
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.,Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China.,Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
| | - Qian Li
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.,Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China.,Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
| | - Chengshi Zeng
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China.,Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, China.,Key Laboratory for NeuroInformation of the Ministry of Education, University of Electronic Science and Technology of China, Chengdu, China
| | - Bing Zeng
- School of Communication and Information Engineering, University Electronic Science and Technology of China, Chengdu, China
| | - Tao Gan
- Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu, China.
| |
Collapse
|
33
|
Sánchez-González A, García-Zapirain B, Sierra-Sosa D, Elmaghraby A. Automatized colon polyp segmentation via contour region analysis. Comput Biol Med 2018; 100:152-164. [DOI: 10.1016/j.compbiomed.2018.07.002] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 07/03/2018] [Accepted: 07/04/2018] [Indexed: 12/13/2022]
|
34
|
Shin Y, Balasingham I. Automatic polyp frame screening using patch based combined feature and dictionary learning. Comput Med Imaging Graph 2018; 69:33-42. [PMID: 30172091 DOI: 10.1016/j.compmedimag.2018.08.001] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Revised: 07/29/2018] [Accepted: 08/13/2018] [Indexed: 12/15/2022]
Abstract
Polyps in the colon can potentially become malignant cancer tissue, and early detection and removal lead to a high survival rate. Certain types of polyps can be difficult to detect even for highly trained physicians. Motivated by this problem, our study aims to improve human detection performance by developing an automatic polyp screening framework as a decision support tool. We use a small-image-patch-based combined feature method. Features include shape and color information and are extracted using histogram of oriented gradients and hue histogram methods. Dictionary learning based training is used to learn the features, and the final feature vector is formed using sparse coding. For classification, we use patch-level classification based on a linear support vector machine followed by whole-image thresholding. The proposed framework is evaluated using three public polyp databases. Our experimental results show that the proposed scheme successfully classified polyp and normal images with over 95% classification accuracy, sensitivity, specificity, and precision. In addition, we compare the performance of the proposed scheme with conventional feature based methods and the convolutional neural network (CNN) based deep learning approach, which is the state-of-the-art technique in many image classification applications.
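A compact sketch of the combined-feature idea, assuming scikit-image and scikit-learn: HOG captures shape, a hue histogram captures color, the concatenated descriptor is sparse-coded against a learned dictionary, and a linear SVM classifies the codes. Patch size, dictionary size, and sparsity level are illustrative choices rather than the paper's settings.

# Patch-based combined features + dictionary learning + linear SVM (illustrative sketch).
import numpy as np
from skimage.feature import hog
from skimage.color import rgb2hsv
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

def patch_feature(patch_rgb):
    gray = patch_rgb.mean(axis=2)
    shape_feat = hog(gray, orientations=8, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    hue = rgb2hsv(patch_rgb)[..., 0]
    color_feat, _ = np.histogram(hue, bins=16, range=(0.0, 1.0), density=True)
    return np.concatenate([shape_feat, color_feat])

def train_patch_classifier(patches, labels, n_atoms=128):
    X = np.stack([patch_feature(p) for p in patches])
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, transform_algorithm="omp",
                                       transform_n_nonzero_coefs=8, random_state=0)
    codes = dico.fit_transform(X)            # sparse codes become the final feature vectors
    clf = LinearSVC().fit(codes, labels)
    return dico, clf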
Collapse
Affiliation(s)
- Younghak Shin
- Department Electronic Systems at Norwegian University of Science and Technology (NTNU), Trondheim, Norway.
| | - Ilangko Balasingham
- Intervention Centre, Oslo University Hospital, Oslo NO-0027, Norway; Institute of Clinical Medicine, University of Oslo, and the Norwegian University of Science and Technology (NTNU), Norway.
| |
Collapse
|
35
|
Balasingham I. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2017:3277-3280. [PMID: 29060597 DOI: 10.1109/embc.2017.8037556] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Colonoscopy is a standard method for polyp screening performed by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework and compare two different approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and a support vector machine (SVM) is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling blocks is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN based deep learning framework achieves better classification performance than the hand-crafted feature based method, with over 90% classification accuracy, sensitivity, specificity, and precision.
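The CNN side of the comparison can be sketched as three convolution-plus-pooling blocks followed by a small classifier head; the channel widths and the head below are illustrative choices, not the paper's exact architecture.

# Small CNN with three convolution + pooling blocks (illustrative architecture, PyTorch).
import torch.nn as nn

class SmallPolypCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.MaxPool2d(2))
        self.features = nn.Sequential(block(3, 16), block(16, 32), block(32, 64))
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))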
Collapse
|
36
|
Yuan Y, Li D, Meng MQH. Automatic Polyp Detection via a Novel Unified Bottom-Up and Top-Down Saliency Approach. IEEE J Biomed Health Inform 2018; 22:1250-1260. [DOI: 10.1109/jbhi.2017.2734329] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
37
|
Yuan Y, Yao X, Han J, Guo L, Meng MQH. Discriminative Joint-Feature Topic Model With Dual Constraints for WCE Classification. IEEE TRANSACTIONS ON CYBERNETICS 2018; 48:2074-2085. [PMID: 28749365 DOI: 10.1109/tcyb.2017.2726818] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Wireless capsule endoscopy (WCE) enables clinicians to examine the digestive tract without any surgical operations, at the cost of a large number of images to be analyzed. The main challenge for automatic computer-aided diagnosis arises from the difficulty of robustly characterizing these images. To tackle this problem, a novel discriminative joint-feature topic model (DJTM) with dual constraints is proposed to classify multiple abnormalities in WCE images. We first propose a joint-feature probabilistic latent semantic analysis (PLSA) model, where color and texture descriptors extracted from the same image patches are jointly modeled with their conditional distributions. The proposed dual constraints, visual word importance and a local image manifold, are then embedded into the joint-feature PLSA model simultaneously to obtain discriminative latent semantic topics. The visual word importance constraint guarantees that visual words with similar importance come from close latent topics, while the local image manifold constraint enforces that images within the same category share similar latent topics. Finally, each image is characterized by a distribution over latent semantic topics instead of low-level features. Our proposed DJTM achieved an excellent overall recognition accuracy of 90.78%. Comprehensive comparison results demonstrate that our method outperforms existing multiple-abnormality classification methods for WCE images.
Collapse
|
38
|
A novel summary report of colonoscopy: timeline visualization providing meaningful colonoscopy video information. Int J Colorectal Dis 2018. [PMID: 29520455 DOI: 10.1007/s00384-018-2980-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
PURPOSE The colonoscopy adenoma detection rate depends largely on physician experience and skill, and overlooked colorectal adenomas could develop into cancer. This study assessed a system that detects polyps and summarizes meaningful information from colonoscopy videos. METHODS One hundred thirteen consecutive patients had colonoscopy videos prospectively recorded at Seoul National University Hospital. Informative video frames were extracted using a MATLAB support vector machine (SVM) model and classified as bleeding, polypectomy, tool, residue, thin wrinkle, folded wrinkle, or common. Thin wrinkle, folded wrinkle, and common frames were reanalyzed using the SVM for polyp detection. The SVM model was applied hierarchically for effective classification and optimization. RESULTS The mean classification accuracy by frame type was over 93%; sensitivity was over 87%. The mean sensitivity for polyp detection was 82.1%, and the positive predictive value (PPV) was 39.3%. Polyps detected by the system were larger (6.3 ± 6.4 vs. 4.9 ± 2.5 mm; P = 0.003) and more often pedunculated (Yamada type III, 10.2 vs. 0%; P < 0.001; Yamada type IV, 2.8 vs. 0%; P < 0.001) than polyps missed by the system. There were no statistically significant differences in polyp distribution or histology between the groups. Informative frames and suspected polyps were presented on a timeline. This summary was evaluated using the system usability scale questionnaire; 89.3% of participants expressed positive opinions. CONCLUSIONS We developed and verified a system to extract meaningful information from colonoscopy videos. Although further improvement and validation of the system are needed, the proposed system is useful for physicians and patients.
Collapse
|
39
|
Bernal J, Tajkbaksh N, Sanchez FJ, Matuszewski BJ, Angermann Q, Romain O, Rustad B, Balasingham I, Pogorelov K, Debard Q, Maier-Hein L, Speidel S, Stoyanov D, Brandao P, Cordova H, Sanchez-Montes C, Gurudu SR, Fernandez-Esparrach G, Dray X, Histace A. Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results From the MICCAI 2015 Endoscopic Vision Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1231-1249. [PMID: 28182555 DOI: 10.1109/tmi.2017.2664042] [Citation(s) in RCA: 174] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Colonoscopy is the gold standard for colon cancer screening though some polyps are still missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess if they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection sub-challenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, as well as describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to an improved overall performance.
Collapse
|
40
|
Yuan Y, Meng MQH. Deep learning for polyp recognition in wireless capsule endoscopy images. Med Phys 2017; 44:1379-1389. [PMID: 28160514 DOI: 10.1002/mp.12147] [Citation(s) in RCA: 87] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2016] [Revised: 01/19/2017] [Accepted: 01/24/2017] [Indexed: 12/12/2022] Open
Abstract
PURPOSE Wireless capsule endoscopy (WCE) enables physicians to examine the digestive tract without any surgical operations, at the cost of a large volume of images to be analyzed. In the computer-aided diagnosis of WCE images, the main challenge arises from the difficulty of robustly characterizing the images. This study aims to provide a discriminative description of WCE images and to assist physicians in recognizing polyp images automatically. METHODS We propose a novel deep feature learning method, named stacked sparse autoencoder with image manifold constraint (SSAEIM), to recognize polyps in WCE images. Our SSAEIM differs from the traditional sparse autoencoder (SAE) by introducing an image manifold constraint, which is constructed from a nearest neighbor graph and represents the intrinsic structure of the images. The image manifold constraint enforces that images within the same category share similar learned features and that images in different categories are kept far apart. Thus, the learned features preserve large inter-category variances and small intra-category variances among images. RESULTS The average overall recognition accuracy (ORA) of our method for WCE images is 98.00%. The accuracies for polyps, bubbles, turbid images, and clear images are 98.00%, 99.50%, 99.00%, and 95.50%, respectively. Moreover, the comparison results show that our SSAEIM outperforms existing polyp recognition methods with a relatively higher ORA. CONCLUSION The comprehensive results demonstrate that the proposed SSAEIM provides a descriptive characterization of WCE images and accurately recognizes polyps in a WCE video. This method could be further utilized in clinical trials to relieve physicians of tedious image reading work.
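The flavor of SSAEIM can be conveyed by an autoencoder loss that adds a sparsity penalty on the hidden code and a manifold term pulling the codes of k-nearest-neighbor images together. The PyTorch sketch below is a simplified stand-in rather than the paper's formulation, and the penalty weights are arbitrary.

# Sparse autoencoder with a k-NN manifold regularizer on the hidden codes (sketch).
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, in_dim, hid_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.dec = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

def ssae_manifold_loss(model, x, neighbor_idx, beta=1e-3, gamma=1e-2):
    """neighbor_idx[i] lists the k-NN indices of sample i within the batch (precomputed)."""
    recon, h = model(x)
    loss = nn.functional.mse_loss(recon, x)
    loss = loss + beta * h.abs().mean()                      # sparsity on hidden activations
    for i, nbrs in enumerate(neighbor_idx):                  # manifold: neighbors share codes
        loss = loss + gamma * ((h[i] - h[nbrs]) ** 2).mean()
    return loss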
Collapse
Affiliation(s)
- Yixuan Yuan
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong
| | - Max Q-H Meng
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong
| |
Collapse
|
41
|
Chen H, Wu X, Tao G, Peng Q. Automatic content understanding with cascaded spatial–temporal deep framework for capsule endoscopy videos. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.06.077] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
42
|
Abstract
Video capsule endoscopy (VCE) is widely used nowadays for visualizing the gastrointestinal (GI) tract. Capsule endoscopy exams are usually prescribed as an additional monitoring mechanism and can help in identifying polyps, bleeding, and other findings. Analyzing the large-scale video data produced by VCE exams requires automatic image processing, computer vision, and learning algorithms. Recently, automatic polyp detection algorithms have been proposed with various degrees of success. Although polyp detection in colonoscopy and other traditional endoscopy images is becoming a mature field, the unique imaging characteristics of VCE make automatic polyp detection in VCE a hard problem. We review different polyp detection approaches for VCE imagery and provide a systematic analysis of the challenges faced by standard image processing and computer vision methods.
Collapse
Affiliation(s)
- V. Prasath
- Computational Imaging and VisAnalysis (CIVA) Lab, Department of Computer Science, University of Missouri-Columbia, Columbia, MO 65211, USA
| |
Collapse
|
43
|
Zhang R, Zheng Y, Mak TWC, Yu R, Wong SH, Lau JYW, Poon CCY. Automatic Detection and Classification of Colorectal Polyps by Transferring Low-Level CNN Features From Nonmedical Domain. IEEE J Biomed Health Inform 2016; 21:41-47. [PMID: 28114040 DOI: 10.1109/jbhi.2016.2635662] [Citation(s) in RCA: 164] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
Colorectal cancer (CRC) is a leading cause of cancer deaths worldwide. Although polypectomy at an early stage reduces CRC incidence, 90% of polyps are small or diminutive, and their removal poses risks to patients that may outweigh the benefits. Correctly detecting and predicting polyp type during colonoscopy allows endoscopists to resect and discard the tissue without submitting it for histology, saving time and costs. Nevertheless, human visual assessment of early stage polyps varies. Therefore, this paper aims at developing a fully automatic algorithm to detect and classify hyperplastic and adenomatous colorectal polyps. Adenomatous polyps should be removed, whereas distal diminutive hyperplastic polyps are considered clinically insignificant and may be left in situ. A novel transfer learning application is proposed that utilizes features learned from large nonmedical datasets of 1.4-2.5 million images using a deep convolutional neural network. The endoscopic images collected for the experiments were taken under random lighting conditions, zooming, and optical magnification, and include 1104 endoscopic non-polyp images taken under both white-light and narrowband imaging (NBI) endoscopy and 826 NBI endoscopic polyp images, of which 263 were hyperplasia and 563 were adenoma as confirmed by histology. The proposed method first identifies polyp images from non-polyp images and then predicts the polyp histology. When compared with visual inspection by endoscopists, the results of this study show that the proposed method has similar precision (87.3% versus 86.4%) but a higher recall rate (87.6% versus 77.0%) and a higher accuracy (85.9% versus 74.3%). In conclusion, automatic algorithms can assist endoscopists in identifying polyps that are adenomatous but have been incorrectly judged as hyperplasia and, therefore, enable timely resection of these polyps at an early stage before they develop into invasive cancer.
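The transfer-learning idea of reusing low-level features learned on large nonmedical data can be sketched by freezing a pretrained backbone and training only a light classification head; the backbone choice and head below are assumptions, not the authors' configuration.

# Reuse nonmedical (ImageNet) convolutional features; train only the classifier head.
import torch
import torch.nn as nn
import torchvision

def build_transfer_classifier(num_classes=2):
    backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
    for p in backbone.parameters():
        p.requires_grad = False              # keep the pretrained low-level features fixed
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # trainable head
    return backbone

model = build_transfer_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)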
Collapse
|
44
|
Wu X, Chen H, Gan T, Chen J, Ngo CW, Peng Q. Automatic Hookworm Detection in Wireless Capsule Endoscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1741-1752. [PMID: 26886971 DOI: 10.1109/tmi.2016.2527736] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Wireless capsule endoscopy (WCE) has become a widely used diagnostic technique for examining inflammatory bowel diseases and disorders. As one of the most common human helminths, the hookworm is a small tubular structure with a grayish-white or pinkish semi-transparent body that infects around 600 million people worldwide. Automatic hookworm detection is a challenging task due to the poor quality of the images, the presence of extraneous matter, the complex structure of the gastrointestinal tract, and diverse appearances in terms of color and texture. This is one of the first works to comprehensively explore automatic hookworm detection in WCE images. To capture the properties of hookworms, a multi-scale dual matched filter is first applied to detect the locations of tubular structures. A piecewise parallel region detection method is then proposed to identify potential regions containing hookworm bodies. To discriminate the visual features of different components of the gastrointestinal tract, a histogram of average intensity is proposed to represent their properties. To deal with the problem of imbalanced data, RUSBoost is deployed to classify WCE images. Experiments on a diverse and large-scale dataset of 440K WCE images demonstrate that the proposed approach achieves promising performance and outperforms state-of-the-art methods. Moreover, the high sensitivity in detecting hookworms indicates the potential of our approach for future clinical application.
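Handling the class imbalance with RUSBoost might look like the sketch below, assuming imbalanced-learn's RUSBoostClassifier and precomputed feature vectors (in the paper these would come from the matched-filter and intensity-histogram steps); the hyperparameters are placeholders.

# RUSBoost (random undersampling + boosting) for imbalanced WCE image classification.
from imblearn.ensemble import RUSBoostClassifier
from sklearn.metrics import recall_score

def train_hookworm_classifier(X_train, y_train, X_test, y_test):
    clf = RUSBoostClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    sensitivity = recall_score(y_test, clf.predict(X_test))  # recall on the positive class
    return clf, sensitivity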
Collapse
|
45
|
Tajbakhsh N, Gurudu SR, Liang J. Automated Polyp Detection in Colonoscopy Videos Using Shape and Context Information. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:630-44. [PMID: 26462083 DOI: 10.1109/tmi.2015.2487997] [Citation(s) in RCA: 266] [Impact Index Per Article: 29.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
This paper presents the culmination of our research in designing a system for computer-aided detection (CAD) of polyps in colonoscopy videos. Our system is based on a hybrid context-shape approach, which utilizes context information to remove non-polyp structures and shape information to reliably localize polyps. Specifically, given a colonoscopy image, we first obtain a crude edge map. Second, we remove non-polyp edges from the edge map using our unique feature extraction and edge classification scheme. Third, we localize polyp candidates with probabilistic confidence scores in the refined edge maps using our novel voting scheme. The suggested CAD system has been tested using two public polyp databases, CVC-ColonDB, containing 300 colonoscopy images with a total of 300 polyp instances from 15 unique polyps, and ASU-Mayo database, which is our collection of colonoscopy videos containing 19,400 frames and a total of 5,200 polyp instances from 10 unique polyps. We have evaluated our system using free-response receiver operating characteristic (FROC) analysis. At 0.1 false positives per frame, our system achieves a sensitivity of 88.0% for CVC-ColonDB and a sensitivity of 48% for the ASU-Mayo database. In addition, we have evaluated our system using a new detection latency analysis where latency is defined as the time from the first appearance of a polyp in the colonoscopy video to the time of its first detection by our system. At 0.05 false positives per frame, our system yields a polyp detection latency of 0.3 seconds.
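A heavily simplified stand-in for the edge-map-plus-voting idea: build a crude edge map and let edge pixels vote for circular candidates via a Hough transform. The paper's learned edge classification and probabilistic voting scheme are more sophisticated than this; the radii and peak counts below are arbitrary.

# Crude edge map + circular Hough voting as a stand-in for shape-based polyp localization.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def localize_round_candidates(gray, radii=np.arange(15, 60, 5), max_candidates=3):
    edges = canny(gray, sigma=2.0)                       # crude edge map
    hough = hough_circle(edges, radii)
    _, cx, cy, r = hough_circle_peaks(hough, radii, total_num_peaks=max_candidates)
    return list(zip(cx, cy, r))                          # (x, y, radius) candidates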
Collapse
|