1
Wu F, Lin X, Chen Y, Ge M, Pan T, Shi J, Mao L, Pan G, Peng Y, Zhou L, Zheng H, Luo D, Zhang Y. Breaking barriers: noninvasive AI model for BRAF V600E mutation identification. Int J Comput Assist Radiol Surg 2025;20:935-947. PMID: 39955452. DOI: 10.1007/s11548-024-03290-0.
Abstract
OBJECTIVE BRAF V600E is the most common mutation found in thyroid cancer and is particularly associated with papillary thyroid carcinoma (PTC). Genetic mutation detection currently relies on invasive procedures. This study aimed to extract radiomic features and apply deep transfer learning (DTL) to ultrasound images to develop a noninvasive artificial intelligence model for identifying BRAF V600E mutations.
MATERIALS AND METHODS Regions of interest (ROIs) were manually annotated in the ultrasound images, and radiomic and DTL features were extracted and combined in a joint DTL-radiomics (DTLR) model. Fourteen DTL models were employed, and feature selection was performed using LASSO regression. Eight machine learning methods were used to construct predictive models. Model performance was evaluated primarily by area under the curve (AUC), accuracy, sensitivity, and specificity. Model interpretability was visualized using gradient-weighted class activation maps (Grad-CAM).
RESULTS Radiomics alone had limited capability to identify BRAF V600E mutations, but the optimal DTLR model, built on ResNet152, identified them effectively. In the validation set, the AUC, accuracy, sensitivity, and specificity were 0.833, 80.6%, 76.2%, and 81.7%, respectively. The AUC of the DTLR model was higher than that of the DTL and radiomics models. Grad-CAM visualization of the ResNet152-based DTLR model showed that it captured and learned ultrasound image features related to BRAF V600E mutations.
CONCLUSION The ResNet152-based DTLR model demonstrated significant value in identifying BRAF V600E mutations in patients with PTC from ultrasound images, and Grad-CAM has the potential to stratify BRAF mutations visually and objectively. The findings require further validation with additional data from more collaborating centers.
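The selection and classification stages described in the abstract can be sketched as follows; the feature counts, the alpha value, and the synthetic data are illustrative assumptions, not values from the paper (which pairs LASSO with eight different machine learning methods):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
radiomics = rng.normal(size=(n, 50))   # stand-in for handcrafted ROI features
dtl = rng.normal(size=(n, 100))        # stand-in for deep-transfer-learning features
X = np.hstack([radiomics, dtl])        # joint DTL-radiomics (DTLR) feature vector
y = (X[:, 0] + 0.5 * X[:, 60] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# LASSO feature selection: the L1 penalty zeroes out uninformative coefficients.
lasso = Lasso(alpha=0.05).fit(StandardScaler().fit_transform(X), y)
keep = np.flatnonzero(lasso.coef_)
print(f"kept {keep.size} of {X.shape[1]} features")

# One of the shallow classifiers, fitted on the selected subset only.
clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X[:, keep], y)
print("training accuracy:", round(clf.score(X[:, keep], y), 3))
```

LASSO doubles as a feature selector here precisely because its L1 penalty drives most coefficients to exactly zero, a common pattern in radiomics pipelines.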
Affiliation(s)
- Fan Wu, Jingjing Shi, Linlin Mao, Gang Pan, You Peng, Li Zhou, Dingcun Luo, Yu Zhang: Department of Oncological Surgery, Affiliated Hangzhou First People's Hospital, Westlake University School of Medicine, Hangzhou 310006, Zhejiang, China
- Xiangfeng Lin, Haitao Zheng: Department of Thyroid Surgery, The Affiliated Yantai Yuhuangding Hospital, Qingdao University, Qingdao, Shandong Province, China
- Yuying Chen, Mengqian Ge: The Fourth Clinical Medical College, Zhejiang Chinese Medical University, Hangzhou 310053, Zhejiang, China
- Ting Pan: Department of Pathology, Zhejiang Province People's Hospital, Hangzhou 310014, Zhejiang, China
2
Liu J, Cai X, Niranjan M. Medical image classification by incorporating clinical variables and learned features. R Soc Open Sci 2025;12:241222. PMID: 40078919. PMCID: PMC11897822. DOI: 10.1098/rsos.241222.
Abstract
Medical image classification plays an important role in medical imaging. In this work, we present a novel approach that enhances deep learning models for medical image classification by incorporating clinical variables without letting the image features overwhelm them. Unlike most existing deep neural network models, which consider pixel information alone, our method captures a more comprehensive view. It consists of two main steps and is effective in tackling the extra challenge posed by the scarcity of medical data. First, we employ a pre-trained deep neural network as a feature extractor to capture meaningful image features. Then, discriminant analysis is applied to reduce the dimensionality of these features, ensuring that the small number of retained features remains optimized for the classification task and strikes a balance with the clinical variables. We also develop a way of obtaining class activation maps for our approach, visualizing the model's focus on specific regions within the low-dimensional feature space. Thorough experiments demonstrate improvements of the proposed method over state-of-the-art methods on tuberculosis and dermatology tasks, for example. Furthermore, a comprehensive comparison with a popular dimensionality reduction technique, principal component analysis, is also conducted.
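A minimal sketch of the two-step idea, with synthetic stand-ins for the deep features and clinical variables (the dimensions and class structure are assumed for illustration, not taken from the paper):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, n_deep, n_classes = 300, 512, 3
deep = rng.normal(size=(n, n_deep))        # stand-in for pre-trained CNN features
y = rng.integers(0, n_classes, size=n)
deep[y == 1, :64] += 0.8                   # inject a separable class signal
deep[y == 2, 64:128] += 0.8
clinical = rng.normal(size=(n, 4))         # stand-in for e.g. age, sex, lab values

# Discriminant analysis yields at most (n_classes - 1) directions, so the
# 512 image features shrink to 2, no longer swamping the 4 clinical variables.
z = LinearDiscriminantAnalysis(n_components=n_classes - 1).fit_transform(deep, y)
X = np.hstack([z, clinical])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("combined-feature training accuracy:", round(clf.score(X, y), 3))
```

The point of the reduction is balance: concatenating all 512 raw deep features with 4 clinical variables would let the image features dominate any downstream classifier.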
Affiliation(s)
- Jiahui Liu, Xiaohao Cai, Mahesan Niranjan: School of Electronics and Computer Science, University of Southampton, Southampton, UK
3
Jiang Q, Yu Y, Ren Y, Li S, He X. A review of deep learning methods for gastrointestinal diseases classification applied in computer-aided diagnosis system. Med Biol Eng Comput 2025;63:293-320. PMID: 39343842. DOI: 10.1007/s11517-024-03203-y.
Abstract
Recent advancements in deep learning have significantly improved the intelligent classification of gastrointestinal (GI) diseases, particularly in aiding clinical diagnosis. This paper reviews computer-aided diagnosis (CAD) systems for GI diseases, aligned with the actual clinical diagnostic process. It offers a comprehensive survey of deep learning (DL) techniques tailored to classifying GI diseases, addressing challenges inherent in complex scenes, clinical constraints, and technical obstacles encountered in GI imaging. First, the esophagus, stomach, small intestine, and large intestine are localized to determine the organ in which a lesion is located. Second, lesion detection and classification of a single disease are performed on the premise that the organ corresponding to the image is known. Finally, comprehensive classification across multiple diseases is carried out. Single- and multi-class results are compared to achieve more accurate classification outcomes and to construct a more effective computer-aided diagnosis system for gastrointestinal diseases.
Affiliation(s)
- Qianru Jiang, Yulin Yu, Yipei Ren, Sheng Li, Xiongxiong He: College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, Zhejiang, P.R. China
4
Cambay VY, Barua PD, Hafeez Baig A, Dogan S, Baygin M, Tuncer T, Acharya UR. Automated detection of gastrointestinal diseases using ResNet50*-based explainable deep feature engineering model with endoscopy images. Sensors (Basel) 2024;24:7710. PMID: 39686247. DOI: 10.3390/s24237710.
Abstract
This work aims to develop a novel convolutional neural network (CNN), named ResNet50*, to detect various gastrointestinal diseases using a new ResNet50*-based deep feature engineering model with endoscopy images. The novelty of this work is the development of ResNet50*, a new variant of the ResNet model featuring convolution-based residual blocks and a pooling-based attention mechanism similar to PoolFormer. ResNet50* was trained on a gastrointestinal image dataset, and an explainable deep feature engineering (DFE) model was developed on top of it. This DFE model comprises four primary stages: (i) feature extraction, (ii) iterative feature selection, (iii) classification using shallow classifiers, and (iv) information fusion. The DFE model is self-organizing, producing 14 different outcomes (8 classifier-specific and 6 voted) and selecting the most effective result as the final decision. During feature extraction, heatmaps are identified using gradient-weighted class activation mapping (Grad-CAM), and features are derived from these regions via the final global average pooling layer of the pretrained ResNet50*. Four iterative feature selectors are employed in the feature selection stage to obtain distinct feature vectors. The k-nearest neighbors (kNN) and support vector machine (SVM) classifiers produce the classifier-specific outcomes. Iterative majority voting is employed in the final stage to obtain the voted outcomes, with the top result determined by a greedy algorithm based on classification accuracy. ResNet50* was trained on an augmented version of the Kvasir dataset, and its performance was tested on the Kvasir, Kvasir version 2, and wireless capsule endoscopy (WCE) curated colon disease image datasets. The proposed model demonstrated a classification accuracy of more than 92% on all three datasets and a remarkable 99.13% accuracy on the WCE dataset. These findings affirm the superior classification ability of the ResNet50* model and confirm the generalizability of the developed architecture, showing consistent performance across all three distinct datasets.
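The information-fusion stage can be illustrated with a toy reconstruction of iterative majority voting plus a greedy pick of the best outcome by accuracy; the predictions below are fabricated for illustration, and the exact voting schedule in the paper may differ:

```python
import numpy as np

def majority_vote(preds):
    """Column-wise majority over a (n_models, n_samples) integer label matrix."""
    return np.array([np.bincount(col).argmax() for col in preds.T])

y = np.array([0, 1, 1, 0, 1, 0, 1, 1])            # fabricated ground truth
outs = np.array([                                  # fabricated classifier outputs
    [0, 1, 1, 0, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 0, 1, 0],
])

accs = (outs == y).mean(axis=1)
order = np.argsort(accs)[::-1]                     # best classifiers first
results = {f"clf{i}": a for i, a in enumerate(accs)}
for k in range(3, len(outs) + 1):                  # vote over the top-k outputs
    results[f"vote_top{k}"] = (majority_vote(outs[order[:k]]) == y).mean()
best = max(results, key=results.get)               # greedy pick by accuracy
print(best, results[best])
```

Both the classifier-specific and the voted outcomes land in one pool, and the single highest-accuracy entry becomes the final decision, mirroring the "14 outcomes, pick the best" description above.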
Affiliation(s)
- Veysel Yusuf Cambay: Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig 23119, Türkiye; Department of Electrical and Electronics Engineering, Faculty of Engineering and Architecture, Mus Alparslan University, Mus 49250, Türkiye
- Sengul Dogan, Turker Tuncer: Department of Digital Forensics Engineering, Technology Faculty, Firat University, Elazig 23119, Türkiye
- Prabal Datta Barua: School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Abdul Hafeez Baig: School of Management and Enterprise, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Mehmet Baygin: Department of Computer Engineering, Faculty of Engineering and Architecture, Erzurum Technical University, Erzurum 25500, Türkiye
- U R Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, QLD 4300, Australia
5
Liu J, Fan K, Cai X, Niranjan M. Few-shot learning for inference in medical imaging with subspace feature representations. PLoS One 2024;19:e0309368. PMID: 39504337. PMCID: PMC11540231. DOI: 10.1371/journal.pone.0309368.
Abstract
Unlike in the field of visual scene recognition, where tremendous advances have taken place thanks to very large datasets for training deep neural networks, inference from medical images is often hampered by the fact that only small amounts of data are available. When working with very small datasets, of the order of a few hundred items, the power of deep learning may still be exploited by using a pre-trained model as a feature extractor and carrying out classic pattern recognition in this feature space: the so-called few-shot learning problem. However, medical images are highly complex and variable, making it difficult for few-shot learning to fully capture and model their features. To address these issues, we focus on the intrinsic characteristics of the data. In regimes where the dimension of the feature space is comparable to, or even larger than, the number of images, dimensionality reduction is a necessity and is often achieved by principal component analysis or singular value decomposition (PCA/SVD). In this paper, noting the inappropriateness of SVD for this setting, we explore two alternatives based on discriminant analysis (DA) and non-negative matrix factorization (NMF). Using 14 different datasets spanning 11 distinct disease types, we demonstrate that at low dimensions discriminant subspaces achieve significant improvements over SVD-based subspaces and the original feature space. We also show that at modest dimensions NMF is a competitive alternative to SVD in this setting. An implementation of the proposed method is publicly available.
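The subspace comparison can be sketched as follows, using nonnegative synthetic features (post-ReLU CNN activations are nonnegative, which is what makes NMF applicable); the dimensions are illustrative, not the paper's:

```python
import numpy as np
from sklearn.decomposition import NMF, TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n, d, k = 120, 256, 8                        # few-shot regime: n comparable to d
X = np.abs(rng.normal(size=(n, d)))          # ReLU-style nonnegative deep features
y = rng.integers(0, 2, size=n)
X[y == 1, :16] += 1.0                        # inject a class-dependent signal

# Two k-dimensional subspace representations of the same feature matrix.
Xs = TruncatedSVD(n_components=k, random_state=0).fit_transform(X)
Xn = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0).fit_transform(X)

# Classic pattern recognition (kNN) in each subspace.
for name, Z in [("SVD", Xs), ("NMF", Xn)]:
    acc = KNeighborsClassifier(5).fit(Z, y).score(Z, y)
    print(name, Z.shape, round(acc, 3))
```

A supervised alternative such as LDA would use the labels when choosing the subspace, which is the paper's argument for discriminant subspaces at low dimensions.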
Affiliation(s)
- Jiahui Liu, Keqiang Fan, Xiaohao Cai, Mahesan Niranjan: School of Electronics and Computer Science, University of Southampton, Southampton, United Kingdom
6
Garbaz A, Oukdach Y, Charfi S, El Ansari M, Koutti L, Salihoun M. MLFA-UNet: a multi-level feature assembly UNet for medical image segmentation. Methods 2024;232:52-64. PMID: 39481818. DOI: 10.1016/j.ymeth.2024.10.010.
Abstract
Medical image segmentation is crucial for accurate diagnosis and treatment in medical image analysis. Among the various methods employed, fully convolutional networks (FCNs) have emerged as a prominent approach, and the U-Net architecture and its variants have gained widespread adoption in this domain. This paper introduces MLFA-UNet, an innovative architectural framework aimed at advancing medical image segmentation. MLFA-UNet adopts a U-shaped architecture and integrates two pivotal modules, multi-level feature assembly (MLFA) and multi-scale information attention (MSIA), complemented by a pixel-vanishing (PV) attention mechanism. These modules synergistically enhance the segmentation process, fostering both robustness and precision. MLFA operates within both the encoder and decoder, facilitating the extraction of local information crucial for accurately segmenting lesions. The bottleneck MSIA module replaces stacked modules, thereby expanding the receptive field and augmenting feature diversity, fortified by the PV attention mechanism. Together, these mechanisms boost segmentation performance by capturing both detailed local features and broader contextual information, enhancing both accuracy and resilience in identifying lesions. To assess the versatility of the network, we evaluated MLFA-UNet across a range of medical image segmentation datasets encompassing diverse imaging modalities, including wireless capsule endoscopy (WCE), colonoscopy, and dermoscopic images. Our results consistently demonstrate that MLFA-UNet outperforms state-of-the-art algorithms, achieving dice coefficients of 91.42%, 82.43%, 90.8%, and 88.68% on the MICCAI 2017 (Red Lesion), ISIC 2017, PH2, and CVC-ClinicDB datasets, respectively.
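The dice coefficient reported as the headline metric above can be computed on binary masks as follows (the masks here are toy examples):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two overlapping 4x4 squares: 16 px each, 9 px of overlap.
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(round(dice(a, b), 4))   # 2*9/(16+16) = 0.5625
```

The epsilon term keeps the metric defined when both masks are empty, a standard guard in segmentation evaluation code.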
Affiliation(s)
- Anass Garbaz, Yassine Oukdach, Said Charfi, Lahcen Koutti: Laboratory of Computer Systems and Vision, Faculty of Science, Ibn Zohr University, Agadir 80000, Morocco
- Mohamed El Ansari: Informatics and Applications Laboratory, Department of Computer Science, Faculty of Sciences, Moulay Ismail University, Meknes 50000, Morocco
- Mouna Salihoun: Faculty of Medicine and Pharmacy, Mohammed V University, Rabat 10100, Morocco
7
Waheed Z, Gui J. An optimized ensemble model based on cuckoo search with Lévy flight for automated gastrointestinal disease detection. Multimed Tools Appl 2024;83:89695-89722. DOI: 10.1007/s11042-024-18937-y.
8
Oukdach Y, Kerkaou Z, El Ansari M, Koutti L, Fouad El Ouafdi A, De Lange T. ViTCA-Net: a framework for disease detection in video capsule endoscopy images using a vision transformer and convolutional neural network with a specific attention mechanism. Multimed Tools Appl 2024;83:63635-63654. DOI: 10.1007/s11042-023-18039-1.
9
Bordbar M, Helfroush MS, Danyali H, Ejtehadi F. Wireless capsule endoscopy multiclass classification using three-dimensional deep convolutional neural network model. Biomed Eng Online 2023;22:124. PMID: 38098015. PMCID: PMC10722702. DOI: 10.1186/s12938-023-01186-9.
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is a patient-friendly, non-invasive technology that scans the whole gastrointestinal tract, including difficult-to-access regions like the small bowel. A major drawback of this technology is that visual inspection of the large number of video frames produced during each examination makes the physician's diagnostic process tedious and prone to error. Several computer-aided diagnosis (CAD) systems, such as deep network models, have been developed for the automatic recognition of abnormalities in WCE frames. Nevertheless, most of these studies have focused only on spatial information within individual WCE frames, missing the crucial temporal information in consecutive frames.
METHODS In this article, an automatic multiclass classification system based on a three-dimensional deep convolutional neural network (3D-CNN) is proposed, which utilizes spatiotemporal information to facilitate the WCE diagnosis process. The 3D-CNN model is fed with a series of sequential WCE frames, in contrast to the two-dimensional (2D) model, which treats frames independently. The proposed 3D deep model is also compared with several pre-trained networks. The models are trained and evaluated on 29 subjects' WCE videos (14,691 frames before augmentation), and the performance advantages of the 3D-CNN over the 2D-CNN and pre-trained networks are verified in terms of sensitivity, specificity, and accuracy.
RESULTS The 3D-CNN outperforms the 2D technique on all evaluation metrics (sensitivity: 98.92 vs. 98.05, specificity: 99.50 vs. 86.94, accuracy: 99.20 vs. 92.60).
CONCLUSION A novel 3D-CNN model for lesion detection in WCE frames is proposed. The results indicate the superiority of the 3D-CNN over the 2D-CNN and several well-known pre-trained classifier networks. The proposed model uses the rich temporal information in adjacent frames, as well as spatial data, to achieve an accurate and efficient classifier.
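The core difference from a 2D model, convolving over the temporal axis as well as the spatial axes, can be illustrated with a naive single-channel 3-D convolution; the clip size and kernel are illustrative, not the paper's architecture:

```python
import numpy as np

def conv3d_valid(vol, kern):
    """Naive 'valid' 3-D convolution (cross-correlation) over a (T, H, W) volume."""
    T, H, W = vol.shape
    t, h, w = kern.shape
    out = np.empty((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i+t, j:j+h, k:k+w] * kern)
    return out

clip = np.random.default_rng(3).normal(size=(8, 32, 32))   # 8 consecutive WCE frames
temporal_edge = np.zeros((2, 3, 3))                        # toy spatiotemporal kernel
temporal_edge[0] = 1.0
temporal_edge[1] = -1.0
resp = conv3d_valid(clip, temporal_edge)   # responds to frame-to-frame change
print(resp.shape)   # (7, 30, 30)
```

A 2D model would apply a (3, 3) kernel to each frame in isolation; the extra temporal extent is what lets a 3D-CNN react to motion and appearance changes across adjacent frames.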
Affiliation(s)
- Mehrdokht Bordbar, Habibollah Danyali: Department of Electrical Engineering, Shiraz University of Technology, Shiraz, Iran
- Fardad Ejtehadi: Department of Internal Medicine, Gastroenterohepatology Research Center, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
10
Brzeski A, Dziubich T, Krawczyk H. Visual features for improving endoscopic bleeding detection using convolutional neural networks. Sensors (Basel) 2023;23:9717. PMID: 38139563. PMCID: PMC10748269. DOI: 10.3390/s23249717.
Abstract
This paper investigates the problem of bleeding detection in endoscopic videos, cast as a binary image classification task. A set of definitions of high-level visual features of endoscopic bleeding is introduced, incorporating domain knowledge from the field. The high-level features are coupled with respective feature descriptors, enabling automatic capture of the features using image processing methods. Each of the proposed feature descriptors outputs a feature activation map in the form of a grayscale image. The acquired feature maps can be appended in a straightforward way to the original color channels of the input image and passed to the input of a convolutional neural network during the training and inference steps. An experimental evaluation compares the classification ROC AUC of feature-extended convolutional neural network models with baseline models using regular color image inputs. The advantage of feature-extended models is demonstrated for the ResNet and VGG convolutional neural network architectures.
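Appending a descriptor's activation map to the RGB channels can be sketched as follows; `redness_map` is a hypothetical toy descriptor for illustration, not one of the paper's feature definitions:

```python
import numpy as np

def redness_map(img):
    """Toy descriptor: grayscale activation map highlighting red-dominant pixels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.clip(r - 0.5 * (g + b), 0.0, 1.0)

rng = np.random.default_rng(4)
frame = rng.uniform(size=(64, 64, 3))        # stand-in endoscopic RGB frame in [0, 1]
fmap = redness_map(frame)[..., None]         # one feature activation map, one channel
extended = np.concatenate([frame, fmap], axis=-1)
print(extended.shape)   # (64, 64, 4): RGB plus one appended feature channel
```

The CNN's first convolutional layer simply accepts 4 input channels instead of 3, so the descriptors act as extra hand-engineered input planes rather than architectural changes.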
Affiliation(s)
- Adam Brzeski: Faculty of Electronics, Telecommunications and Informatics, Gdańsk University of Technology, 80-233 Gdańsk, Poland (affiliation shared by T. Dziubich and H. Krawczyk)
11
Musha A, Hasnat R, Mamun AA, Ping EP, Ghosh T. Computer-aided bleeding detection algorithms for capsule endoscopy: a systematic review. Sensors (Basel) 2023;23:7170. PMID: 37631707. PMCID: PMC10459126. DOI: 10.3390/s23167170.
Abstract
Capsule endoscopy (CE) is a widely used medical imaging tool for the diagnosis of gastrointestinal tract abnormalities such as bleeding. However, CE captures a huge number of image frames, making manual inspection a time-consuming and tedious task for medical experts. To address this issue, researchers have focused on computer-aided bleeding detection systems that automatically identify bleeding in real time. This paper presents a systematic review of the available state-of-the-art computer-aided bleeding detection algorithms for capsule endoscopy. The review was carried out by searching five repositories (Scopus, PubMed, IEEE Xplore, ACM Digital Library, and ScienceDirect) for all original publications on computer-aided bleeding detection published between 2001 and 2023. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used, and 147 full texts of scientific papers were reviewed. The contributions of this paper are: (i) a taxonomy of computer-aided bleeding detection algorithms for capsule endoscopy; (ii) a discussion of the available state-of-the-art algorithms, including various color spaces (RGB, HSV, etc.), feature extraction techniques, and classifiers; and (iii) identification of the most effective algorithms for practical use. The paper concludes by providing future directions for computer-aided bleeding detection research.
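A minimal example of the HSV color-space thresholding family of approaches surveyed above; the threshold values are arbitrary illustrations, not thresholds from any reviewed paper:

```python
import colorsys

import numpy as np

rng = np.random.default_rng(5)
frame = rng.uniform(size=(16, 16, 3))   # stand-in RGB frame with values in [0, 1]

# Per-pixel RGB -> HSV; flag saturated, red-hued pixels as candidate bleeding.
mask = np.zeros(frame.shape[:2], dtype=bool)
for i in range(frame.shape[0]):
    for j in range(frame.shape[1]):
        h, s, v = colorsys.rgb_to_hsv(*frame[i, j])
        mask[i, j] = (h < 0.05 or h > 0.95) and s > 0.6 and v > 0.3
print("candidate bleeding pixels:", int(mask.sum()))
```

HSV is popular for this task because hue isolates "redness" from brightness, so the same threshold tolerates the illumination changes typical of capsule footage; the reviewed classifiers then operate on such masks or on features derived from them.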
Affiliation(s)
- Ahmmad Musha, Rehnuma Hasnat: Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Abdullah Al Mamun, Em Poh Ping: Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Tonmoy Ghosh: Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
12
Zhang L, Lu Z, Yao L, Dong Z, Zhou W, He C, Luo R, Zhang M, Wang J, Li Y, Deng Y, Zhang C, Li X, Shang R, Xu M, Wang J, Zhao Y, Wu L, Yu H. Effect of a deep learning-based automatic upper GI endoscopic reporting system: a randomized crossover study (with video). Gastrointest Endosc 2023;98:181-190.e10. PMID: 36849056. DOI: 10.1016/j.gie.2023.02.025.
Abstract
BACKGROUND AND AIMS EGD is essential for diagnosing GI disorders, and reports are pivotal to facilitating postprocedure diagnosis and treatment. Manual report generation lacks sufficient quality and is labor intensive. We report and validate an artificial intelligence-based endoscopy automatic reporting system (AI-EARS).
METHODS The AI-EARS was designed for automatic report generation, including real-time image capturing, diagnosis, and textual description. It was developed using multicenter datasets from 8 hospitals in China, comprising 252,111 images for training and 62,706 images and 950 videos for testing. Twelve endoscopists and 44 endoscopy procedures were consecutively enrolled to evaluate the effect of the AI-EARS in a multireader, multicase, crossover study. The precision and completeness of the reports were compared between endoscopists using the AI-EARS and those using conventional reporting systems.
RESULTS In video validation, the AI-EARS achieved completeness of 98.59% and 99.69% for esophageal and gastric abnormality records, respectively; accuracies of 87.99% and 88.85% for esophageal and gastric lesion location records; and 73.14% and 85.24% for diagnosis. Compared with conventional reporting systems, the AI-EARS achieved greater completeness (79.03% vs 51.86%, P < .001) and accuracy (64.47% vs 42.81%, P < .001) of the textual descriptions, and greater completeness of the photo-documentation of landmarks (92.23% vs 73.69%, P < .001). The mean reporting time for an individual lesion was significantly reduced with AI-EARS assistance (80.13 ± 16.12 vs 46.47 ± 11.68 seconds, P < .001).
CONCLUSIONS The AI-EARS showed its efficacy in improving the accuracy and completeness of EGD reports, and may facilitate the generation of complete endoscopy reports and postendoscopy patient management. (Clinical trial registration number: NCT05479253.)
Affiliation(s)
- Lihui Zhang, Zihua Lu, Liwen Yao, Wei Zhou, Renquan Luo, Mengjiao Zhang, Jing Wang, Yanxia Li, Yunchao Deng, Chenxia Zhang, Xun Li, Renduo Shang, Ming Xu, Junxiao Wang, Yu Zhao, Lianlian Wu, Honggang Yu: Department of Gastroenterology; Key Laboratory of Hubei Province for Digestive System Disease; Hubei Provincial Clinical Research Center for Digestive Disease Minimally Invasive Incision, Renmin Hospital of Wuhan University, Wuhan, China
- Zehua Dong: Department of Gastroenterology; Key Laboratory of Hubei Province for Digestive System Disease
| |
Collapse
13
Zeng Q, Feng Z, Zhu Y, Zhang Y, Shu X, Wu A, Luo L, Cao Y, Xiong J, Li H, Zhou F, Jie Z, Tu Y, Li Z. Deep learning model for diagnosing early gastric cancer using preoperative computed tomography images. Front Oncol 2022; 12:1065934. [PMID: 36531076 PMCID: PMC9748811 DOI: 10.3389/fonc.2022.1065934] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Accepted: 11/07/2022] [Indexed: 08/10/2023] Open
Abstract
BACKGROUND Early gastric cancer (EGC) is defined as a lesion restricted to the mucosa or submucosa, independent of size or evidence of regional lymph node metastases. Although computed tomography (CT) is the main technique for determining the stage of gastric cancer (GC), the accuracy with which radiologists can determine tumor invasion of EGC on CT remains unsatisfactory. In this research, we attempted to construct an AI model to discriminate EGC in portal venous phase CT images. METHODS We retrospectively collected 658 GC patients from the First Affiliated Hospital of Nanchang University and divided them into training and internal validation cohorts at a ratio of 8:2. As the external validation cohort, 93 GC patients were recruited from the Second Affiliated Hospital of Soochow University. We developed several prediction models based on various convolutional neural networks and compared their predictive performance. RESULTS The deep learning model based on the ResNet101 neural network showed sufficient discrimination of EGC. In the two validation cohorts, the areas under the receiver operating characteristic (ROC) curves (AUCs) were 0.993 (95% CI: 0.984-1.000) and 0.968 (95% CI: 0.935-1.000), respectively, and the accuracies were 0.946 and 0.914. Additionally, the deep learning model could also differentiate between mucosal and submucosal tumors of EGC. CONCLUSIONS These results suggest that deep learning classifiers have the potential to be used as a screening tool for EGC, which is crucial in the individualized treatment of EGC patients.
Affiliation(s)
- Qingwen Zeng: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China; Institute of Digestive Surgery, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China; Medical Innovation Center, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Zongfeng Feng, Yang Zhang, Zhigang Jie, Zhengrong Li: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China; Institute of Digestive Surgery, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
- Yanyan Zhu, Fuqing Zhou: Department of Radiology, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Xufeng Shu, Ahao Wu, Lianghua Luo, Yi Cao, Jianbo Xiong: Department of Gastrointestinal Surgery, The First Affiliated Hospital, Nanchang University, Nanchang, Jiangxi, China
- Hong Li: Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Yi Tu: Department of Pathology, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
14
Su Q, Wang F, Chen D, Chen G, Li C, Wei L. Deep convolutional neural networks with ensemble learning and transfer learning for automated detection of gastrointestinal diseases. Comput Biol Med 2022; 150:106054. [PMID: 36244302 DOI: 10.1016/j.compbiomed.2022.106054] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 08/12/2022] [Accepted: 08/27/2022] [Indexed: 11/22/2022]
Abstract
Gastrointestinal (GI) diseases are serious threats to human health, and their detection and treatment place a huge burden on medical institutions. Imaging-based methods are among the most important approaches for the automated detection of GI diseases. Although deep neural networks have shown impressive performance in a number of imaging tasks, their application to the detection of GI diseases has not been sufficiently explored. In this study, we propose a novel and practical method to detect GI disease in wireless capsule endoscopy (WCE) images using convolutional neural networks. The proposed method utilizes three backbone networks, modified and fine-tuned by transfer learning, as feature extractors, and an integrated classifier using ensemble learning is trained to detect GI diseases. The proposed method outperforms existing computational methods on the benchmark dataset. The case study results show that the proposed method captures discriminative information from WCE images. This work shows the potential of using deep learning-based computer vision models for effective GI disease screening.
Affiliation(s)
- Qiaosen Su, Fengsheng Wang, Leyi Wei: School of Software, Shandong University, Jinan, China; Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, Jinan, China
- Chao Li: Beidahuang Industry Group General Hospital, Harbin, China
15
Rahman MM, Khan MSI, Babu HMH. BreastMultiNet: A multi-scale feature fusion method using deep neural network to detect breast cancer. ARRAY 2022. [DOI: 10.1016/j.array.2022.100256] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
16
Ali H, Sharif M, Yasmin M, Rehmani MH. A shallow extraction of texture features for classification of abnormal video endoscopy frames. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103733] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
17
Qin K, Li J, Fang Y, Xu Y, Wu J, Zhang H, Li H, Liu S, Li Q. Convolution neural network for the diagnosis of wireless capsule endoscopy: a systematic review and meta-analysis. Surg Endosc 2022; 36:16-31. [PMID: 34426876 PMCID: PMC8741689 DOI: 10.1007/s00464-021-08689-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Accepted: 08/07/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Wireless capsule endoscopy (WCE) is considered a powerful instrument for the diagnosis of intestinal diseases. The convolutional neural network (CNN) is a type of artificial intelligence with the potential to assist the interpretation of WCE images. We aimed to perform a systematic review of current research progress on CNN applications in WCE. METHODS A search of PubMed, SinoMed, and Web of Science was conducted to collect all original publications on CNN implementation in WCE. Risk of bias was assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) checklist. Pooled sensitivity and specificity were calculated by an exact binomial rendition of the bivariate mixed-effects regression model. I2 was used to evaluate heterogeneity. RESULTS Sixteen articles comprising 23 independent studies were included. CNN applications in WCE were divided into the detection of erosion/ulcer, gastrointestinal (GI) bleeding, and polyps/cancer. The pooled sensitivity of CNN was 0.96 (95% CI 0.91-0.98) for erosion/ulcer, 0.97 (95% CI 0.93-0.99) for GI bleeding, and 0.97 (95% CI 0.82-0.99) for polyps/cancer. The corresponding specificity was 0.97 (95% CI 0.93-0.99) for erosion/ulcer, 1.00 (95% CI 0.99-1.00) for GI bleeding, and 0.98 (95% CI 0.92-0.99) for polyps/cancer. CONCLUSION Based on our meta-analysis, CNN-based diagnosis of erosion/ulcer, GI bleeding, and polyps/cancer achieved high-level performance, with high sensitivity and specificity. CNN therefore has the potential to become an important assistant for the diagnosis of WCE.
Affiliation(s)
- Kaiwen Qin: Nanfang Hospital (The First School of Clinical Medicine), Southern Medical University, Guangzhou, Guangdong, China
- Jianmin Li, Jiahao Wu: Guangzhou SiDe MedTech Co., Ltd, Guangzhou, Guangdong, China
- Yuxin Fang, Haonan Zhang, Side Liu, Qingyuan Li: Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital, Southern Medical University, No. 1838, Guangzhou Avenue North, Guangzhou, Guangdong, China
- Yuyuan Xu: State Key Laboratory of Organ Failure Research, Guangdong Provincial Key Laboratory of Viral Hepatitis Research, Department of Hepatology Unit and Infectious Diseases, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Haolin Li: Nanfang Hospital (The First School of Clinical Medicine), Southern Medical University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Gastroenterology, Department of Gastroenterology, Nanfang Hospital, Southern Medical University, No. 1838, Guangzhou Avenue North, Guangzhou, Guangdong, China
18
Amiri Z, Hassanpour H, Beghdadi A. Feature extraction for abnormality detection in capsule endoscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103219] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
19
Xu H, Han T, Wang H, Liu S, Hou G, Sun L, Jiang G, Yang F, Wang J, Deng K, Zhou J. OUP accepted manuscript. Eur J Cardiothorac Surg 2022; 62:6555788. [PMID: 35352106 PMCID: PMC9615432 DOI: 10.1093/ejcts/ezac154] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 02/10/2022] [Accepted: 03/11/2011] [Indexed: 11/15/2022] Open
Affiliation(s)
- Hao Xu, Shanggui Liu, Guanghao Hou, Guanchao Jiang, Fan Yang, Jun Wang, Jian Zhou: Department of Thoracic Surgery, Peking University People's Hospital, Beijing, China
- Tingxuan Han, Haifeng Wang, Ke Deng: Center for Statistical Science & Department of Industrial Engineering, Tsinghua University, Beijing, China
- Lina Sun: Central Operating Theatre, Peking University People's Hospital, Beijing, China
- Corresponding authors: Jian Zhou, Department of Thoracic Surgery, Peking University People's Hospital, Beijing 100044, China, Tel: +86 010-88326650; Jun Wang, Department of Thoracic Surgery, Peking University People's Hospital, Beijing 100044, China, Tel: 010-88326652; Ke Deng, Center for Statistical Science & Department of Industrial Engineering, Tsinghua University, Beijing 100084, China, Tel: +86 010-62782453
20

21
Ayyaz MS, Lali MIU, Hussain M, Rauf HT, Alouffi B, Alyami H, Wasti S. Hybrid Deep Learning Model for Endoscopic Lesion Detection and Classification Using Endoscopy Videos. Diagnostics (Basel) 2021; 12:diagnostics12010043. [PMID: 35054210 PMCID: PMC8775223 DOI: 10.3390/diagnostics12010043] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 12/22/2021] [Accepted: 12/23/2021] [Indexed: 02/06/2023] Open
Abstract
In medical imaging, the detection and classification of stomach diseases are challenging due to the resemblance of different symptoms, image contrast, and complex backgrounds. Computer-aided diagnosis (CAD) plays a vital role in the medical imaging field, allowing accurate results to be obtained in minimal time. This article proposes a new hybrid method to detect and classify stomach diseases using endoscopy videos. The proposed methodology comprises seven significant steps: data acquisition, data preprocessing, transfer learning of deep models, feature extraction, feature selection, hybridization, and classification. We selected two different CNN models (VGG19 and AlexNet) to extract features, applying transfer learning techniques before using them as feature extractors. We used a genetic algorithm (GA) for feature selection, owing to its adaptive nature. We fused the selected features of both models using a serial-based approach. Finally, the best features were provided to multiple machine learning classifiers for detection and classification. The proposed approach was evaluated on a personally collected dataset of five classes: gastritis, ulcer, esophagitis, bleeding, and healthy. The proposed technique performed best with the cubic SVM, achieving 99.8% accuracy. To assess the proposed technique, we considered these statistical measures: classification accuracy, recall, precision, false negative rate (FNR), area under the curve (AUC), and time. In addition, we provide a fair comparison of the proposed technique with existing state-of-the-art techniques that demonstrates its effectiveness.
Affiliation(s)
- M Shahbaz Ayyaz, Mubbashar Hussain: Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan
- Muhammad Ikram Ullah Lali, Shahbaz Wasti: Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan
- Hafiz Tayyab Rauf (Correspondence): Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Bader Alouffi, Hashem Alyami: Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia
22
Zhuang H, Zhang J, Liao F. A systematic review on application of deep learning in digestive system image processing. THE VISUAL COMPUTER 2021; 39:2207-2222. [PMID: 34744231 PMCID: PMC8557108 DOI: 10.1007/s00371-021-02322-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 09/30/2021] [Indexed: 05/07/2023]
Abstract
With the advent of the big data era, the application of artificial intelligence in medicine, represented by deep learning, has become a hot topic. In gastroenterology, deep learning has achieved remarkable results in endoscopy, imaging, and pathology. Artificial intelligence has been applied to benign gastrointestinal tract lesions, early cancer, tumors, inflammatory bowel diseases, the liver, the pancreas, and other diseases. Computer-aided diagnosis significantly improves diagnostic accuracy, reduces physicians' workload, and provides evidence for clinical diagnosis and treatment. In the near future, artificial intelligence will have high application value in the field of medicine. This paper summarizes the latest research on artificial intelligence in diagnosing and treating digestive system diseases and discusses its future in this field. We sincerely hope that our work can become a stepping stone for gastroenterologists and computer experts in artificial intelligence research and facilitate the application and development of computer-aided image processing technology in gastroenterology.
Affiliation(s)
- Huangming Zhuang, Jixiang Zhang, Fei Liao: Gastroenterology Department, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
23
Amiri Z, Hassanpour H, Beghdadi A. A Computer-Aided Method for Digestive System Abnormality Detection in WCE Images. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:7863113. [PMID: 34707798 PMCID: PMC8545542 DOI: 10.1155/2021/7863113] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 09/25/2021] [Accepted: 10/06/2021] [Indexed: 12/01/2022]
Abstract
Wireless capsule endoscopy (WCE) is a powerful tool for the diagnosis of gastrointestinal diseases. The output of this tool is a video about eight hours long, containing about 8000 frames, and reviewing all of the frames is a difficult task for a physician. In this paper, a new abnormality detection system for WCE images is proposed. The proposed system has four main steps: (1) preprocessing, (2) region of interest (ROI) extraction, (3) feature extraction, and (4) classification. In ROI extraction, distinct areas are first highlighted and nondistinct areas faded using the joint normal distribution; distinct areas are then extracted as an ROI segment by applying a threshold. The main idea is to extract abnormal areas in each frame, so the method can be used to extract various lesions in WCE images. In the feature extraction step, three different types of features (color, texture, and shape) are employed. Finally, the features are classified using a support vector machine. The proposed system was tested on the Kvasir-Capsule dataset and can detect multiple lesions in WCE frames with high accuracy.
Affiliation(s)
- Zahra Amiri, Hamid Hassanpour: Image Processing and Data Mining Lab, Shahrood University of Technology, Shahrood, Iran
- Azeddine Beghdadi: Department of Computer Science and Engineering, University Sorbonne Paris Nord, Villetaneuse, France
24
Gastrointestinal Tract Disease Classification from Wireless Endoscopy Images Using Pretrained Deep Learning Model. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:5940433. [PMID: 34545292 PMCID: PMC8449743 DOI: 10.1155/2021/5940433] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Revised: 07/03/2021] [Accepted: 08/16/2021] [Indexed: 12/28/2022]
Abstract
Wireless capsule endoscopy is a noninvasive wireless imaging technology that has become increasingly popular in recent years. One of its major drawbacks is that it generates a large number of images that must be analyzed by medical personnel, which takes time. Various research groups have proposed different image processing and machine learning techniques to classify gastrointestinal tract diseases. In this research, traditional image processing algorithms and a data augmentation technique are combined with adjusted pretrained deep convolutional neural networks to classify diseases of the gastrointestinal tract from wireless endoscopy images. We take advantage of the pretrained convolutional neural network (CNN) models VGG16, ResNet-18, and GoogLeNet, with adjusted fully connected and output layers. The proposed models are validated on a dataset consisting of 6702 images in 8 classes. The VGG16 model achieved the highest results, with 96.33% accuracy, 96.37% recall, 96.5% precision, and 96.5% F1-measure. Compared to other state-of-the-art models, the VGG16 model also has the highest Matthews correlation coefficient (0.95) and Cohen's kappa score (0.96).
25
Guo X, Zhang L, Hao Y, Zhang L, Liu Z, Liu J. Multiple abnormality classification in wireless capsule endoscopy images based on EfficientNet using attention mechanism. THE REVIEW OF SCIENTIFIC INSTRUMENTS 2021; 92:094102. [PMID: 34598534 DOI: 10.1063/5.0054161] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Accepted: 08/13/2021] [Indexed: 06/13/2023]
Abstract
The wireless capsule endoscopy (WCE) procedure produces tens of thousands of images of the digestive tract, making the manual reading process highly challenging. Convolutional neural networks are used to automatically detect lesions in WCE images; however, studies on clinical multilesion detection are scarce, and it is difficult to effectively balance sensitivity across multiple lesions. A strategy for detecting multiple lesions is proposed, wherein common vascular and inflammatory lesions can be automatically and quickly detected in capsule endoscopic images. Based on weakly supervised learning, EfficientNet is fine-tuned to extract the endoscopic image features. Combining spatial and channel features, the proposed attention network is then used as a classifier to obtain three classifications. The accuracy and speed of the model were compared with those of the ResNet121 and InceptionNetV4 models on a public WCE image dataset obtained from 4143 subjects. On the computer-assisted diagnosis for capsule endoscopy database, the method gives a sensitivity of 96.67% for vascular lesions and 93.33% for inflammatory lesions; the precision was 92.80% for vascular lesions and 95.73% for inflammatory lesions. The accuracy was 96.11%, which is 1.11% higher than that of the latest InceptionNetV4 network. Prediction for an image requires only 14 ms, striking a good balance between accuracy and speed. This strategy can serve as an auxiliary diagnostic method to help specialists read clinical capsule endoscopy images rapidly.
Affiliation(s)
- Xudong Guo, Lulu Zhang, Linqi Zhang, Zhang Liu, Jiannan Liu: School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Youguo Hao: Department of Rehabilitation, Shanghai Putuo People's Hospital, Shanghai 200060, China