1
Wan JJ, Zhu PC, Chen BL, Yu YT. A semantic feature enhanced YOLOv5-based network for polyp detection from colonoscopy images. Sci Rep 2024; 14:15478. [PMID: 38969765; PMCID: PMC11226707; DOI: 10.1038/s41598-024-66642-5]
Abstract
Colorectal cancer (CRC) is a common digestive system tumor with high morbidity and mortality worldwide. Computer-assisted colonoscopy for polyp detection is relatively mature, but it still faces challenges such as missed and false detections, so improving the accuracy of polyp detection is central to effective colonoscopy. To address this problem, this paper proposes an improved YOLOv5-based method for detecting cancerous polyps in colorectal cancer screening. The method introduces a new structure, called P-C3, into the backbone and neck of the model to enhance feature expression. In addition, a contextual feature augmentation module is added at the bottom of the backbone network to enlarge the receptive field for multi-scale feature information and to focus on polyp features through a coordinate attention mechanism. Experimental results show that, compared with several traditional object detection algorithms, the proposed model offers significant advantages in polyp detection accuracy, particularly in recall, which largely alleviates the problem of missed polyps. This work can help endoscopists improve the polyp/adenoma detection rate during colonoscopy and has practical significance for clinical work.
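The recall gain emphasized above corresponds to fewer missed polyps. As a rough illustration (not the paper's evaluation code), detection recall is typically computed by matching predicted boxes to ground-truth boxes at an IoU threshold; the boxes below are hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_recall(gt_boxes, pred_boxes, thr=0.5):
    """Fraction of ground-truth polyps matched by some prediction at IoU >= thr."""
    hits = sum(any(iou(g, p) >= thr for p in pred_boxes) for g in gt_boxes)
    return hits / len(gt_boxes) if gt_boxes else 1.0

gt = [[10, 10, 50, 50], [60, 60, 90, 90]]   # two annotated polyps
pred = [[12, 8, 48, 52]]                    # detector found only the first
print(detection_recall(gt, pred))           # 0.5: one polyp was missed
```

A missed polyp lowers this number directly, which is why the authors single out recall as the clinically important metric.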
Affiliation(s)
- Jing-Jing Wan
- Department of Gastroenterology, The Second People's Hospital of Huai'an, The Affiliated Huai'an Hospital of Xuzhou Medical University, Huaian, 223023, Jiangsu, China
- Peng-Cheng Zhu
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Bo-Lun Chen
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
- Yong-Tao Yu
- Faculty of Computer and Software Engineering, Huaiyin Institute of Technology, Huaian, 223003, China
2
Lin CY, Wu JCH, Kuan YM, Liu YC, Chang PY, Chen JP, Lu HHS, Lee OKS. Precision Identification of Locally Advanced Rectal Cancer in Denoised CT Scans Using EfficientNet and Voting System Algorithms. Bioengineering (Basel) 2024; 11:399. [PMID: 38671820; PMCID: PMC11048699; DOI: 10.3390/bioengineering11040399]
Abstract
BACKGROUND AND OBJECTIVE Locally advanced rectal cancer (LARC) poses significant treatment challenges due to its location and high recurrence rates. Accurate early detection is vital for treatment planning. Because magnetic resonance imaging (MRI) is resource-intensive, this study explores using artificial intelligence (AI) to interpret computed tomography (CT) scans as an alternative, providing a quicker, more accessible diagnostic tool for LARC. METHODS In this retrospective study, CT images of 1070 T3-4 rectal cancer patients from 2010 to 2022 were analyzed. AI models, trained on 739 cases, were validated using two test sets of 134 and 197 cases. Using nonlocal means filtering, dynamic histogram equalization, and the EfficientNetB0 algorithm, we identified images featuring characteristics of a positive circumferential resection margin (CRM) for the diagnosis of LARC. In the second stage, both hard and soft voting systems were used to determine the LARC status of each case, with the soft voting system being the novel contribution for improved case identification accuracy. The local recurrence rates and overall survival of the cases predicted by our model were assessed to underscore its clinical value. RESULTS The AI model exhibited high accuracy in identifying CRM-positive images, achieving an area under the curve (AUC) of 0.89 in the first test set and 0.86 in the second. In a patient-based analysis, the model reached AUCs of 0.84 and 0.79 using the hard voting system, and 0.93 and 0.88, respectively, using the soft voting system. Notably, AI-identified LARC cases exhibited a significantly higher five-year local recurrence rate and a trend toward increased mortality across various thresholds. Furthermore, the model's capability to predict adverse clinical outcomes was superior to that of traditional assessments.
CONCLUSION AI can precisely identify CRM-positive LARC cases from CT images, signaling increased local recurrence and mortality rates. Our study presents a swifter and more reliable method for detecting LARC than traditional CT or MRI assessment.
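The two patient-level voting schemes contrasted above can be sketched generically; the per-slice probabilities below are made up and this is not the study's implementation:

```python
import numpy as np

# Hypothetical CRM-positive probabilities for one patient's CT slices,
# as produced by a single image-level classifier.
probs = np.array([0.9, 0.8, 0.4, 0.3, 0.7])

# Hard voting: each slice casts a binary vote, majority decides the patient.
hard_votes = (probs >= 0.5).astype(int)
hard_positive = hard_votes.sum() > len(hard_votes) / 2

# Soft voting: average the raw probabilities first, threshold once.
soft_score = probs.mean()
soft_positive = soft_score >= 0.5

print(hard_positive, round(soft_score, 2), soft_positive)  # True 0.62 True
```

Soft voting keeps the confidence information that hard voting discards, which is one plausible reason the soft scheme reached the higher patient-level AUCs reported above.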
Affiliation(s)
- Chun-Yu Lin
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan
- Division of Colorectal Surgery, Department of Surgery, Taichung Veterans General Hospital, Taichung 40705, Taiwan
- School of Medicine, National Defense Medical Center, Taipei 11490, Taiwan
- Jacky Chung-Hao Wu
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
- Yen-Ming Kuan
- Institute of Multimedia Engineering, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
- Yi-Chun Liu
- Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung 402202, Taiwan
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung 40705, Taiwan
- Pi-Yi Chang
- Department of Radiology, Taichung Veterans General Hospital, Taichung 40705, Taiwan
- Jun-Peng Chen
- Biostatistics Task Force, Taichung Veterans General Hospital, Taichung 40705, Taiwan
- Henry Horng-Shing Lu
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
- Department of Statistics and Data Science, Cornell University, Ithaca, NY 14853, USA
- Oscar Kuang-Sheng Lee
- Institute of Clinical Medicine, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan
- Stem Cell Research Center, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Department of Orthopedics, China Medical University Hospital, Taichung 40402, Taiwan
- Center for Translational Genomics & Regenerative Medicine Research, China Medical University Hospital, Taichung 40402, Taiwan
3
Yengec-Tasdemir SB, Aydin Z, Akay E, Dogan S, Yilmaz B. An effective colorectal polyp classification for histopathological images based on supervised contrastive learning. Comput Biol Med 2024; 172:108267. [PMID: 38479197; DOI: 10.1016/j.compbiomed.2024.108267]
Abstract
Early detection of colon adenomatous polyps is pivotal in reducing colon cancer risk. In this context, accurately distinguishing adenomatous polyp subtypes, especially tubular and tubulovillous, from hyperplastic variants is crucial. This study introduces a computer-aided diagnosis system optimized for this task. Our system employs supervised contrastive learning to ensure precise classification of colon histopathology images. Significantly, we have integrated the Big Transfer model, which has gained prominence for its adaptability to visual tasks in medical imaging. Our approach discerns between in-class and out-of-class images, thereby elevating its discriminatory power for polyp subtypes. We validated our system using two datasets: a specially curated one and the publicly accessible UniToPatho dataset. The results reveal that our model markedly surpasses traditional deep convolutional neural networks, registering classification accuracies of 87.1% and 70.3% on the custom and UniToPatho datasets, respectively. These results underscore the potential of our model for polyp classification.
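Supervised contrastive learning, the core technique named above, pulls together embeddings that share a label and pushes apart the rest. A minimal NumPy sketch of the loss, following the standard formulation by Khosla et al.; the embeddings and labels are synthetic, not data from this study:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings z (n, d)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise
    sim = z @ z.T / tau                                # temperature-scaled similarities
    n = len(z)
    mask_self = ~np.eye(n, dtype=bool)                 # exclude self-pairs
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    exp = np.exp(logits) * mask_self
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & mask_self
    # negative mean log-probability of same-label pairs, per anchor
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                 # toy embedding batch
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # two polyp subtypes
print(supcon_loss(z, labels) > 0)            # True: a positive scalar loss
```

Minimizing this loss tightens per-subtype clusters in embedding space, which is what gives the classifier its discriminatory power between adenomatous subtypes.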
Affiliation(s)
- Sena Busra Yengec-Tasdemir
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT39DT, United Kingdom
- Zafer Aydin
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
- Ebru Akay
- Pathology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Serkan Dogan
- Gastroenterology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Bulent Yilmaz
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Electrical Engineering, Gulf University for Science and Technology, Mishref, 40005, Kuwait
4
Gabralla LA, Hussien AM, AlMohimeed A, Saleh H, Alsekait DM, El-Sappagh S, Ali AA, Refaat Hassan M. Automated Diagnosis for Colon Cancer Diseases Using Stacking Transformer Models and Explainable Artificial Intelligence. Diagnostics (Basel) 2023; 13:2939. [PMID: 37761306; PMCID: PMC10529133; DOI: 10.3390/diagnostics13182939]
Abstract
Colon cancer was the third most common cancer type worldwide in 2020, with almost two million cases diagnosed. Consequently, new, highly accurate techniques for detecting colon cancer enable early and successful treatment of this disease. This paper proposes a heterogeneous stacking deep learning model to predict colon cancer. Stacking integrates pretrained convolutional neural network (CNN) models with a metalearner to enhance colon cancer prediction performance. The proposed model is compared with VGG16, InceptionV3, ResNet50, and DenseNet121 using different evaluation metrics. Furthermore, the proposed models are evaluated using the LC25000 and WCE binary and multiclass colon cancer image datasets. The results show that the stacking models achieved the highest performance on both datasets: accuracy, recall, precision, and F1 score reached 100% on the LC25000 dataset and 98% on the WCE colon image dataset. Stacking-SVM outperformed the existing single models (VGG16, InceptionV3, ResNet50, and DenseNet121) because it combines the outputs of multiple single models and trains and evaluates a metalearner on them, producing better predictions than any single model. The black-box deep learning models are interpreted using explainable AI (XAI).
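The stacking idea described above (base CNN outputs feeding a metalearner) can be sketched generically. Here the base-model probabilities are simulated as noisy views of the label, and the metalearner is a small logistic regression trained by gradient descent; none of this is the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 2, n)  # toy binary labels (cancer / no cancer)

# Hypothetical class-1 probabilities from three base CNNs, standing in for
# e.g. VGG/Inception/ResNet outputs: each is the label plus noise.
base_probs = np.stack(
    [np.clip(y + rng.normal(0, s, n), 0, 1) for s in (0.3, 0.4, 0.5)], axis=1
)

# Metalearner: logistic regression trained on the stacked base outputs.
X = np.hstack([base_probs, np.ones((n, 1))])  # bias column
w = np.zeros(X.shape[1])
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n              # gradient step on log-loss

stacked_pred = (1 / (1 + np.exp(-X @ w)) >= 0.5).astype(int)
print(round((stacked_pred == y).mean(), 2))   # stacked training accuracy
```

The metalearner sees where the base models agree and disagree, which is the mechanism behind the improvement the abstract attributes to Stacking-SVM.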
Affiliation(s)
- Lubna Abdelkareim Gabralla
- Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ali Mohamed Hussien
- Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
- Abdulaziz AlMohimeed
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Hager Saleh
- Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada 84511, Egypt
- Deema Mohammed Alsekait
- Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Shaker El-Sappagh
- Faculty of Computer Science and Engineering, Galala University, Suez 34511, Egypt
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha 13518, Egypt
- Abdelmgeid A. Ali
- Faculty of Computers and Information, Minia University, Minia 61519, Egypt
- Moatamad Refaat Hassan
- Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
5
Lo CM, Yang YW, Lin JK, Lin TC, Chen WS, Yang SH, Chang SC, Wang HS, Lan YT, Lin HH, Huang SC, Cheng HH, Jiang JK, Lin CC. Modeling the survival of colorectal cancer patients based on colonoscopic features in a feature ensemble vision transformer. Comput Med Imaging Graph 2023; 107:102242. [PMID: 37172354; DOI: 10.1016/j.compmedimag.2023.102242]
Abstract
The prognosis of patients with colorectal cancer (CRC) mostly relies on the classic tumor node metastasis (TNM) staging classification. A more accurate and convenient prediction model would provide a better prognosis and assist in treatment. Patients who underwent an operation for CRC from May 2014 to December 2017 were enrolled. The proposed feature ensemble vision transformer (FEViT) used ensemble classifiers to combine relevant colonoscopy features from a pretrained vision transformer with clinical features, including sex, age, family history of CRC, and tumor location, to establish the prognostic model. A total of 1729 colonoscopy images were included in this retrospective study. For the prediction of patient survival, FEViT achieved an accuracy of 94% with an area under the receiver operating characteristic curve of 0.93, outperforming the TNM staging classification (90%, 0.83) in the experiment. FEViT mitigated the limited receptive field and vanishing gradients of conventional convolutional neural networks and was a relatively effective and efficient procedure. The promising accuracy of FEViT in modeling survival makes the prognosis of CRC patients more predictable and practical.
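FEViT's fusion of pretrained-transformer image features with clinical variables, followed by an ensemble vote, can be illustrated schematically. Every array below is synthetic and the "classifiers" are random linear stand-ins, not the trained ensemble:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 768-d embedding from a pretrained vision transformer plus
# four encoded clinical features (sex, age, family history, tumor location),
# fused by simple concatenation.
vit_embedding = rng.normal(size=768)
clinical = np.array([1.0, 0.63, 0.0, 2.0])   # toy encoded values
fused = np.concatenate([vit_embedding, clinical])

def predict(weights, x):
    """One linear classifier's binary survival vote."""
    return int(x @ weights >= 0)

# Ensemble: majority vote over several independently trained classifiers
# (random weight vectors here, standing in for fitted models).
classifiers = [rng.normal(size=fused.size) for _ in range(5)]
votes = [predict(w, fused) for w in classifiers]
survival_pred = int(sum(votes) > len(votes) / 2)
print(fused.size, survival_pred in (0, 1))   # 772 True
```

The point of the sketch is the fusion step: image and clinical features live in one vector, so every ensemble member sees both modalities.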
Affiliation(s)
- Chung-Ming Lo
- Graduate Institute of Library, Information and Archival Studies, National Chengchi University, Taipei, Taiwan
- Yi-Wen Yang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jen-Kou Lin
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Tzu-Chen Lin
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Shone Chen
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Shung-Haur Yang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Surgery, National Yang Ming Chiao Tung University Hospital, Yilan, Taiwan
- Shih-Ching Chang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Huann-Sheng Wang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yuan-Tzu Lan
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Hung-Hsin Lin
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Sheng-Chieh Huang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Hou-Hsuan Cheng
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jeng-Kai Jiang
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chun-Chi Lin
- Division of Colon and Rectal Surgery, Department of Surgery, Taipei Veterans General Hospital, Taipei, Taiwan; Department of Surgery, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
6
Tang S, Yu X, Cheang CF, Ji X, Yu HH, Choi IC. CLELNet: A continual learning network for esophageal lesion analysis on endoscopic images. Comput Methods Programs Biomed 2023; 231:107399. [PMID: 36780717; DOI: 10.1016/j.cmpb.2023.107399]
Abstract
BACKGROUND AND OBJECTIVE A deep learning-based intelligent diagnosis system can significantly reduce the burden on endoscopists in the daily analysis of esophageal lesions. Given the need to add new tasks to such a diagnosis system, a deep learning model that can incrementally train a series of tasks on endoscopic images is essential for identifying the types and regions of esophageal lesions. METHODS In this paper, we propose a continual learning-based esophageal lesion network (CLELNet), in which a convolutional autoencoder extracts representative features of endoscopic images across different esophageal lesions. The proposed CLELNet consists of shared layers and task-specific layers: shared layers extract common features among different lesions, while task-specific layers complete the individual tasks. The first two tasks trained by CLELNet are classification (task 1) and segmentation (task 2). We collected a dataset of esophageal endoscopic images from Macau Kiang Wu Hospital for training and testing CLELNet. RESULTS The experimental results showed that the classification accuracy of task 1 was 95.96%, and the Intersection over Union and Dice similarity coefficient of task 2 were 65.66% and 78.08%, respectively. CONCLUSIONS The proposed CLELNet can realize task-incremental learning without forgetting previous tasks and can thus serve as a useful computer-aided diagnosis system for esophageal lesion analysis.
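The shared-layer/task-specific-layer split described in METHODS can be sketched as a trunk with per-task heads; the weights below are random placeholders, not CLELNet's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda x: np.maximum(x, 0)

# Shared trunk: extracts common features used by every task.
W_shared = rng.normal(size=(64, 32)) * 0.1

# Task-specific heads, added incrementally; keeping earlier heads (and,
# in a full continual-learning setup, the trunk) frozen is what prevents
# a new task from overwriting a previous one.
heads = {
    "classification": rng.normal(size=(32, 4)) * 0.1,   # 4 toy lesion types
    "segmentation":   rng.normal(size=(32, 16)) * 0.1,  # coarse 4x4 mask
}

def forward(x, task):
    features = relu(x @ W_shared)   # shared representation
    return features @ heads[task]   # task-specific output

x = rng.normal(size=64)             # stand-in encoder output for one image
print(forward(x, "classification").shape, forward(x, "segmentation").shape)
# (4,) (16,)
```

Adding "task 3" later would mean adding one more entry to `heads`, leaving the existing heads untouched.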
Affiliation(s)
- Suigu Tang
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau SAR
- Xiaoyuan Yu
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau SAR
- Chak Fong Cheang
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau SAR
- Xiaoyu Ji
- Faculty of Innovation Engineering-School of Computer Science and Engineering, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau SAR
- Hon Ho Yu
- Kiang Wu Hospital, Rua de Coelho do Amaral, Macau SAR
- I Cheong Choi
- Kiang Wu Hospital, Rua de Coelho do Amaral, Macau SAR
7
Yengec-Tasdemir SB, Aydin Z, Akay E, Dogan S, Yilmaz B. Improved classification of colorectal polyps on histopathological images with ensemble learning and stain normalization. Comput Methods Programs Biomed 2023; 232:107441. [PMID: 36905748; DOI: 10.1016/j.cmpb.2023.107441]
Abstract
BACKGROUND AND OBJECTIVE Early detection of colon adenomatous polyps is critically important because it significantly reduces the risk of developing colon cancer in the future. The key challenge in detecting adenomatous polyps is differentiating them from visually similar non-adenomatous tissue, which currently depends solely on the experience of the pathologist. To assist pathologists, the objective of this work is to provide a novel non-knowledge-based clinical decision support system (CDSS) for improved detection of adenomatous polyps in colon histopathology images. METHODS The domain shift problem arises when the training and test data come from different distributions with diverse settings and unequal color levels. This problem, which can be tackled by stain normalization techniques, prevents machine learning models from attaining higher classification accuracies. The proposed method integrates stain normalization techniques with an ensemble of accurate, scalable, and robust CNN variants, ConvNeXts. The improvement is empirically analyzed for five widely employed stain normalization techniques. The classification performance of the proposed method is evaluated on three datasets comprising more than 10k colon histopathology images. RESULTS Comprehensive experiments demonstrate that the proposed method outperforms state-of-the-art deep convolutional neural network based models, attaining 95% classification accuracy on the curated dataset, and 91.1% and 90% on the EBHI and UniToPatho public datasets, respectively. CONCLUSIONS These results show that the proposed method can accurately classify colon adenomatous polyps on histopathology images and retains remarkable performance even on datasets drawn from different distributions, indicating notable generalization ability.
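Stain normalization, the key preprocessing step here, maps a source slide's color statistics onto a reference slide. Below is a deliberately simplified Reinhard-style sketch that matches per-channel mean and standard deviation directly in RGB (the original Reinhard method operates in Lab color space, and the paper evaluates five different techniques, not this one specifically):

```python
import numpy as np

def normalize_stain(source, target):
    """Match each RGB channel's mean/std of `source` to those of `target`."""
    src = source.astype(float)
    tgt = target.astype(float)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * t_sd + t_mu
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(4)
src = rng.integers(100, 200, size=(32, 32, 3))   # toy darker-stained tile
tgt = rng.integers(50, 250, size=(32, 32, 3))    # toy reference tile
norm = normalize_stain(src, tgt)
print(norm.shape, norm.dtype)   # (32, 32, 3) uint8
```

After this mapping the classifier sees tiles with a consistent color distribution, which is how stain normalization mitigates the domain shift described above.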
Affiliation(s)
- Sena Busra Yengec-Tasdemir
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT39DT, United Kingdom; Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
- Zafer Aydin
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
- Ebru Akay
- Pathology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Serkan Dogan
- Gastroenterology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Bulent Yilmaz
- Department of Electrical Engineering, Gulf University for Science and Technology, Mishref, 40005, Kuwait; Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
8
ELKarazle K, Raman V, Then P, Chua C. Detection of Colorectal Polyps from Colonoscopy Using Machine Learning: A Survey on Modern Techniques. Sensors (Basel) 2023; 23:1225. [PMID: 36772263; PMCID: PMC9953705; DOI: 10.3390/s23031225]
Abstract
Given the increased interest in artificial intelligence as an assistive tool in the medical sector, colorectal polyp detection and classification using deep learning techniques has been an active area of research in recent years. The motivation for this topic is that physicians occasionally miss polyps due to fatigue or lack of experience in carrying out the procedure. Unidentified polyps can cause further complications and ultimately lead to colorectal cancer (CRC), one of the leading causes of cancer mortality. Although various techniques have been presented recently, several key issues, such as insufficient training data, white light reflection, and blur, affect the performance of such methods. This paper presents a survey of recently proposed methods for detecting polyps from colonoscopy. The survey covers benchmark dataset analysis, evaluation metrics, common challenges, standard approaches to building polyp detectors, and a review of the latest work in the literature. We conclude with a precise analysis of the gaps and trends discovered in the reviewed literature for future work.
Affiliation(s)
- Khaled ELKarazle
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Valliappan Raman
- Department of Artificial Intelligence and Data Science, Coimbatore Institute of Technology, Coimbatore 641014, India
- Patrick Then
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Caslon Chua
- Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne 3122, Australia
9
Tharwat M, Sakr NA, El-Sappagh S, Soliman H, Kwak KS, Elmogy M. Colon Cancer Diagnosis Based on Machine Learning and Deep Learning: Modalities and Analysis Techniques. Sensors (Basel) 2022; 22:9250. [PMID: 36501951; PMCID: PMC9739266; DOI: 10.3390/s22239250]
Abstract
The treatment and diagnosis of colon cancer pose social and economic challenges due to its high mortality rate; every year, almost half a million people around the world contract colon cancer. Determining the grade of colon cancer mainly depends on analyzing glandular structure in the tissue region, which has led to various screening tests that can be used to investigate polyp images and colorectal cancer. This article presents a comprehensive survey on the diagnosis of colon cancer. It covers many aspects of colon cancer, such as its symptoms and grades, the available imaging modalities (particularly the histopathology images used for analysis), and common diagnosis systems. Furthermore, the most widely used datasets and performance evaluation metrics are discussed. We provide a comprehensive review of current studies on colon cancer, classified into deep-learning (DL) and machine-learning (ML) techniques, and identify their main strengths and limitations. These techniques support the identification of early-stage cancer, enabling earlier treatment and a lower mortality rate than treatment that begins after symptoms develop. In addition, these methods can help prevent colorectal cancer from progressing through the removal of pre-malignant polyps, which can be achieved using screening tests that make the disease easier to diagnose. Finally, the existing challenges and future research directions that open the way for future work in this field are presented.
Affiliation(s)
- Mai Tharwat
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Nehal A. Sakr
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13512, Egypt
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Hassan Soliman
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Kyung-Sup Kwak
- Department of Information and Communication Engineering, Inha University, Incheon 22212, Republic of Korea
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
10
Karaman A, Karaboga D, Pacal I, Akay B, Basturk A, Nalbantoglu U, Coskun S, Sahin O. Hyper-parameter optimization of deep learning architectures using artificial bee colony (ABC) algorithm for high performance real-time automatic colorectal cancer (CRC) polyp detection. Appl Intell 2022. [DOI: 10.1007/s10489-022-04299-1]
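The artificial bee colony (ABC) algorithm named in this title searches a space of candidate solutions with employed, onlooker, and scout bees. A minimal sketch on a made-up two-dimensional "validation loss" surrogate (the real paper tunes a polyp detector's hyper-parameters; the objective, bounds, and colony sizes below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical surrogate loss over two hyper-parameters
# (log learning rate, dropout), minimised at (-3, 0.2).
def val_loss(x):
    return (x[0] + 3) ** 2 + 5 * (x[1] - 0.2) ** 2

low, high = np.array([-6.0, 0.0]), np.array([-1.0, 0.8])
n_sources, limit = 10, 15
food = rng.uniform(low, high, size=(n_sources, 2))   # candidate solutions
fit = np.array([val_loss(f) for f in food])
trials = np.zeros(n_sources, dtype=int)

def try_neighbour(i):
    """Perturb source i along one dimension toward a random partner; keep if better."""
    k, j = rng.integers(n_sources), rng.integers(2)
    cand = food[i].copy()
    cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
    cand = np.clip(cand, low, high)
    c_fit = val_loss(cand)
    if c_fit < fit[i]:
        food[i], fit[i], trials[i] = cand, c_fit, 0
    else:
        trials[i] += 1

for _ in range(100):
    for i in range(n_sources):                # employed bee phase
        try_neighbour(i)
    p = 1 / (1 + fit); p /= p.sum()           # fitness-proportional selection
    for i in rng.choice(n_sources, n_sources, p=p):
        try_neighbour(i)                      # onlooker bee phase
    worn = trials >= limit                    # scout phase: abandon stale sources
    food[worn] = rng.uniform(low, high, size=(int(worn.sum()), 2))
    fit[worn] = [val_loss(f) for f in food[worn]]
    trials[worn] = 0

best = food[fit.argmin()]
print(np.round(best, 2))   # hyper-parameters near the surrogate optimum
```

Each surrogate evaluation would, in practice, be one training-and-validation run of the detector, which is why derivative-free searches like ABC are attractive for hyper-parameter tuning.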
11
Narasimha Raju AS, Jayavel K, Rajalakshmi T. ColoRectalCADx: Expeditious Recognition of Colorectal Cancer with Integrated Convolutional Neural Networks and Visual Explanations Using Mixed Dataset Evidence. Comput Math Methods Med 2022; 2022:8723957. [PMID: 36404909; PMCID: PMC9671728; DOI: 10.1155/2022/8723957]
Abstract
Colorectal cancer typically affects the gastrointestinal tract within the human body. Colonoscopy is one of the most accurate methods of detecting cancer. The current system facilitates the identification of cancer by computer-assisted diagnosis (CADx) systems with a limited number of deep learning methods. It does not imply the depiction of mixed datasets for the functioning of the system. The proposed system, called ColoRectalCADx, is supported by deep learning (DL) models suitable for cancer research. The CADx system comprises five stages: convolutional neural networks (CNN), support vector machine (SVM), long short-term memory (LSTM), visual explanation such as gradient-weighted class activation mapping (Grad-CAM), and semantic segmentation phases. Here, the key components of the CADx system are equipped with 9 individual and 12 integrated CNNs, implying that the system consists mainly of investigational experiments with a total of 21 CNNs. In the subsequent phase, the CADx has a combination of CNNs of concatenated transfer learning functions associated with the machine SVM classification. Additional classification is applied to ensure effective transfer of results from CNN to LSTM. The system is mainly made up of a combination of CVC Clinic DB, Kvasir2, and Hyper Kvasir input as a mixed dataset. After CNN and LSTM, in advanced stage, malignancies are detected by using a better polyp recognition technique with Grad-CAM and semantic segmentation using U-Net. CADx results have been stored on Google Cloud for record retention. In these experiments, among all the CNNs, the individual CNN DenseNet-201 (87.1% training and 84.7% testing accuracies) and the integrated CNN ADaDR-22 (84.61% training and 82.17% testing accuracies) were the most efficient for cancer detection with the CNN+LSTM model. ColoRectalCADx accurately identifies cancer through individual CNN DesnseNet-201 and integrated CNN ADaDR-22. 
In Grad-CAM's visual explanations, the CNN DenseNet-201 displays precise visualization of polyps, and U-Net provides precise segmentation of malignant polyps.
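The Grad-CAM visual explanation referenced above reduces to a short computation: given one convolutional layer's activations and the gradients of the class score with respect to them, the heatmap is a ReLU-ed, gradient-weighted sum of the feature maps. A minimal NumPy sketch of that step (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap from one convolutional layer.

    feature_maps: (C, H, W) activations of the chosen layer.
    gradients:    (C, H, W) gradients of the class score w.r.t. those activations.
    Returns an (H, W) map normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients (alpha_c in the paper).
    weights = gradients.mean(axis=(1, 2))                        # (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                                    # normalize for overlay
    return cam
```

The normalized map is typically upsampled to the input resolution and overlaid on the colonoscopy frame to show which regions drove the polyp prediction.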
Collapse
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
- Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
- T. Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, 603203 Chennai, India
Collapse
|
12
|
Zhou JX, Yang Z, Xi DH, Dai SJ, Feng ZQ, Li JY, Xu W, Wang H. Enhanced segmentation of gastrointestinal polyps from capsule endoscopy images with artifacts using ensemble learning. World J Gastroenterol 2022; 28:5931-5943. [PMID: 36405108 PMCID: PMC9669827 DOI: 10.3748/wjg.v28.i41.5931] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 08/31/2022] [Accepted: 10/19/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Endoscopy artifacts are widespread in real capsule endoscopy (CE) images but not in high-quality standard datasets.
AIM To improve the segmentation performance of polyps from CE images with artifacts based on ensemble learning.
METHODS We collected 277 polyp images with CE artifacts from 5760 h of videos from 480 patients at Guangzhou First People’s Hospital from January 2016 to December 2019. Two public high-quality standard external datasets were retrieved and used for the comparison experiments. For each dataset, we randomly segmented the data into training, validation, and testing sets for model training, selection, and testing. We compared the performance of the base models and the ensemble model in segmenting polyps from images with artifacts.
RESULTS The performance of the semantic segmentation model was affected by artifacts in the sample images, which also affected the results of polyp detection by CE using a single model. The evaluation based on real datasets with artifacts and standard datasets showed that the ensemble model of all state-of-the-art models performed better than the best corresponding base learner on the real dataset with artifacts. Compared with the corresponding optimal base learners, the intersection over union (IoU) and dice of the ensemble learning model increased to different degrees, ranging from 0.08% to 7.01% and 0.61% to 4.93%, respectively. Moreover, in the standard datasets without artifacts, most of the ensemble models were slightly better than the base learner, as demonstrated by the IoU and dice increases ranging from -0.28% to 1.20% and -0.61% to 0.76%, respectively.
CONCLUSION Ensemble learning can improve the segmentation accuracy of polyps from CE images with artifacts. Our results demonstrated an improvement in the detection rate of polyps with interference from artifacts.
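The ensemble evaluated above combines the base segmenters' outputs and scores the result with IoU and Dice. A minimal NumPy sketch of soft-voting mask fusion and the two metrics (illustrative, not the authors' pipeline, which ensembles several state-of-the-art models):

```python
import numpy as np

def ensemble_mask(prob_maps, threshold=0.5):
    """Average per-model polyp probability maps, then threshold (soft voting)."""
    return (np.mean(prob_maps, axis=0) >= threshold).astype(np.uint8)

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0
```

Averaging the probability maps before thresholding lets models that are individually misled by artifacts be outvoted by the rest, which is the effect the reported IoU/Dice gains reflect.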
Collapse
Affiliation(s)
- Jun-Xiao Zhou
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
- Zhan Yang
- School of Information, Renmin University of China, Beijing 100872, China
- Ding-Hao Xi
- School of Information, Renmin University of China, Beijing 100872, China
- Shou-Jun Dai
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
- Zhi-Qiang Feng
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
- Jun-Yan Li
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
- Wei Xu
- School of Information, Renmin University of China, Beijing 100872, China
- Hong Wang
- Department of Gastroenterology and Hepatology, Guangzhou First People’s Hospital, Guangzhou 510180, Guangdong Province, China
Collapse
|
13
|
Feng B, Xu C, An Z. AI recognition preprocessing algorithm for polyp based on illumination equalization and highlight restoration. INTERNATIONAL JOURNAL OF DATA SCIENCE AND ANALYTICS 2022. [DOI: 10.1007/s41060-022-00353-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
|
14
|
Narasimha Raju AS, Jayavel K, Rajalakshmi T. Dexterous Identification of Carcinoma through ColoRectalCADx with Dichotomous Fusion CNN and UNet Semantic Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:4325412. [PMID: 36262620 PMCID: PMC9576362 DOI: 10.1155/2022/4325412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 08/16/2022] [Accepted: 08/20/2022] [Indexed: 11/18/2022]
Abstract
Human colorectal disorders in the digestive tract are recognized by reference colonoscopy. The current system recognizes cancer through a three-stage system that utilizes two sets of colonoscopy data. However, identifying polyps by visualization has not been addressed. The proposed system is a five-stage system called ColoRectalCADx, which provides three publicly accessible datasets as input data for cancer detection. The three main datasets are CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the image preprocessing stages, system experiments were performed with the seven prominent convolutional neural networks (CNNs) (end-to-end) and nine fusion CNN models to extract the spatial features. Afterwards, the end-to-end CNN and fusion features are processed with the Discrete Wavelet Transform (DWT) and classified with a Support Vector Machine (SVM), which was used to retrieve time and spatial frequency features. Experimentally, the results were obtained for five stages. For each of the three datasets, from stage 1 to stage 3, end-to-end CNN, DenseNet-201 obtained the best testing accuracy (98%, 87%, 84%), ((98%, 97%), (87%, 87%), (84%, 84%)), ((99.03%, 99%), (88.45%, 88%), (83.61%, 84%)). For each of the three datasets, from stage 2, CNN DaRD-22 fusion obtained the optimal test accuracy ((93%, 97%) (82%, 84%), (69%, 57%)). And for stage 4, ADaRDEV2-22 fusion achieved the best test accuracy ((95.73%, 94%), (81.20%, 81%), (72.56%, 58%)). For the input image segmentation datasets CVC Clinic-Seg, KvasirSeg, and Hyper Kvasir, malignant polyps were identified with the UNet CNN model. Here, the loss score datasets (CVC clinic DB was 0.7842, Kvasir2 was 0.6977, and Hyper Kvasir was 0.6910) were obtained.
Collapse
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Thulasi Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
Collapse
|
15
|
Adjei PE, Lonseko ZM, Du W, Zhang H, Rao N. Examining the effect of synthetic data augmentation in polyp detection and segmentation. Int J Comput Assist Radiol Surg 2022; 17:1289-1302. [PMID: 35678960 DOI: 10.1007/s11548-022-02651-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Accepted: 04/21/2022] [Indexed: 12/17/2022]
Abstract
PURPOSE As with several medical image analysis tasks based on deep learning, gastrointestinal image analysis is plagued with data scarcity, privacy concerns and an insufficient number of pathology samples. This study examines the generation and utility of synthetic samples of colonoscopy images with polyps for data augmentation. METHODS We modify and train a pix2pix model to generate synthetic colonoscopy samples with polyps to augment the original dataset. Subsequently, we create a variety of datasets by varying the quantity of synthetic samples and traditional augmentation samples, to train a U-Net network and Faster R-CNN model for segmentation and detection of polyps, respectively. We compare the performance of the models when trained with the resulting datasets in terms of F1 score, intersection over union, precision and recall. Further, we compare the performances of the models with unseen polyp datasets to assess their generalization ability. RESULTS The average F1 coefficient and intersection over union are improved with increasing number of synthetic samples in U-Net over all test datasets. The performance of the Faster R-CNN model is also improved in terms of polyp detection, while decreasing the false-negative rate. Further, the experimental results for polyp detection outperform similar studies in the literature on the ETIS-LaribPolypDB dataset. CONCLUSION By varying the quantity of synthetic and traditional augmentation, there is the potential to control the sensitivity of deep learning models in polyp segmentation and detection. Further, GAN-based augmentation is a viable option for improving the performance of models for polyp segmentation and detection.
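The dataset-composition experiment described above, varying the share of GAN-generated versus traditionally augmented samples, can be sketched as follows. `build_training_set` and the flip-based augmentation are illustrative stand-ins (the study uses a trained pix2pix generator and a broader set of traditional transforms):

```python
import numpy as np

def build_training_set(real, synthetic, n_synth, n_trad, rng=None):
    """Compose a training set from real images, n_synth generator outputs,
    and n_trad traditionally augmented copies (here: horizontal flips).

    `synthetic` stands in for pix2pix outputs; any generator could supply it.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Traditional augmentation: flip randomly chosen real images.
    trad = [np.fliplr(real[rng.integers(len(real))]) for _ in range(n_trad)]
    return list(real) + list(synthetic[:n_synth]) + trad
```

Sweeping `n_synth` and `n_trad` while retraining the segmenter/detector on each resulting set is the mechanism by which the study probes how augmentation mix affects model sensitivity.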
Collapse
Affiliation(s)
- Prince Ebenezer Adjei
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China; Department of Computer Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Zenebe Markos Lonseko
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Wenju Du
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Han Zhang
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Nini Rao
- Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu, 610054, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
Collapse
|
16
|
Huang D, Liu J, Zhou S, Tang W. Deep unsupervised endoscopic image enhancement based on multi-image fusion. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106800. [PMID: 35533420 DOI: 10.1016/j.cmpb.2022.106800] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Revised: 02/27/2022] [Accepted: 03/30/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE A deep unsupervised endoscopic image enhancement method is proposed based on multi-image fusion to achieve high quality endoscope images from poorly illuminated, low contrast and color deviated images through an unsupervised mapping and deep learning network without the need for ground truth. METHODS Firstly, three image enhancement methods are used to process original endoscopic images to obtain three derived images, which are then transformed into HSI color space. Secondly, a deep unsupervised multi-image fusion network (DerivedFuse) is proposed to extract and fuse features of the derived images accurately by utilizing a new no-reference quality metric as loss function. I-channel images of the three derived images are inputted into the DerivedFuse network to enhance the intensity component of the original image. Finally, a saturation adjustment function is proposed to adaptively adjust the saturation component of the HSI color space, enriching the color information of the original input image. RESULTS Three evaluation metrics: Entropy, Contrast Improvement Index (CII) and Average Gradient (AG) are used to evaluate the performance of the proposed method. The results are compared with those of fourteen state-of-the-art algorithms. Experiments on endoscopic image enhancement show that the Entropy value of our method is 3.27% higher than the optimal entropy value of comparison algorithms. The CII of our proposed method is 6.19% higher than that of comparison algorithms. The AG of our method is 7.83% higher than the optimal AG of comparison algorithms. CONCLUSIONS The proposed deep unsupervised multi-image fusion method preserves image detail and enhances endoscopic images with high contrast and rich, natural color, improving visual and image quality. Sixteen doctors and medical students have given their assessments on the proposed method for assisting clinical diagnoses.
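Two of the no-reference metrics quoted above, Entropy and Average Gradient, are standard quantities that can be computed directly from a grayscale image; a NumPy sketch follows (the paper's exact definitions, and its CII normalization, may differ in detail):

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit grayscale image (bits per pixel)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Mean magnitude of horizontal/vertical intensity differences."""
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

Higher entropy indicates richer intensity distribution and higher AG indicates sharper detail, which is why both rise when enhancement succeeds.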
Collapse
Affiliation(s)
- Dongjin Huang
- Shanghai Film Academy, Shanghai University, Room 304, No.2 Teaching Building, 149 Yanchang Road, Shanghai 200072, China.
- Jinhua Liu
- Shanghai Film Academy, Shanghai University, Room 304, No.2 Teaching Building, 149 Yanchang Road, Shanghai 200072, China
- Shuhua Zhou
- Shanghai Film Academy, Shanghai University, Room 304, No.2 Teaching Building, 149 Yanchang Road, Shanghai 200072, China
- Wen Tang
- The Faculty of Science, Design and Technology, University of Bournemouth, Poole, Dorset, UK
Collapse
|
17
|
A deep ensemble learning method for colorectal polyp classification with optimized network parameters. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03689-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Colorectal Cancer (CRC), a leading cause of cancer-related deaths, can be abated by timely polypectomy. Computer-aided classification of polyps helps endoscopists to resect timely without submitting the sample for histology. Deep learning-based algorithms are promoted for computer-aided colorectal polyp classification. However, the existing methods do not accommodate any information on hyperparametric settings essential for model optimisation. Furthermore, unlike the polyp types, i.e., hyperplastic and adenomatous, the third type, serrated adenoma, is difficult to classify due to its hybrid nature. Moreover, automated assessment of polyps is a challenging task due to the similarities in their patterns; therefore, the strength of individual weak learners is combined to form a weighted ensemble model for an accurate classification model by establishing the optimised hyperparameters. In contrast to existing studies on binary classification, multiclass classification requires evaluation through advanced measures. This study compared six existing Convolutional Neural Networks in addition to transfer learning and opted for the optimum-performing architectures for the ensemble model only. The performance evaluation on the UCI and PICCOLO datasets of the proposed method in terms of accuracy (96.3%, 81.2%), precision (95.5%, 82.4%), recall (97.2%, 81.1%), F1-score (96.3%, 81.3%) and model reliability using Cohen’s Kappa Coefficient (0.94, 0.62) shows the superiority over existing models. The outcomes of experiments by other studies on the same dataset yielded 82.5% accuracy with 72.7% recall by SVM and 85.9% accuracy with 87.6% recall by other deep learning methods. The proposed method demonstrates that a weighted ensemble of optimised networks along with data augmentation significantly boosts the performance of deep learning-based CAD.
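The weighted-ensemble idea above, combining weak learners with weights derived from their individual strength, is commonly realized as weighted soft voting over class probabilities. A NumPy sketch (weight choice and names are illustrative; the study derives its weights from optimised hyperparameter search):

```python
import numpy as np

def weighted_ensemble(probs, weights):
    """Combine per-model class probabilities with per-model weights.

    probs:   (n_models, n_samples, n_classes) softmax outputs.
    weights: (n_models,) non-negative scores, e.g. each model's validation accuracy.
    Returns predicted class indices per sample.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                       # normalize to sum to 1
    fused = np.tensordot(w, np.asarray(probs), axes=1)    # (n_samples, n_classes)
    return fused.argmax(axis=1)
```

Giving stronger learners larger weights lets the ensemble break ties toward the more reliable model, which is what lifts multiclass accuracy on hard classes such as serrated adenoma.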
Collapse
|
18
|
Sharma P, Balabantaray BK, Bora K, Mallik S, Kasugai K, Zhao Z. An Ensemble-Based Deep Convolutional Neural Network for Computer-Aided Polyps Identification From Colonoscopy. Front Genet 2022; 13:844391. [PMID: 35559018 PMCID: PMC9086187 DOI: 10.3389/fgene.2022.844391] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Accepted: 03/14/2022] [Indexed: 01/16/2023] Open
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death globally. Early detection and removal of precancerous polyps can significantly reduce the chance of CRC patient death. Currently, the polyp detection rate mainly depends on the skill and expertise of gastroenterologists. Over time, unidentified polyps can develop into cancer. Machine learning has recently emerged as a powerful method in assisting clinical diagnosis. Several classification models have been proposed to identify polyps, but their performance has not been comparable to an expert endoscopist yet. Here, we propose a multiple classifier consultation strategy to create an effective and powerful classifier for polyp identification. This strategy benefits from recent findings that different classification models can better learn and extract various information within the image. Therefore, our ensemble classifier can derive a more consequential decision than each individual classifier. The extracted combined information inherits the ResNet's advantage of residual connection, while it also extracts objects when covered by occlusions through the depth-wise separable convolution layers of the Xception model. Here, we applied our strategy to still frames extracted from a colonoscopy video. It outperformed other state-of-the-art techniques with a performance measure greater than 95% on each evaluation metric. Our method will help researchers and gastroenterologists develop clinically applicable, computational-guided tools for colonoscopy screening. It may be extended to other clinical diagnoses that rely on images.
Collapse
Affiliation(s)
- Pallabi Sharma
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Bunil Kumar Balabantaray
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Kangkana Bora
- Computer Science and Information Technology, Cotton University, Guwahati, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Kunio Kasugai
- Department of Gastroenterology, Aichi Medical University, Nagakute, Japan
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Human Genetics Center, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, United States
- MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, United States
Collapse
|
19
|
Song D, Zhang Z, Li W, Yuan L, Zhang W. Judgment of benign and early malignant colorectal tumors from ultrasound images with deep multi-View fusion. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106634. [PMID: 35081497 DOI: 10.1016/j.cmpb.2022.106634] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 11/28/2021] [Accepted: 01/11/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Colorectal cancer (CRC) is currently one of the main cancers world-wide, with a high incidence in the elderly. In the diagnosis of CRC, endorectal ultrasound plays an important role in judging benign and early malignant tumors. However, malignant tumors in the early-stage are not easy to identify visually and experts usually seek help from multi-view images, which increases the workload and also carries a certain probability of misdiagnosis. In recent years, with the widespread use of deep learning methods in the analysis of medical images, it becomes necessary to design an effective computer-aided diagnosis (CAD) system of CRC based on multi-view endorectal ultrasound images. METHOD In this study, we proposed a CAD system for judging benign and early malignant colorectal tumors, and constructed the first multi-view ultrasound image dataset of CRC to validate our algorithm. Our system is an end-to-end model based on a deep neural network (DNN) which includes a feature extraction module based on dense blocks, a multi-view fusion module, and a Multi-Layer Perceptron-based classifier. A center loss was used for the first time in CAD tasks, to optimize our model. RESULT On the constructed dataset, the proposed system surpasses expert diagnosis in accuracy, sensitivity, specificity, and F1-score. Compared with the popular deep classification networks and other CAD methods, the algorithm has reached the best performance. Comparative experiments using different feature extraction methods, different view fusion strategies, and different classifiers verify the effectiveness of each part of the algorithm. CONCLUSION We propose a CAD system for judging benign and early malignant colorectal tumors based on DNN, which combines information of ultrasound images from different views for a comprehensive judgment.
On the first CRC multi-view ultrasound image dataset which we constructed, our method outperforms expert diagnosis results and all other methods, and the effectiveness of each part of the system has been verified. Our system has application value in future medical practice on early diagnosis of CRC.
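The center loss mentioned above penalizes the distance between each sample's embedding and the center of its class, pulling same-class features together. A minimal NumPy sketch of the forward computation (the class centers would themselves be updated during training, which is omitted here; names are illustrative):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Half the mean squared distance between each feature and its class center.

    features: (n, d) embeddings; labels: (n,) int class ids; centers: (k, d).
    """
    diff = features - centers[labels]            # gather each sample's center
    return float(0.5 * np.mean(np.sum(diff ** 2, axis=1)))
```

In practice this term is added, with a small weight, to the usual cross-entropy loss so that intra-class compactness improves without sacrificing class separability.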
Collapse
Affiliation(s)
- Dan Song
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Zheqi Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Wenhui Li
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Lijun Yuan
- Department of Colorectal Surgery, Tianjin Union Medical Center, Tianjin 300121, China; Tianjin Institute of Coloproctology, Tianjin 300121, China
- Wenshu Zhang
- EUREKA Robotics Centre, School of Technologies, Cardiff Metropolitan University, Cardiff, Wales, United Kingdom
Collapse
|
20
|
Polyp Detection from Colorectum Images by Using Attentive YOLOv5. Diagnostics (Basel) 2021; 11:diagnostics11122264. [PMID: 34943501 PMCID: PMC8700704 DOI: 10.3390/diagnostics11122264] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2021] [Revised: 11/26/2021] [Accepted: 11/30/2021] [Indexed: 01/05/2023] Open
Abstract
Background: High-quality colonoscopy is essential to prevent the occurrence of colorectal cancers. The data of colonoscopy are mainly stored in the form of images. Therefore, artificial intelligence-assisted colonoscopy based on medical images is not only a research hotspot, but also one of the effective auxiliary means to improve the detection rate of adenomas. This research has become the focus of medical institutions and scientific research departments and has important clinical and scientific research value. Methods: In this paper, we propose a YOLOv5 model based on a self-attention mechanism for polyp target detection. The method follows a regression paradigm, taking the entire image as the network input and directly regressing target boxes at multiple positions in the image. In the feature extraction process, an attention mechanism is added to enhance the contribution of information-rich feature channels and weaken the interference of useless channels. Results: The experimental results show that the method can accurately identify polyp images, especially for the small polyps and the polyps with inconspicuous contrasts, and the detection speed is greatly improved compared with the comparison algorithms. Conclusions: This study will be of great help in reducing the missed diagnosis of clinicians during endoscopy and treatment, and it is also of great significance to the development of clinicians’ clinical work.
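The channel-attention idea described above, strengthening information-rich feature channels and suppressing useless ones, is commonly realized as squeeze-and-excitation style gating. A NumPy sketch with illustrative weights (the paper's exact attention module may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel reweighting.

    x:  (C, H, W) feature maps.
    w1: (C//r, C) and w2: (C, C//r) weights of two FC layers (r = reduction ratio).
    Channels gated near 1 pass through; channels gated near 0 are suppressed.
    """
    squeeze = x.mean(axis=(1, 2))                            # (C,) global average pool
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))     # (C,) gates in (0, 1)
    return x * excite[:, None, None]                         # rescale each channel
```

Inserted into the backbone, learned gates of this form let the detector amplify channels that respond to small or low-contrast polyps while damping distractor channels.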
Collapse
|
21
|
Automatic Polyp Segmentation in Colonoscopy Images Using a Modified Deep Convolutional Encoder-Decoder Architecture. SENSORS 2021; 21:s21165630. [PMID: 34451072 PMCID: PMC8402594 DOI: 10.3390/s21165630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 08/07/2021] [Accepted: 08/19/2021] [Indexed: 11/25/2022]
Abstract
Colorectal cancer has become the third most commonly diagnosed form of cancer, and has the second highest fatality rate of cancers worldwide. Currently, optical colonoscopy is the preferred tool of choice for the diagnosis of polyps and to avert colorectal cancer. Colon screening is time-consuming and highly operator dependent. In view of this, a computer-aided diagnosis (CAD) method needs to be developed for the automatic segmentation of polyps in colonoscopy images. This paper proposes a modified SegNet Visual Geometry Group-19 (VGG-19), a form of convolutional neural network, as a CAD method for polyp segmentation. The modifications include skip connections, 5 × 5 convolutional filters, and the concatenation of four dilated convolutions applied in parallel form. The CVC-ClinicDB, CVC-ColonDB, and ETIS-LaribPolypDB databases were used to evaluate the model, and it was found that our proposed polyp segmentation model achieved an accuracy, sensitivity, specificity, precision, mean intersection over union, and dice coefficient of 96.06%, 94.55%, 97.56%, 97.48%, 92.3%, and 95.99%, respectively. These results indicate that our model performs as well as or better than previous schemes in the literature. We believe that this study will offer benefits in terms of the future development of CAD tools for polyp segmentation for colorectal cancer diagnosis and management. In the future, we intend to embed our proposed network into a medical capsule robot for practical usage and try it in a hospital setting with clinicians.
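The "four dilated convolutions applied in parallel" mentioned above can be illustrated with a naive NumPy implementation: each branch pads the input so the responses keep the input's spatial size and can be concatenated along the channel axis (single-channel input and a square odd kernel are assumed here; this is a sketch, not the authors' network):

```python
import numpy as np

def dilated_conv2d_same(img, kernel, dilation):
    """'Same'-size 2-D correlation of a single-channel image with a dilated kernel."""
    k = kernel.shape[0]                       # assume square, odd kernel
    pad = dilation * (k - 1) // 2             # pad so output matches input size
    padded = np.pad(img, pad)
    span = (k - 1) * dilation + 1             # effective receptive field
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + span:dilation, j:j + span:dilation]
            out[i, j] = float((patch * kernel).sum())
    return out

def parallel_dilated_block(img, kernel, dilations=(1, 2, 4, 8)):
    """Stack same-size responses of parallel dilated convolutions along channels."""
    return np.stack([dilated_conv2d_same(img, kernel, d) for d in dilations])
```

Because each branch sees a different receptive field, the concatenated output mixes fine and coarse context, which is the property that helps segment polyps of varying size.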
Collapse
|