1. Li H, Liu D, Zeng Y, Liu S, Gan T, Rao N, Yang J, Zeng B. Single-Image-Based Deep Learning for Segmentation of Early Esophageal Cancer Lesions. IEEE Trans Image Process 2024; 33:2676-2688. [PMID: 38530733 DOI: 10.1109/tip.2024.3379902]
Abstract
Accurate segmentation of lesions is crucial for the diagnosis and treatment of early esophageal cancer (EEC). However, neither traditional nor deep learning-based methods to date can meet clinical requirements, with the mean Dice score - the most important metric in medical image analysis - hardly exceeding 0.75. In this paper, we present a novel deep learning approach for segmenting EEC lesions. Our method stands out for its uniqueness, as it relies solely on a single input image from a patient, forming the so-called "You-Only-Have-One" (YOHO) framework. On one hand, this "one-image-one-network" learning ensures complete patient privacy, as it does not use any images from other patients as training data. On the other hand, it avoids nearly all generalization-related problems, since each trained network is applied only to the input image it was trained on. In particular, we can push the training toward "over-fitting" as much as possible to increase segmentation accuracy. Our technical contributions include an interaction with clinical doctors to utilize their expertise, a geometry-based data augmentation over a single lesion image to generate the training dataset (the biggest novelty), and an edge-enhanced UNet. We evaluated YOHO on an EEC dataset collected by ourselves and achieved a mean Dice score of 0.888, which is substantially higher than that of existing deep-learning methods and thus represents a significant advance toward clinical applications. The code and dataset are available at: https://github.com/lhaippp/YOHO.
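The mean Dice score this entry reports can be reproduced from a pair of binary masks; a minimal NumPy sketch of the standard formulation (the toy masks and the small `eps` smoothing term are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 3 predicted pixels, 3 ground-truth pixels, 2 overlapping
pred = np.zeros((4, 4)); pred[0, :3] = 1
target = np.zeros((4, 4)); target[0, 1:4] = 1
print(round(dice_score(pred, target), 3))  # 2*2/(3+3) ≈ 0.667
```

A dataset-level mean Dice, like the 0.888 reported above, is just this value averaged over all test images.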
2. Ayana G, Barki H, Choe SW. Pathological Insights: Enhanced Vision Transformers for the Early Detection of Colorectal Cancer. Cancers (Basel) 2024; 16:1441. [PMID: 38611117 PMCID: PMC11010958 DOI: 10.3390/cancers16071441]
Abstract
Endoscopic pathological findings of the gastrointestinal tract are crucial for the early diagnosis of colorectal cancer (CRC). Previous deep learning works, aimed at improving CRC detection performance and reducing subjective analysis errors, are limited to polyp segmentation: pathological findings were not considered, and only convolutional neural networks (CNNs), which cannot handle global image feature information, were utilized. This work introduces a novel vision transformer (ViT)-based approach for early CRC detection. The core components of the proposed approach are ViTCol, a boosted vision transformer for classifying endoscopic pathological findings, and PUTS, a vision transformer-based model for polyp segmentation. Results demonstrate the superiority of this vision transformer-based CRC detection method over existing CNN and vision transformer models. ViTCol exhibited outstanding performance in classifying pathological findings, with an area under the receiver operating characteristic curve (AUC) of 0.9999 ± 0.001 on the Kvasir dataset. PUTS provided outstanding results in segmenting polyp images, with a mean intersection over union (mIoU) of 0.8673 and 0.9092 on the Kvasir-SEG and CVC-Clinic datasets, respectively. This work underscores the value of spatial transformers in localizing input images, which can seamlessly integrate into the main vision transformer network, enhancing the automated identification of critical image features for early CRC detection.
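The mIoU figures above are dataset averages of the per-image intersection over union; a minimal sketch (toy masks; the `eps` term is an assumed numerical-safety convention, not from the paper):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union of two binary masks: |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# mIoU is the mean of per-image IoU over a test set
pred = np.zeros((4, 4)); pred[0, :3] = 1       # 3 predicted pixels
target = np.zeros((4, 4)); target[0, 1:4] = 1  # 3 ground-truth pixels
miou = np.mean([iou(pred, target)])            # single-image "dataset"
print(round(float(miou), 3))  # intersection 2 / union 4 = 0.5
```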
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Hika Barki
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Republic of Korea
- Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Emerging Pathogens Institute, University of Florida, Gainesville, FL 32608, USA
3. Guo H, Somayajula SA, Hosseini R, Xie P. Improving image classification of gastrointestinal endoscopy using curriculum self-supervised learning. Sci Rep 2024; 14:6100. [PMID: 38480815 PMCID: PMC10937990 DOI: 10.1038/s41598-024-53955-8]
Abstract
Endoscopy, a widely used medical procedure for examining the gastrointestinal (GI) tract to detect potential disorders, poses challenges for manual diagnosis due to non-specific symptoms and difficulty accessing affected areas. While supervised machine learning models have proven effective in assisting the clinical diagnosis of GI disorders, their applicability is limited by the scarcity of image-label pairs created by medical experts. To address these limitations, we propose a curriculum self-supervised learning framework inspired by human curriculum learning. Our approach leverages the HyperKvasir dataset, which comprises 100k unlabeled GI images for pre-training and 10k labeled GI images for fine-tuning. By adopting our proposed method, we achieved a top-1 accuracy of 88.92% and an F1 score of 73.39%, a 2.1% increase over vanilla SimSiam in top-1 accuracy and a 1.9% increase in F1 score. The combination of self-supervised learning and a curriculum-based approach demonstrates the efficacy of our framework in advancing the diagnosis of GI disorders. Our study highlights the potential of curriculum self-supervised learning to exploit unlabeled GI tract images, paving the way for more accurate and efficient diagnosis in GI endoscopy.
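The abstract does not specify the curriculum criterion; a common proxy in curriculum learning is to order samples from easy to hard by a per-sample difficulty score before pre-training. The sketch below is generic, and the `difficulty` values are hypothetical, not the paper's:

```python
def curriculum_order(sample_ids, difficulty):
    """Return sample ids sorted from easiest (lowest score) to hardest,
    e.g. by per-sample loss under an early model checkpoint."""
    pairs = sorted(zip(sample_ids, difficulty), key=lambda p: p[1])
    return [sid for sid, _ in pairs]

ids = ["img3", "img1", "img2"]
losses = [0.9, 0.1, 0.5]        # hypothetical difficulty scores
print(curriculum_order(ids, losses))  # ['img1', 'img2', 'img3']
```

Pre-training then consumes batches in this order (or in progressively widening stages), rather than uniformly at random.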
Affiliation(s)
- Han Guo
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Sai Ashish Somayajula
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Ramtin Hosseini
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
- Pengtao Xie
- Department of Electrical and Computer Engineering, University of California, San Diego, San Diego, 92093, USA
4. Sierra-Jerez F, Martinez F. A non-aligned translation with a neoplastic classifier regularization to include vascular NBI patterns in standard colonoscopies. Comput Biol Med 2024; 170:108008. [PMID: 38277922 DOI: 10.1016/j.compbiomed.2024.108008]
Abstract
Polyp vascular patterns are key to categorizing colorectal cancer malignancy. These patterns are typically observed in situ in specialized narrow-band images (NBI). Nonetheless, such vascular characterization is lost in standard colonoscopies (the primary screening procedure). Moreover, even for NBI observations, the categorization remains biased by expert observation, with reported classification error rates of 59.5% to 84.2%. This work introduces an end-to-end computational strategy to enhance in situ standard colonoscopy observations with the vascular patterns typically observed through NBI. The enhanced synthetic images are obtained by adjusting a deep representation under a non-aligned translation task from optical colonoscopy (OC) to NBI. The introduced scheme includes an architecture to discriminate enhanced neoplastic patterns, achieving a remarkable separation in the embedding representation. The proposed approach was validated on a public dataset with a total of 76 sequences, including standard optical sequences and the respective NBI observations. The enhanced optical sequences were automatically classified into adenoma and hyperplastic samples, achieving an F1-score of 0.86. To measure the sensitivity of the proposed approach, serrated samples were projected onto the trained architecture. In this experiment, statistical differences among the three classes were found with a p-value < 0.05 under a Mann-Whitney U test. This work showed remarkable polyp discrimination results in enhancing OC sequences with typical NBI patterns. The method also learns polyp class distributions under the unpaired criterion (close to real practice), with the capability to separate serrated samples from adenoma and hyperplastic ones.
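The three-class comparison above relies on the Mann-Whitney U test, a rank-based test that needs no normality assumption. A sketch with SciPy on synthetic, well-separated samples (the distributions are invented for illustration and are not the paper's embedding features):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical per-class scores, e.g. distances in an embedding space
adenoma  = rng.normal(loc=1.0, scale=0.2, size=30)
serrated = rng.normal(loc=1.6, scale=0.2, size=30)

# Two-sided test of whether one distribution is stochastically larger
stat, p = mannwhitneyu(adenoma, serrated, alternative="two-sided")
print(p < 0.05)  # True for these well-separated samples
```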
Affiliation(s)
- Franklin Sierra-Jerez
- Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
- Fabio Martinez
- Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
5. Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516 DOI: 10.1016/j.compbiomed.2023.107777]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis and in medical image retrieval and mining. Medical image data mainly include electronic health record data, gene information data, etc. Although intelligent imaging provides a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. The concepts pertinent to these methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images, are analyzed and summarized in this paper. We reviewed recent studies to provide a comprehensive overview of how these methods are applied in various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasized the latest progress and contributions of different methods, summarized by application scenario: classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by anatomical area, including the brain, skin, lung, kidney, breast, vertebrae, and musculoskeletal system, as well as digital pathology and neuromyelitis. Critical discussion of open challenges and directions for future research is finally provided; in particular, excellent algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China; School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
- School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
- School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong
- School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
6. Li C, Liu J, Tang J. Simultaneous segmentation and classification of colon cancer polyp images using a dual branch multi-task learning network. Math Biosci Eng 2024; 21:2024-2049. [PMID: 38454673 DOI: 10.3934/mbe.2024090]
Abstract
Accurate classification and segmentation of polyps are two important tasks in the diagnosis and treatment of colorectal cancers. Existing models perform segmentation and classification separately and do not fully exploit the correlation between the two tasks. Furthermore, polyps appear in random regions with varying shapes and sizes, and they often share similar boundaries and backgrounds; existing models fail to consider these factors and are therefore not robust. To address these issues, we developed a multi-task network that performs segmentation and classification simultaneously and copes with the aforementioned factors effectively. Our proposed network possesses a dual-branch structure, comprising a transformer branch and a convolutional neural network (CNN) branch. This approach enhances local details within the global representation, improving both local feature awareness and global contextual understanding and thus contributing to better preservation of polyp-related information. Additionally, we designed a feature interaction module (FIM) aimed at bridging the semantic gap between the two branches and facilitating the integration of diverse semantic information from both. This integration enables the full capture of global context and local details related to polyps. To prevent the loss of edge detail information crucial for polyp identification, we introduced a reverse attention boundary enhancement (RABE) module to gradually enhance edge structures and detailed information within polyp regions. Finally, we conducted extensive experiments on five publicly available datasets to evaluate the performance of our method on both polyp segmentation and classification. The experimental results confirm that our proposed method outperforms other state-of-the-art methods.
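The RABE module's exact design is not given in the abstract; generic reverse attention, as the name suggests, weights features by one minus the sigmoid of a coarse prediction, so regions not yet confidently segmented (typically polyp edges) receive more attention. A toy NumPy sketch, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features, coarse_logits):
    """Weight an H x W x C feature map by (1 - sigmoid(coarse logits)),
    emphasizing uncertain regions such as lesion boundaries."""
    weights = 1.0 - sigmoid(coarse_logits)   # high where the coarse map is unsure
    return features * weights[..., None]     # broadcast over the channel axis

feats = np.ones((4, 4, 8))          # toy H x W x C feature map
coarse = np.full((4, 4), 5.0)       # confidently foreground everywhere
out = reverse_attention(feats, coarse)  # features strongly suppressed
```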
Affiliation(s)
- Chenqian Li
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430065, China
- Jun Liu
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430065, China
- Jinshan Tang
- Department of Health Administration and Policy, College of Public Health, George Mason University, Fairfax, VA 22030, USA
7. Khan Z, Tahir MA. Real time anatomical landmarks and abnormalities detection in gastrointestinal tract. PeerJ Comput Sci 2023; 9:e1685. [PMID: 38192480 PMCID: PMC10773696 DOI: 10.7717/peerj-cs.1685]
Abstract
Gastrointestinal (GI) endoscopy is an active research field due to the lethal cancers of the GI tract. Cancer treatment outcomes are better when the disease is diagnosed early, which increases the chances of survival. There is a high miss rate in the detection of abnormalities in the GI tract during endoscopy or colonoscopy, owing to lapses in attentiveness, tiring procedures, or a lack of required training. Detection can be automated to reduce these risks by identifying and flagging suspicious frames. A suspicious frame may contain an abnormality or information about an anatomical landmark; such frames can then be analyzed for anatomical landmarks and abnormalities to detect disease. In this research, a real-time endoscopic abnormality detection system is presented that detects both abnormalities and landmarks. The proposed system is based on a combination of handcrafted and deep features. Deep features are extracted from the lightweight MobileNet convolutional neural network (CNN) architecture. Some classes have small inter-class differences and high intra-class variability, and a single shared detection threshold cannot distinguish them; the thresholds for such classes are instead learned from the training data using a genetic algorithm. The system was evaluated on various benchmark datasets, achieving an accuracy of 0.99 with an F1-score of 0.91 and a Matthews correlation coefficient (MCC) of 0.91 on the Kvasir datasets, and an F1-score of 0.93 on the DowPK dataset. The system detects abnormalities in real time at 41 frames per second.
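The reported MCC of 0.91 follows the standard binary-confusion-matrix definition; a self-contained sketch (the counts below are illustrative, not the paper's results):

```python
import math

def mcc(tp: int, fp: int, fn: int, tn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts.
    Ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Illustrative counts: 90 true positives/negatives, 10 errors each way
print(round(mcc(tp=90, fp=10, fn=10, tn=90), 2))  # 0.8
```

Unlike accuracy, MCC stays informative on imbalanced classes, which is why it is often reported alongside F1 for endoscopic datasets.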
Affiliation(s)
- Zeshan Khan
- FAST School of Computing, National University of Computer and Emerging Sciences, Islamabad, Karachi, Sindh, Pakistan
- Muhammad Atif Tahir
- FAST School of Computing, National University of Computer and Emerging Sciences, Islamabad, Karachi, Sindh, Pakistan
8. Jiang L, Huang S, Luo C, Zhang J, Chen W, Liu Z. An improved multi-scale gradient generative adversarial network for enhancing classification of colorectal cancer histological images. Front Oncol 2023; 13:1240645. [PMID: 38023227 PMCID: PMC10679330 DOI: 10.3389/fonc.2023.1240645]
Abstract
Introduction: Deep learning-based solutions for histological image classification have gained attention in recent years due to their potential for objective evaluation of histological images. However, these methods often require a large number of expert annotations, which are both time-consuming and labor-intensive to obtain. Several scholars have proposed generative models to augment labeled data, but these often result in label uncertainty due to incomplete learning of the data distribution.
Methods: To alleviate these issues, a method called InceptionV3-SMSG-GAN has been proposed to enhance classification performance by generating high-quality images. Specifically, images synthesized by a Multi-Scale Gradients Generative Adversarial Network (MSG-GAN) are selectively added to the training set through a selection mechanism that uses a trained model to choose generated images with higher class probabilities. The selection mechanism filters out synthetic images containing ambiguous category information, thus alleviating label uncertainty.
Results: Experimental results show that, compared with the baseline method using InceptionV3, the proposed method significantly improves pathological image classification, raising overall accuracy from 86.87% to 89.54%. Additionally, the quality of the generated images is evaluated quantitatively using various commonly used metrics.
Discussion: The proposed InceptionV3-SMSG-GAN method exhibited good classification ability, dividing histological images into nine categories. Future work could focus on further refining the image generation and selection processes to optimize classification performance.
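The selection mechanism described above can be sketched as a confidence filter over classifier outputs: keep only synthetic images whose top class probability is high enough to carry unambiguous label information. The 0.9 threshold and the sample names below are illustrative, not the paper's values:

```python
import numpy as np

def select_confident(images, probs, threshold=0.9):
    """Keep synthetic images whose maximum class probability from a trained
    classifier meets the threshold, filtering label-ambiguous samples."""
    keep = probs.max(axis=1) >= threshold
    return [img for img, k in zip(images, keep) if k]

# Three generated images, two-class probabilities from a trained model
probs = np.array([[0.95, 0.05],   # confident -> keep
                  [0.55, 0.45],   # ambiguous -> discard
                  [0.10, 0.90]])  # confident -> keep
imgs = ["g1", "g2", "g3"]
print(select_confident(imgs, probs))  # ['g1', 'g3']
```

The retained images are then appended to the labeled training set, with the predicted class as the label.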
Affiliation(s)
- Liwen Jiang
- Department of Pathology, Affiliated Cancer Hospital and Institution of Guangzhou Medical University, Guangzhou, China
- Shuting Huang
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Chaofan Luo
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Jiangyu Zhang
- Department of Pathology, Affiliated Cancer Hospital and Institution of Guangzhou Medical University, Guangzhou, China
- Wenjing Chen
- Department of Pathology, Guangdong Women and Children Hospital, Guangzhou, China
- Zhenyu Liu
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
9. Nguyen TP, Kim JH, Kim SH, Yoon J, Choi SH. Machine Learning-Based Measurement of Regional and Global Spinal Parameters Using the Concept of Incidence Angle of Inflection Points. Bioengineering (Basel) 2023; 10:1236. [PMID: 37892966 PMCID: PMC10604057 DOI: 10.3390/bioengineering10101236]
Abstract
This study investigates the application of convolutional neural networks (CNNs) to evaluating spinal sagittal alignment, introducing incidence angles of inflection points (IAIPs) as intuitive parameters that capture the interplay between pelvic and spinal alignment. Pioneering the fusion of IAIPs with machine learning for sagittal alignment analysis, this research examined whole-spine lateral radiographs from hundreds of patients who visited a single institution, utilizing high-quality images for parameter assessment. The findings revealed robust success rates for certain parameters, including the pelvic and C2 incidence angles, but comparatively lower rates for the sacral slope and L1 incidence. The proposed CNN-based machine learning method achieved an 80% detection rate for various spinal angles, such as lumbar lordosis and thoracic kyphosis, within an error threshold of 3.5°. Measurements derived from the novel formula also closely aligned with those extracted directly from the CNN model. In conclusion, this research underscores the utility of the CNN-based deep learning algorithm in delivering precise measurements of spinal sagittal parameters, and highlights the potential of integrating machine learning with the IAIP concept for comprehensive data accumulation in sagittal spinal alignment analysis.
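The 80% detection rate at a 3.5° threshold is simply the fraction of angle predictions whose absolute error falls within the tolerance; a sketch with invented angle values (not the study's measurements):

```python
import numpy as np

def detection_rate(pred_angles, true_angles, tol=3.5):
    """Fraction of angle predictions within ±tol degrees of ground truth."""
    err = np.abs(np.asarray(pred_angles) - np.asarray(true_angles))
    return float((err <= tol).mean())

# Hypothetical lumbar-lordosis predictions vs. manual measurements (degrees)
pred = [40.0, 32.1, 55.0, 20.0]
true = [42.0, 30.0, 50.0, 21.0]
print(detection_rate(pred, true))  # 0.75 (three of four within 3.5°)
```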
Affiliation(s)
- Thong Phi Nguyen
- Department of Mechanical Engineering, BK21 FOUR ERICA-ACE Center, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea
- Department of Mechanical Engineering, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea
- Ji-Hwan Kim
- Department of Orthopedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea
- Seong-Ha Kim
- Department of Orthopedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea
- Jonghun Yoon
- Department of Mechanical Engineering, BK21 FOUR ERICA-ACE Center, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea
- Department of Mechanical Engineering, Hanyang University, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea
- AIDICOME Inc., 221, 5th Engineering Building, 55 Hanyangdaehak-ro, Sangnok-gu, Ansan-si 15588, Gyeonggi-do, Republic of Korea
- Sung-Hoon Choi
- Department of Orthopedic Surgery, Hanyang University College of Medicine, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Republic of Korea
10. Wang K, Zhuang S, Miao J, Chen Y, Hua J, Zhou GQ, He X, Li S. Adaptive Frequency Learning Network With Anti-Aliasing Complex Convolutions for Colon Diseases Subtypes. IEEE J Biomed Health Inform 2023; 27:4816-4827. [PMID: 37796719 DOI: 10.1109/jbhi.2023.3300288]
Abstract
The automatic and dependable identification of colonic disease subtypes by colonoscopy is crucial. Once successful, it will facilitate clinically more in-depth disease staging analysis and the formulation of more tailored treatment plans. However, inter-class confusion and brightness imbalance are major obstacles to colon disease subtyping. Notably, the Fourier-based image spectrum, with its distinctive frequency features and brightness insensitivity, offers a potential solution. To effectively leverage its advantages to address the existing challenges, this article proposes a framework capable of thorough learning in the frequency domain based on four core designs: the position consistency module, the high-frequency self-supervised module, the complex number arithmetic model, and the feature anti-aliasing module. The position consistency module enables the generation of spectra that preserve local and positional information while compressing the spectral data range to improve training stability. Through band masking and supervision, the high-frequency autoencoder module guides the network to learn useful frequency features selectively. The proposed complex number arithmetic model allows direct spectral training while avoiding the loss of phase information caused by current general-purpose real-valued operations. The feature anti-aliasing module embeds filters in the model to prevent spectral aliasing caused by down-sampling and improve performance. Experiments are performed on the collected five-class dataset, which contains 4591 colorectal endoscopic images. The outcomes show that our proposed method produces state-of-the-art results with an accuracy rate of 89.82%.
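The anti-aliasing idea in the feature anti-aliasing module follows a classic principle: low-pass filter before striding, because plain strided down-sampling folds high frequencies back into the spectrum. The paper embeds learned filters in the model; the sketch below uses a fixed 2x2 average as the simplest stand-in:

```python
import numpy as np

def blur_then_downsample(x, factor=2):
    """Average each factor x factor block, i.e. a box low-pass filter fused
    with stride-`factor` down-sampling; a fixed-kernel stand-in for the
    learned anti-aliasing filters described in the paper."""
    h, w = x.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    blocks = x[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)       # toy 4x4 feature map
out = blur_then_downsample(x)                      # 2x2, each entry a block mean
print(out)
```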
11. Du RC, Ouyang YB, Hu Y. Research trends on artificial intelligence and endoscopy in digestive diseases: A bibliometric analysis from 1990 to 2022. World J Gastroenterol 2023; 29:3561-3573. [PMID: 37389238 PMCID: PMC10303508 DOI: 10.3748/wjg.v29.i22.3561]
Abstract
BACKGROUND Recently, artificial intelligence (AI) has been widely used in gastrointestinal endoscopy examinations.
AIM To comprehensively evaluate the application of AI-assisted endoscopy in detecting different digestive diseases using bibliometric analysis.
METHODS Relevant publications from the Web of Science published from 1990 to 2022 were extracted using a combination of the search terms “AI” and “endoscopy”. The following information was recorded from the included publications: Title, author, institution, country, endoscopy type, disease type, performance of AI, publication, citation, journal and H-index.
RESULTS A total of 446 studies were included. The number of articles peaked in 2021, and annual citation numbers increased after 2006. China, the United States and Japan were the dominant countries in this field, accounting for 28.7%, 16.8%, and 15.7% of publications, respectively. The Tada Tomohiro Institute of Gastroenterology and Proctology was the most influential institution. “Cancer” and “polyps” were the hotspots in this field. Colorectal polyps were the most researched disease, followed by gastric cancer and gastrointestinal bleeding. Conventional endoscopy was the most common type of examination. The accuracy of AI in detecting Barrett’s esophagus, colorectal polyps and gastric cancer from 2018 to 2022 was 87.6%, 93.7% and 88.3%, respectively. The detection rates of adenoma and gastrointestinal bleeding from 2018 to 2022 were 31.3% and 96.2%, respectively.
CONCLUSION AI could improve the detection rate of digestive tract diseases and a convolutional neural network-based diagnosis program for endoscopic images shows promising results.
Affiliation(s)
- Ren-Chun Du
- Department of Gastroenterology, The First Affiliated Hospital of Nanchang University, Nanchang 330006, Jiangxi Province, China
- Yao-Bin Ouyang
- Department of Gastroenterology, The First Affiliated Hospital of Nanchang University, Nanchang 330006, Jiangxi Province, China
- Department of Oncology, Mayo Clinic, Rochester, MN 55905, United States
- Yi Hu
- Department of Gastroenterology, The First Affiliated Hospital of Nanchang University, Nanchang 330006, Jiangxi Province, China
- Department of Surgery, The Chinese University of Hong Kong, Hong Kong 999077, China
12. Wang KN, Zhuang S, Ran QY, Zhou P, Hua J, Zhou GQ, He X. DLGNet: A dual-branch lesion-aware network with the supervised Gaussian Mixture model for colon lesions classification in colonoscopy images. Med Image Anal 2023; 87:102832. [PMID: 37148864 DOI: 10.1016/j.media.2023.102832]
Abstract
Colorectal cancer is one of the malignant tumors with the highest mortality, owing to the lack of obvious early symptoms; it is usually discovered only at an advanced stage. Thus, automatic and accurate classification of early colon lesions is of great significance for clinically estimating the status of colon lesions and formulating appropriate diagnostic programs. However, classifying full-stage colon lesions is challenging due to large inter-class similarities and intra-class differences in the images. In this work, we propose a novel dual-branch lesion-aware neural network (DLGNet) that classifies intestinal lesions by exploring the intrinsic relationship between diseases. It is composed of four modules: a lesion location module, a dual-branch classification module, an attention guidance module, and an inter-class Gaussian loss function. Specifically, the elaborate dual-branch module integrates the original image with the lesion patch obtained by the lesion localization module to explore and interact with lesion-specific features from global and local perspectives. The feature-guided module directs the model to attend to disease-specific features by learning long-range dependencies through spatial and channel attention after network feature learning. Finally, the inter-class Gaussian loss function assumes that each feature extracted by the network follows an independent Gaussian distribution and makes the class clusters more compact, thereby improving the discriminative ability of the network. Extensive experiments on 2568 collected colonoscopy images achieve an average accuracy of 91.50%, and the proposed method surpasses state-of-the-art methods. This study is the first to classify colon lesions at each stage, and it achieves promising colon disease classification performance. To motivate the community, we have made our code publicly available at https://github.com/soleilssss/DLGNet.
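The inter-class Gaussian loss is not fully specified in the abstract; a center-loss-style compactness term, which likewise assumes Gaussian-like clusters and penalizes spread around class means, can be sketched as follows (the features and labels are toy values, and this is an interpretation, not DLGNet's exact formulation):

```python
import numpy as np

def intra_class_compactness(features, labels):
    """Mean squared distance of each feature to its class mean -- the kind of
    term a Gaussian-cluster assumption drives down to make clusters compact."""
    loss = 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        loss += ((fc - fc.mean(axis=0)) ** 2).sum()
    return loss / len(features)

# Two tight, well-separated toy clusters -> small compactness loss
feats = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = np.array([0, 0, 1, 1])
print(round(intra_class_compactness(feats, labels), 3))  # 0.01
```

In training, such a term is typically added to the cross-entropy loss with a small weight, so the network both separates classes and tightens each cluster.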
Affiliation(s)
- Kai-Ni Wang: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Shuaishuai Zhuang: The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qi-Yong Ran: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Ping Zhou: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Jie Hua: The First Affiliated Hospital of Nanjing Medical University, Nanjing, China; Liyang People's Hospital, Liyang Branch Hospital of Jiangsu Province Hospital, Liyang, China
- Guang-Quan Zhou: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Xiaopu He: The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
|
13
|
Krenzer A, Heil S, Fitting D, Matti S, Zoller WG, Hann A, Puppe F. Automated classification of polyps using deep learning architectures and few-shot learning. BMC Med Imaging 2023; 23:59. [PMID: 37081495 PMCID: PMC10120204 DOI: 10.1186/s12880-023-01007-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 03/24/2023] [Indexed: 04/22/2023] Open
Abstract
BACKGROUND Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps are at risk of becoming cancerous, so polyps are graded using classification systems, and further treatment and procedures are based on the assigned class. Nevertheless, classification is not easy. We therefore propose two novel automated classification systems that assist gastroenterologists in classifying polyps based on the NICE and Paris classifications. METHODS We built two classification systems: one classifies polyps by their shape (Paris), the other by their texture and surface patterns (NICE). For the Paris classification we introduce a two-step process: first, detect and crop the polyp in the image; second, classify the cropped region with a transformer network. For the NICE classification, we design a few-shot learning algorithm based on deep metric learning. The algorithm creates an embedding space for polyps, which allows classification from a few examples, to account for the scarcity of NICE-annotated images in our database. RESULTS For the Paris classification, we achieve an accuracy of 89.35%, surpassing all previously published results and establishing a new state-of-the-art baseline on a public dataset. For the NICE classification, we achieve a competitive accuracy of 81.13%, demonstrating the viability of the few-shot learning paradigm for polyp classification in data-scarce environments. We also present ablations of the algorithms and illustrate the explainability of the system with heat maps of neural activations. CONCLUSION Overall, we introduce two polyp classification systems to assist gastroenterologists: state-of-the-art performance in the Paris classification, and a demonstration that few-shot learning can address the data scarcity prevalent in medical machine learning.
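The few-shot idea, classifying a new polyp from only a handful of labelled examples by comparing embeddings, can be sketched as a nearest-centroid rule. The embedding network itself is assumed to exist upstream, and the function names here are illustrative, not taken from the paper:

```python
import numpy as np

def nearest_centroid_predict(support_emb, support_labels, query_emb):
    """Few-shot classification sketch in the spirit of deep metric learning:
    average the few labelled support embeddings of each class into a centroid,
    then assign each query embedding to its nearest centroid (Euclidean)."""
    classes = np.unique(support_labels)
    centroids = np.stack([support_emb[support_labels == c].mean(axis=0)
                          for c in classes])
    # distance matrix of shape (n_query, n_classes)
    d = np.linalg.norm(query_emb[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```

With a well-trained embedding space, even two or three support examples per NICE class suffice to place the centroids, which is the appeal of the approach under data scarcity.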
Affiliation(s)
- Adrian Krenzer: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany; Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Stefan Heil: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
- Daniel Fitting: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Safa Matti: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
- Wolfram G Zoller: Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174, Stuttgart, Germany
- Alexander Hann: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Frank Puppe: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
|
14
|
Tavana P, Akraminia M, Koochari A, Bagherifard A. Classification of spinal curvature types using radiography images: deep learning versus classical methods. Artif Intell Rev 2023; 56:1-33. [PMID: 37362895 PMCID: PMC10088798 DOI: 10.1007/s10462-023-10480-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/28/2023]
Abstract
Scoliosis is a spinal abnormality producing two types of curves (C-shaped or S-shaped). The vertebrae of the spine reach equilibrium at different times, which makes the curve type challenging to detect; observer bias and image quality add further difficulty. This paper aims to evaluate spinal deformity by automatically classifying the type of spine curvature. Automatic classification is performed using SVM and KNN algorithms, and using pre-trained Xception and MobileNetV2 networks with SVM as the final activation function to avoid vanishing gradients. For the SVM and KNN machine learning methods, features are extracted from two groups of radiographic image representations: (i) low-level representations such as texture features, and (ii) local patch-based representations such as Bag of Words (BoW); these features are then classified by SVM and KNN. In the pre-trained deep networks, feature extraction is automated. In this study, 1000 anterior-posterior (AP) radiographic images of the spine were collected as a private dataset from Shafa Hospital, Tehran, Iran. Transfer learning was used because of the relatively small size of this private dataset. Based on the experiments, pre-trained deep networks were found to be approximately 10% more accurate than the classical methods in classifying spinal curvature as C-shaped or S-shaped: with automatic feature extraction, the pre-trained Xception and MobileNetV2 networks with SVM as the final activation function, controlling the vanishing gradient, outperform the classical machine learning classifiers.
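The classical pipeline described above (a low-level image representation fed to a KNN classifier) can be sketched with numpy. The intensity-histogram feature here is a deliberately simple stand-in for the paper's texture and BoW representations, and all names are illustrative:

```python
import numpy as np

def histogram_feature(img, bins=8):
    """Low-level representation: a normalised intensity histogram of the
    radiograph (a stand-in for texture / Bag-of-Words features)."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def knn_predict(train_feats, train_labels, feat, k=3):
    """Plain k-nearest-neighbour majority vote over extracted features."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    nearest = train_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]
```

A deep-network variant replaces `histogram_feature` with activations from a pre-trained backbone, which is the roughly-10%-more-accurate route the paper reports.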
Affiliation(s)
- Parisa Tavana: Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Mahdi Akraminia: Mechanical Rotary Equipment Research Department, Niroo Research Institute, Tehran, Iran
- Abbas Koochari: Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Abolfazl Bagherifard: Bone and Joint Reconstruction Research Center, Shafa Orthopedic Hospital, Iran University of Medical Sciences, Tehran, Iran
|
15
|
Mazumdar S, Sinha S, Jha S, Jagtap B. Computer-aided automated diminutive colonic polyp detection in colonoscopy by using deep machine learning system; first indigenous algorithm developed in India. Indian J Gastroenterol 2023; 42:226-232. [PMID: 37145230 DOI: 10.1007/s12664-022-01331-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 12/18/2022] [Indexed: 05/06/2023]
Abstract
BACKGROUND Colonic polyps can be detected and resected during a colonoscopy before cancer develops. However, about one-fourth of polyps may be missed due to their small size, location, or human error. An artificial intelligence (AI) system can improve polyp detection and reduce colorectal cancer incidence. We are developing an indigenous AI system to detect diminutive polyps in real-life scenarios, compatible with any high-definition colonoscope and endoscopic video-capture software. METHODS We trained a masked region-based convolutional neural network model to detect and localize colonic polyps. Three independent datasets of colonoscopy videos comprising 1,039 image frames were used, divided into a training dataset of 688 frames and a testing dataset of 351 frames. Of the 1,039 image frames, 231 came from real-life colonoscopy videos from our centre; the rest were publicly available frames already modified to be directly usable for developing the AI system. The testing frames were additionally augmented by rotating and zooming the images to replicate the real-life distortions seen during colonoscopy. The AI system was trained to localize each polyp with a bounding box and was then applied to the testing dataset to measure its accuracy in automatic polyp detection. RESULTS The AI system achieved a mean average precision (equivalent to specificity) of 88.63% for automatic polyp detection. All polyps in the testing dataset were identified by the AI, i.e., there were no false-negative results (sensitivity of 100%). The mean polyp size in the study was 5 (± 4) mm. The mean processing time per image frame was 96.4 minutes. CONCLUSIONS When applied to real-life colonoscopy images with wide variation in bowel preparation and small polyp sizes, this AI system can detect colonic polyps with a high degree of accuracy.
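The bounding-box evaluation behind figures such as mean average precision rests on intersection-over-union (IoU) between predicted and ground-truth boxes: a detection typically counts as a true positive when IoU exceeds a threshold such as 0.5. A minimal sketch (box format assumed as corner coordinates, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle; clamp to zero when the boxes are disjoint
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```

For example, two 2x2 boxes overlapping in a unit square give IoU 1/7, well below a 0.5 matching threshold.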
Affiliation(s)
- Srijan Mazumdar: Indian Institute of Liver and Digestive Sciences, Sitala (East), Jagadishpur, Sonarpur, 24 Parganas (South), Kolkata, 700 150, India
- Saugata Sinha: Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
- Saurabh Jha: Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
- Balaji Jagtap: Visvesvaraya National Institute of Technology, South Ambazari Road, Nagpur, 440 010, India
|
16
|
Gong R, He S, Tian T, Chen J, Hao Y, Qiao C. FRCNN-AA-CIF: An automatic detection model of colon polyps based on attention awareness and context information fusion. Comput Biol Med 2023; 158:106787. [PMID: 37044051 DOI: 10.1016/j.compbiomed.2023.106787] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 03/03/2023] [Accepted: 03/11/2023] [Indexed: 04/08/2023]
Abstract
It is noted that the foreground and background of polyp images captured under colonoscopy are not highly differentiated, and the feature maps extracted by common deep learning object detection models keep getting smaller as the network deepens. These models therefore tend to ignore image details, resulting in a high polyp missed-detection rate. To reduce this rate, this paper proposes an automatic colon polyp detection model based on attention awareness and context information fusion (FRCNN-AA-CIF), built on the two-stage object detector Faster Region-Convolutional Neural Network (Faster R-CNN). First, because attention awareness makes the feature extraction network focus more on polyp features, we propose an attention awareness module based on the Squeeze-and-Excitation Network (SENet) and the Efficient Channel Attention module (ECA-Net) and add it after each block of the backbone network. Specifically, we first use the 1×1 convolution of ECA-Net to extract local cross-channel information and then use the two fully connected layers of SENet to reduce and restore the dimension, filtering out the channels most useful for feature learning. Further, because air bubbles, impurities, inflammation, and accumulated digestive matter often surround polyps, we use the context information around each polyp to strengthen the focus on polyp features: after the network extracts a region of interest, we fuse it with its context information to improve the detection rate. The proposed model was tested on the colonoscopy dataset provided by Huashan Hospital. Numerical experiments show that FRCNN-AA-CIF achieves the highest detection accuracy (mAP of 0.817), the lowest missed-detection rate (MDR) of 4.22%, and the best classification performance (AUC of 95.98%); its mAP increased by 3.3%, MDR decreased by 1.97%, and AUC increased by 1.8%. Compared with other object detection models, FRCNN-AA-CIF significantly improves recognition accuracy and reduces the missed-detection rate.
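The squeeze-and-excitation part of the attention module above can be sketched in numpy: global average pooling over the spatial dimensions, a two-layer fully connected bottleneck (ReLU then sigmoid), and channel-wise rescaling of the feature map. The weights `w1`/`w2` are illustrative placeholders, and ECA-Net's local 1×1 cross-channel convolution, which the paper combines with this, is not reproduced:

```python
import numpy as np

def se_channel_attention(x, w1, w2):
    """Sketch of an SENet-style channel gate on a feature map x of shape
    (C, H, W): squeeze spatially, pass through a reduce-then-expand pair of
    fully connected layers, and rescale each channel by its learned gate."""
    squeeze = x.mean(axis=(1, 2))                 # (C,) global average pool
    hidden = np.maximum(0.0, w1 @ squeeze)        # reduced dimension, ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # (C,) gates in (0, 1)
    return x * gate[:, None, None]                # channel-wise rescaling
```

Channels whose gate saturates near 1 pass through unchanged, while low-gate channels are suppressed, which is how such a module lets the backbone emphasise polyp-relevant features.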
|
17
|
Pan L, He T, Huang Z, Chen S, Zhang J, Zheng S, Chen X. Radiomics approach with deep learning for predicting T4 obstructive colorectal cancer using CT image. Abdom Radiol (NY) 2023; 48:1246-1259. [PMID: 36859730 DOI: 10.1007/s00261-023-03838-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 01/27/2023] [Accepted: 01/27/2023] [Indexed: 03/03/2023]
Abstract
OBJECTIVES Patients with T4 obstructive colorectal cancer (OCC) have a high mortality rate. An accurate distinction between T4 and T1-T3 (NT4) OCC is therefore an important part of preoperative evaluation, especially in the emergency setting. This paper introduces three models, based on radiomics, deep learning, and deep learning-based radiomics, to identify T4 OCC. METHODS We established a dataset of computed tomography (CT) images of 164 patients with pathologically confirmed OCC, from which 2537 slices were extracted. First, since T4 tumors penetrate the bowel wall and involve adjacent organs, we explored whether the peritumoral region contributes to the assessment of T4 OCC. We then visualized the radiomics and deep learning features using t-distributed stochastic neighbor embedding (t-SNE). Finally, we built a merged model by fusing radiomic features with deep learning features. The performance of each model was evaluated by the area under the receiver operating characteristic curve (AUC). RESULTS In the test cohort, the AUC of the radiomics model in the dilated region of interest (dROI) was 0.770, and the AUC of the deep learning model with patches extended by 20 pixels reached 0.936. Combining radiomics and deep learning characteristics, our method achieved an AUC of 0.947 for the T4 versus non-T4 (NT4) classification, rising to 0.950 with the addition of clinical features. CONCLUSION The merged deep learning-radiomics model outperformed the deep learning model and significantly outperformed the radiomics model. The experimental results also demonstrate that including the peritumoral region improves the prediction performance of both the radiomics model and the deep learning model.
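One plausible reading of the merged model is late fusion by concatenating per-patient radiomic and deep feature vectors, evaluated by AUC. The sketch below is an assumption for illustration, not the paper's code; the AUC is computed via the Mann-Whitney U statistic, which is equivalent to the area under the ROC curve:

```python
import numpy as np

def fuse_features(radiomic, deep):
    """Late fusion sketch: concatenate radiomic and deep feature vectors
    per patient along the feature axis before training a classifier."""
    return np.concatenate([radiomic, deep], axis=1)

def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive case scores above a random negative case (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Perfectly ranked scores give AUC 1.0 and random ranking tends toward 0.5, which frames figures such as the merged model's 0.947.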
Affiliation(s)
- Lin Pan: College of Physics and Information Engineering, Fuzhou University, Fuzhou, 350108, China
- Tian He: College of Physics and Information Engineering, Fuzhou University, Fuzhou, 350108, China
- Zihan Huang: School of Future Technology, Harbin Institute of Technology, Harbin, 150000, China
- Shuai Chen: Department of Emergency Surgery, Fujian Medical University Union Hospital, Fuzhou, 350001, China
- Junrong Zhang: Department of Emergency Surgery, Fujian Medical University Union Hospital, Fuzhou, 350001, China
- Shaohua Zheng: College of Physics and Information Engineering, Fuzhou University, Fuzhou, 350108, China
- Xianqiang Chen: Department of Emergency Surgery, Fujian Medical University Union Hospital, Fuzhou, 350001, China
|
18
|
Gan P, Li P, Xia H, Zhou X, Tang X. The application of artificial intelligence in improving colonoscopic adenoma detection rate: Where are we and where are we going. GASTROENTEROLOGIA Y HEPATOLOGIA 2023; 46:203-213. [PMID: 35489584 DOI: 10.1016/j.gastrohep.2022.03.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2021] [Revised: 03/08/2022] [Accepted: 03/18/2022] [Indexed: 02/08/2023]
Abstract
Colorectal cancer (CRC) is one of the most common malignant tumors worldwide. Colonoscopy is the crucial examination technique in CRC screening programs for early detection of precursor lesions and treatment of early colorectal cancer, which can significantly reduce the morbidity and mortality of CRC. However, pooled polyp miss rates during colonoscopic examination are as high as 22%. Artificial intelligence (AI) provides a promising way to improve the colonoscopic adenoma detection rate (ADR): it might assist endoscopists in avoiding missed polyps and offer an accurate optical diagnosis of suspected lesions. Herein, we describe some of the milestone studies in the use of AI for colonoscopy and future directions for AI in improving colonoscopic ADR.
Affiliation(s)
- Peiling Gan: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Peiling Li: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Huifang Xia: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Xian Zhou: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China
- Xiaowei Tang: Department of Gastroenterology, Affiliated Hospital of Southwest Medical University, Luzhou, China; Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, China
|
19
|
González-Bueno Puyal J, Brandao P, Ahmad OF, Bhatia KK, Toth D, Kader R, Lovat L, Mountney P, Stoyanov D. Spatio-temporal classification for polyp diagnosis. BIOMEDICAL OPTICS EXPRESS 2023; 14:593-607. [PMID: 36874484 PMCID: PMC9979670 DOI: 10.1364/boe.473446] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 11/25/2022] [Accepted: 12/06/2022] [Indexed: 06/18/2023]
Abstract
Colonoscopy remains the gold-standard investigation for colorectal cancer screening, as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance varies during a procedure, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness in extensive experiments on both internal and openly available benchmark datasets.
Affiliation(s)
- Juana González-Bueno Puyal: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK; Odin Vision, London W1W 7TY, UK
- Omer F. Ahmad: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Rawen Kader: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Laurence Lovat: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
- Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London W1W 7TY, UK
|
20
|
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. [PMID: 36084720 DOI: 10.1016/j.gie.2022.08.043] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 08/24/2022] [Accepted: 08/30/2022] [Indexed: 01/28/2023]
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known about the number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data on polyp detection, polyp characterization, or quality of colonoscopy. To assess the accessibility of the databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 with open access with barriers, and 15 with regulated access. The 22 open-access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were generally well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases. CONCLUSIONS This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel: Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker: Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
|
21
|
Colon cancer stage detection in colonoscopy images using YOLOv3 MSF deep learning architecture. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104283] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
22
|
Das HS, Das A, Neog A, Mallik S, Bora K, Zhao Z. Breast cancer detection: Shallow convolutional neural network against deep convolutional neural networks based approach. Front Genet 2023; 13:1097207. [PMID: 36685963 PMCID: PMC9846574 DOI: 10.3389/fgene.2022.1097207] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 12/15/2022] [Indexed: 01/06/2023] Open
Abstract
Introduction: Of all the cancers that afflict women, breast cancer (BC) has the second-highest mortality rate, and it is the most common cancer affecting women globally. Breast tumors are of two types: benign (less harmful and unlikely to become breast cancer) and malignant (very dangerous, producing aberrant cells that can result in cancer). Methods: To find breast abnormalities such as masses and micro-calcifications, trained radiologists typically examine mammographic images; this study focuses on computer-aided diagnosis to help radiologists make more precise diagnoses of breast cancer. We compare the performance of proposed shallow convolutional neural network architectures with different specifications against pre-trained deep convolutional neural network architectures trained on mammography images. Mammogram images are first pre-processed for automatic identification of BC. In the first approach, the resulting data are fed into three shallow convolutional neural networks with representational differences. In the second approach, transfer learning via fine-tuning is used to feed the same collection of images into the pre-trained convolutional neural networks VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. Results: In our experiments on two datasets, the accuracies for the CBIS-DDSM and INbreast datasets are 80.4% and 89.2%, and 87.8% and 95.1%, respectively. Discussion: The experimental findings indicate that the deep network-based approach with precise fine-tuning outperforms all other state-of-the-art techniques on both datasets.
Affiliation(s)
- Himanish Shekhar Das: Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Akalpita Das: Department of Computer Science and Engineering, GIMT Guwahati, Guwahati, India
- Anupal Neog: Department of AI and Machine Learning COE, IQVIA, Bengaluru, Karnataka, India
- Saurav Mallik: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States; Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, United States; Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
- Kangkana Bora: Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Zhongming Zhao: Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States; Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
|
23
|
Pan J, Li R, Liu H, Hu Y, Zheng W, Yan B, Yang Y, Xiao Y. Highlight removal for endoscopic images based on accelerated adaptive non-convex RPCA decomposition. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 228:107240. [PMID: 36417837 DOI: 10.1016/j.cmpb.2022.107240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Revised: 10/27/2022] [Accepted: 11/08/2022] [Indexed: 06/16/2023]
Abstract
OBJECTIVE Highlights frequently occur in endoscopic images because of their special imaging environment. They not only increase the difficulty of observation and diagnosis for surgeons but also degrade the performance of mixed/augmented reality techniques in surgical navigation. METHODS In this paper, we propose a novel accelerated adaptive non-convex robust principal component analysis method (AANC-RPCA) to remove highlights in endoscopic images. We first detect the absolute highlight pixels of the enhanced endoscopic images with adaptive threshold segmentation. A quasi-convex function is proposed to approximate a new non-convex objective function; with the detected highlight pixels and the quasi-convex function, the method introduces a gradient to shrink the sparse matrix and obtains faster convergence. We then divide the image into multiple blocks and perform parallel computation to enhance efficiency. Finally, we design a weighted template, decaying outward with dilation and linear filtering, to reconstruct the endoscopic images. Our approach is almost independent of hyper-parameters and achieves adaptive decomposition. RESULTS The method has been verified on multiple types of endoscopic images through experiments and clinical blind tests. The results demonstrate that it recovers images with more detail in a shorter time (about 3-5 times faster). CONCLUSION Together with the user study, both the quantitative and qualitative results indicate that our approach can be highly useful for endoscopic images. Compared with existing highlight removal approaches, our method obtains state-of-the-art results and has the potential to be applied in various medical image processing pipelines.
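The first stage of the pipeline, adaptive detection of highlight pixels, can be sketched with a simple image-dependent threshold. The mean-plus-k-standard-deviations rule and the choice of `k` are my illustrative assumptions; the paper's adaptive segmentation and the RPCA low-rank/sparse decomposition that follows are not reproduced here:

```python
import numpy as np

def detect_highlights(gray, k=2.0):
    """Flag candidate specular-highlight pixels in a grayscale frame whose
    intensity exceeds an adaptive threshold (frame mean + k * frame std).
    Returns a boolean mask suitable as the sparse-support prior for a
    subsequent low-rank + sparse (RPCA-style) decomposition."""
    thr = gray.mean() + k * gray.std()
    return gray > thr
```

Because the threshold adapts to each frame's statistics, the same rule flags saturated specularities across differently exposed endoscopic images without per-image tuning.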
Affiliation(s)
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; PENG CHENG Laboratory, Shenzhen 518000, China
- Ranyang Li
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; PENG CHENG Laboratory, Shenzhen 518000, China
- Hongjun Liu
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Yong Hu
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Wenhao Zheng
- State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Bin Yan
- The Department of Gastroenterology and Hepatology, Chinese PLA General Hospital, Beijing 100853, China
- Yunsheng Yang
- The Department of Gastroenterology and Hepatology, Chinese PLA General Hospital, Beijing 100853, China
- Yi Xiao
- The Department of General Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing 100730, China

24
Liu G, Zhao J, Tian G, Li S, Lu Y. Visualizing knowledge evolution trends and research hotspots of artificial intelligence in colorectal cancer: A bibliometric analysis. Front Oncol 2022; 12:925924. [PMID: 36518311 PMCID: PMC9742812 DOI: 10.3389/fonc.2022.925924] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Accepted: 11/11/2022] [Indexed: 10/29/2023] Open
Abstract
BACKGROUND In recent years, the rapid development of artificial intelligence (AI) technology has created new diagnostic and therapeutic opportunities for colorectal cancer (CRC). Numerous academic and clinical studies have demonstrated that high-level auxiliary diagnosis and treatment systems based on AI technology can significantly improve the readability of medical data, provide an objective, reliable, and comprehensive reference for physicians, reduce the experience gap between physicians, and aid physicians in making more accurate diagnostic decisions. In this study, we used bibliometric techniques to visually analyze the literature on AI in the CRC field and to summarize the current situation and research hotspots. METHODS The relevant literature on AI in CRC research was obtained from the Web of Science Core Collection (WoSCC) database. CiteSpace was used to analyze the number of papers, countries, institutions, authors, journals, cited literature, and keywords of the included publications and to generate a visual knowledge map. The present study aims to evaluate the origin, current hotspots, and research trends of AI in CRC using bibliometric analysis. RESULTS As of March 2022, 64 nations/regions, 230 institutions, 245 journals, and 300 authors had published 562 AI-related articles in the field of CRC, with annual output growing exponentially since 2016. China and the United States were the largest contributors, hosting the most productive research institutions and the closest collaboration networks. The World Journal of Gastroenterology is the most prolific journal in this field. Six topics emerged from high-frequency keyword cluster analysis: diagnosis and treatment, genes and immunology, intestinal polyps, tumor grading, gastrointestinal endoscopy, and prognosis. CONCLUSION AI in CRC has become a popular research topic in recent years. Our bibliometric analysis offers a clearer picture of the current situation and trends in this field, and the quantitative indicators can guide the research and application efforts of scholars worldwide.
Affiliation(s)
- Guangwei Liu
- Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, Qingdao University, Qingdao, Shandong, China
- Jun Zhao
- Department of Pharmacy, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Guangye Tian
- School of Control Science and Engineering, Shandong University, Jinan, Shandong, China
- Shuai Li
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Yun Lu
- Department of Gastrointestinal Surgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, Qingdao University, Qingdao, Shandong, China

25
Parkash O, Siddiqui ATS, Jiwani U, Rind F, Padhani ZA, Rizvi A, Hoodbhoy Z, Das JK. Diagnostic accuracy of artificial intelligence for detecting gastrointestinal luminal pathologies: A systematic review and meta-analysis. Front Med (Lausanne) 2022; 9:1018937. [PMID: 36405592 PMCID: PMC9672666 DOI: 10.3389/fmed.2022.1018937] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 10/03/2022] [Indexed: 11/06/2022] Open
Abstract
Background Artificial Intelligence (AI) holds considerable promise for diagnostics in gastroenterology. This systematic review and meta-analysis assesses the diagnostic accuracy of AI models against the gold standard of expert assessment and histopathology for various gastrointestinal (GI) luminal pathologies, including polyps, neoplasms, and inflammatory bowel disease. Methods We searched the PubMed, CINAHL, Wiley Cochrane Library, and Web of Science electronic databases to identify studies assessing the diagnostic performance of AI models for GI luminal pathologies. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. We performed a meta-analysis and fitted hierarchical summary receiver operating characteristic (HSROC) curves. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Subgroup analyses were conducted by type of GI luminal disease, AI model, reference standard, and type of data used for analysis. This study is registered with PROSPERO (CRD42021288360). Findings We included 73 studies, of which 31 were externally validated and provided sufficient information for inclusion in the meta-analysis. The overall sensitivity of AI for detecting GI luminal pathologies was 91.9% (95% CI: 89.0-94.1) and the specificity was 91.7% (95% CI: 87.4-94.7). Deep learning models (sensitivity: 89.8%, specificity: 91.9%) and ensemble methods (sensitivity: 95.4%, specificity: 90.9%) were the most commonly used models in the included studies. The majority of studies (n = 56, 76.7%) had a high risk of selection bias, while 74% (n = 54) were low risk on the reference standard domain and 67% (n = 49) were low risk for flow and timing bias. Interpretation The review suggests high sensitivity and specificity of AI models for the detection of GI luminal pathologies. Large, multi-center trials are needed in both high-income and low- and middle-income countries to assess the performance of these AI models in real clinical settings and their impact on diagnosis and prognosis. Systematic review registration [https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=288360], identifier [CRD42021288360].
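The per-study step of such a review — turning an extracted 2x2 contingency table into sensitivity and specificity with confidence intervals — can be sketched as follows (a generic illustration, not the review's HSROC model; the Wilson score interval is one common choice of CI):

```python
from math import sqrt

def diagnostic_accuracy(tp, fp, fn, tn, z=1.96):
    """Sensitivity and specificity, each with a Wilson score interval,
    from the four cells of a binary diagnostic 2x2 table."""
    def wilson(k, n):
        # Wilson score interval for k successes out of n trials
        p = k / n
        denom = 1 + z ** 2 / n
        centre = (p + z ** 2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
        return centre - half, centre + half

    sens = tp / (tp + fn)   # true positive rate
    spec = tn / (tn + fp)   # true negative rate
    return {"sensitivity": (sens, wilson(tp, tp + fn)),
            "specificity": (spec, wilson(tn, tn + fp))}
```

Pooling these per-study estimates across heterogeneous studies is what the HSROC model in the review adds on top of this arithmetic.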
Affiliation(s)
- Om Parkash
- Department of Medicine, Aga Khan University, Karachi, Pakistan
- Uswa Jiwani
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Fahad Rind
- Head and Neck Oncology, The Ohio State University, Columbus, OH, United States
- Zahra Ali Padhani
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Arjumand Rizvi
- Center of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Zahra Hoodbhoy
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Jai K. Das
- Institute for Global Health and Development, Aga Khan University, Karachi, Pakistan
- Department of Pediatrics and Child Health, Aga Khan University, Karachi, Pakistan
- Correspondence: Jai K. Das

26
Turan M, Durmus F. UC-NfNet: Deep learning-enabled assessment of ulcerative colitis from colonoscopy images. Med Image Anal 2022; 82:102587. [DOI: 10.1016/j.media.2022.102587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 07/12/2022] [Accepted: 08/17/2022] [Indexed: 10/31/2022]
27
Automatic detection of crohn disease in wireless capsule endoscopic images using a deep convolutional neural network. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04146-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The diagnosis of Crohn's disease (CD) in the small bowel is generally performed by reviewing a very large number of images captured by capsule endoscopy (CE), a diagnostic technique that entails a heavy workload for specialists in terms of time spent reviewing images. This paper presents a convolutional neural network capable of classifying CE images to identify those affected by lesions indicative of the disease. The architecture of the proposed network was custom-designed for this image classification problem, allowing design decisions aimed at improving accuracy and processing speed relative to state-of-the-art deep-learning reference architectures. The experiments were carried out on a set of 15,972 images extracted from 31 CE videos of patients affected by CD, 7,986 of which showed lesions associated with the disease. Training, validation/selection, and evaluation of the network used 70%, 10%, and 20% of the images, respectively. The ROC curve obtained on the test set has an area greater than 0.997, with operating points in the 95-99% sensitivity range associated with specificities of 99-96%. These figures are higher than those achieved by the EfficientNet-B5, VGG-16, Xception, and ResNet networks, all of which also require a significantly higher average processing time per image than the proposed architecture. The network is therefore promising enough to be considered for integration into the tools specialists use in their diagnosis of CD. In the sample of images analysed, the network detected 99% of the images with lesions while filtering out from specialist review 96% of those with no signs of disease.
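The 70/10/20 partition described above can be sketched as a simple shuffled split (an illustrative sketch; the function name and seed are assumptions, not from the paper):

```python
import random

def split_dataset(items, train=0.7, val=0.1, seed=42):
    """Shuffle and split a list of items into train / validation / test
    partitions (70/10/20 by default), mirroring the ratios described."""
    items = list(items)
    random.Random(seed).shuffle(items)   # deterministic shuffle
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])     # remainder becomes the test set
```

Note that for CE data, frames from the same video should generally be kept within a single partition (i.e., split at the video or patient level before expanding to frames) so that near-duplicate frames do not leak between training and test sets; whether the paper splits at frame or video level is not stated in this abstract.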
28
Yu T, Lin N, Zhang X, Pan Y, Hu H, Zheng W, Liu J, Hu W, Duan H, Si J. An end-to-end tracking method for polyp detectors in colonoscopy videos. Artif Intell Med 2022; 131:102363. [DOI: 10.1016/j.artmed.2022.102363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 05/04/2022] [Accepted: 07/11/2022] [Indexed: 12/09/2022]
29
Ferreira H, Serranho P, Guimarães P, Trindade R, Martins J, Moreira PI, Ambrósio AF, Castelo-Branco M, Bernardes R. Stage-independent biomarkers for Alzheimer's disease from the living retina: an animal study. Sci Rep 2022; 12:13667. [PMID: 35953633 PMCID: PMC9372147 DOI: 10.1038/s41598-022-18113-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 08/05/2022] [Indexed: 12/02/2022] Open
Abstract
The early diagnosis of neurodegenerative disorders remains an open issue despite the many efforts to address it. In particular, Alzheimer's disease (AD) commonly remains undiagnosed for over a decade before the first symptoms. Optical coherence tomography (OCT) is now common and widely available and has been used to image the retina of AD patients and healthy controls in the search for biomarkers of neurodegeneration. However, early diagnosis tools would need to rely on images of patients in early AD stages, which are not available due to late diagnosis. To shed light on how to overcome this obstacle, we resorted to 57 wild-type mice and 57 mice of a triple-transgenic model of AD, training a network on mice aged 3, 4, and 8 months and classifying mice at the ages of 1, 2, and 12 months. To this end, we computed fundus images from OCT data and trained a convolutional neural network (CNN) to classify them into the wild-type or transgenic group. CNN accuracy ranged from 80 to 88% for mice outside the training group's ages, raising the possibility of diagnosing AD before the first symptoms through non-invasive imaging of the retina.
Affiliation(s)
- Hugo Ferreira
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Pedro Serranho
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Department of Sciences and Technology, Universidade Aberta, Rua da Escola Politécnica, n.º 147, 1269-001, Lisboa, Portugal
- Pedro Guimarães
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Rita Trindade
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- João Martins
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Paula I Moreira
- Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Laboratory of Physiology, Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Center for Neuroscience and Cell Biology (CNC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- António Francisco Ambrósio
- Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Rui Bernardes
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal; Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal

30
Diabetic retinopathy screening using improved support vector domain description: a clinical study. Soft comput 2022. [DOI: 10.1007/s00500-022-07387-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
31
Kim Y, Park S, Kim H, Kim SS, Lim JS, Kim S, Choi K, Seo H. A Bounding-Box Regression Model for Colorectal Tumor Detection in CT Images Via Two Contrary Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:3793-3796. [PMID: 36085607 DOI: 10.1109/embc48229.2022.9871285] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Deep learning has attracted considerable interest in medical image analysis, and various deep learning-based techniques have been introduced to aid diagnosis from patient CT images. We propose an auxiliary diagnostic model that detects colorectal tumors in CT images. The model combines two contrary networks, a Detection Transformer and an Hourglass network. Furthermore, to improve the performance of the model, we propose an efficient method for connecting the two contrary models using intermediate prediction information. A total of 3,509 patients (193,567 CT images) were included in the experiments, and our model outperforms conventional models in colorectal tumor detection. Clinical Relevance - The proposed model automatically detects colorectal tumors and provides bounding boxes in the CT images. Colorectal tumors are among the most common diseases, and their mortality rate is high enough that timely treatment is required. On the patient CT data, the model achieves a sensitivity (recall) of 84.73% and a precision of 88.25% for tumor detection. The in-slice tumor detection performance shows an IoU of 0.56, a sensitivity of 0.67, and a precision of 0.68.
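The in-slice IoU figure quoted above is the standard intersection-over-union between predicted and ground-truth boxes; for reference, it can be computed as follows (a generic sketch, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IoU of 0.56 thus means the predicted box overlaps slightly more than half of the combined predicted-plus-true area, a moderate localization quality by detection-benchmark conventions (where 0.5 is a common match threshold).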
32
Hori K, Ikematsu H, Yamamoto Y, Matsuzaki H, Takeshita N, Shinmura K, Yoda Y, Kiuchi T, Takemoto S, Yokota H, Yano T. Detecting colon polyps in endoscopic images using artificial intelligence constructed with automated collection of annotated images from an endoscopy reporting system. Dig Endosc 2022; 34:1021-1029. [PMID: 34748658 DOI: 10.1111/den.14185] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 11/02/2021] [Accepted: 11/05/2021] [Indexed: 12/28/2022]
Abstract
BACKGROUND Artificial intelligence (AI) has made considerable progress in image recognition, especially in the analysis of endoscopic images, and the availability of large-scale annotated datasets has contributed to this progress. High-quality annotated endoscopic image datasets are widely available, particularly in Japan. A system that collects annotated data reported in daily practice could aid in accumulating a significant number of high-quality annotated datasets. AIM We assessed the validity of using daily annotated endoscopic images from a reporting system to build a prototype AI model for polyp detection. METHODS We constructed an automated system that collects daily annotated datasets from an endoscopy reporting system. Key images were selected and annotated for each case during daily practice only, not retrospectively. We automatically extracted annotated endoscopic images of diminutive colon polyps diagnosed during the study period (March-September 2018) using diagnostic keywords, and additionally collected normal colon images. The collected dataset was divided into training and validation sets to build and evaluate the AI system. The detection model was developed using the RetinaNet deep learning algorithm. RESULTS The automated system collected 47,391 endoscopic images from 745 colonoscopies and extracted 1,356 key colon polyp images with localized annotations. The sensitivity, specificity, and accuracy of our AI model were 97.0%, 97.7%, and 97.3% (n = 300), respectively. CONCLUSION The automated system enabled the development of a high-performance colon polyp detector from images in an endoscopy reporting system without retrospective annotation effort.
Affiliation(s)
- Keisuke Hori
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Chiba, Japan; Division of Science and Technology for Endoscopy, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center Hospital East, Chiba, Japan
- Hiroaki Ikematsu
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Chiba, Japan; Division of Science and Technology for Endoscopy, Exploratory Oncology Research & Clinical Trial Center, National Cancer Center Hospital East, Chiba, Japan
- Yoichi Yamamoto
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Chiba, Japan
- Hiroki Matsuzaki
- Medical Device Innovation Center, National Cancer Center Hospital East, Chiba, Japan
- Nobuyoshi Takeshita
- Medical Device Innovation Center, National Cancer Center Hospital East, Chiba, Japan
- Kensuke Shinmura
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Chiba, Japan
- Yusuke Yoda
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Chiba, Japan; Medical Device Innovation Center, National Cancer Center Hospital East, Chiba, Japan
- Takayoshi Kiuchi
- System Engineering Division, FUJIFILM Medical IT Solutions Co., Ltd., Tokyo, Japan
- Satoko Takemoto
- Image Processing Research Team, RIKEN Center for Advanced Photonics, Saitama, Japan
- Hideo Yokota
- Image Processing Research Team, RIKEN Center for Advanced Photonics, Saitama, Japan; Advanced Data Science Project, RIKEN Information R&D and Strategy Headquarters, Saitama, Japan
- Tomonori Yano
- Department of Gastroenterology and Endoscopy, National Cancer Center Hospital East, Chiba, Japan; Medical Device Innovation Center, National Cancer Center Hospital East, Chiba, Japan

33
Luca M, Ciobanu A. Polyp detection in video colonoscopy using deep learning. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-219276] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Automatic processing of video colonoscopy is a challenge, and further development of computer-assisted diagnosis is very helpful for assessing the correctness of the exam, for e-learning and training, for statistics on polyp malignancy, and for polyp surveillance. New devices and programming frameworks are emerging, and deep learning has already begun to deliver impressive results in the quest for fast, accurate polyp detection software. This paper presents a successful attempt at detecting intestinal polyps in real time in video colonoscopy with deep learning, using MobileNet.
Affiliation(s)
- Mihaela Luca
- Institute of Computer Science, Romanian Academy Iaşi Branch, Iaşi, Romania
- Adrian Ciobanu
- Institute of Computer Science, Romanian Academy Iaşi Branch, Iaşi, Romania

34

35
Luo H, Li S, Zeng Y, Cheema H, Otegbeye E, Ahmed S, Chapman WC, Mutch M, Zhou C, Zhu Q. Human colorectal cancer tissue assessment using optical coherence tomography catheter and deep learning. JOURNAL OF BIOPHOTONICS 2022; 15:e202100349. [PMID: 35150067 PMCID: PMC9581715 DOI: 10.1002/jbio.202100349] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/09/2021] [Revised: 01/16/2022] [Accepted: 02/09/2022] [Indexed: 05/02/2023]
Abstract
Optical coherence tomography (OCT) can differentiate normal colonic mucosa from neoplasia, potentially offering a new mechanism of endoscopic tissue assessment and biopsy targeting, with high optical resolution and an imaging depth of ~1 mm. Recent advances in convolutional neural networks (CNN) have enabled applications in ophthalmology, cardiology, and gastroenterology malignancy detection with high sensitivity and specificity. Here, we describe a miniaturized OCT catheter and a residual neural network (ResNet)-based deep learning model, built and trained to perform automatic image processing and real-time diagnosis of the OCT images. The OCT catheter has an outer diameter of 3.8 mm, a lateral resolution of ~7 μm, and an axial resolution of ~6 μm. A customized ResNet is used to classify OCT catheter colorectal images, achieving an area under the receiver operating characteristic (ROC) curve (AUC) of 0.975 in distinguishing normal from cancerous colorectal tissue images.
Affiliation(s)
- Hongbo Luo
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Shuying Li
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Yifeng Zeng
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Hassam Cheema
- Department of Anatomic & Molecular Pathology, Washington University School of Medicine, St. Louis, Missouri, USA
- Ebunoluwa Otegbeye
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri, USA
- Safee Ahmed
- Department of Anatomic & Clinical Pathology, Washington University School of Medicine, St. Louis, Missouri, USA
- William C. Chapman
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri, USA
- Matthew Mutch
- Department of Surgery, Washington University School of Medicine, St. Louis, Missouri, USA
- Chao Zhou
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Quing Zhu
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Department of Radiology, Washington University School of Medicine, St. Louis, Missouri, USA

36
Yang CB, Kim SH, Lim YJ. Preparation of image databases for artificial intelligence algorithm development in gastrointestinal endoscopy. Clin Endosc 2022; 55:594-604. [PMID: 35636749 PMCID: PMC9539300 DOI: 10.5946/ce.2021.229] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Accepted: 03/07/2022] [Indexed: 12/09/2022] Open
Abstract
Over the past decade, technological advances in deep learning have led to the introduction of artificial intelligence (AI) in medical imaging. The most commonly used structure in image recognition is the convolutional neural network, which mimics the action of the human visual cortex. The applications of AI in gastrointestinal endoscopy are diverse. Computer-aided diagnosis has achieved remarkable outcomes with recent improvements in machine-learning techniques and advances in computer performance. Despite some hurdles, the implementation of AI-assisted clinical practice is expected to aid endoscopists in real-time decision-making. In this summary, we reviewed state-of-the-art AI in the field of gastrointestinal endoscopy and offered a practical guide for building a learning image dataset for algorithm development.
Affiliation(s)
- Chang Bong Yang
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
- Sang Hoon Kim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
- Yun Jeong Lim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea

37
Fati SM, Senan EM, Azar AT. Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22114079. [PMID: 35684696 PMCID: PMC9185306 DOI: 10.3390/s22114079] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Revised: 05/21/2022] [Accepted: 05/24/2022] [Indexed: 05/27/2023]
Abstract
Every year, nearly two million people die as a result of gastrointestinal (GI) disorders. Lower gastrointestinal tract tumors are one of the leading causes of death worldwide. Thus, early detection of the type of tumor is of great importance in the survival of patients. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy technology is essential for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors. Videography generates 5000 frames, which require extensive analysis and take a long time to follow all frames. Thus, artificial intelligence techniques, which have a higher ability to diagnose and assist physicians in making accurate diagnostic decisions, solve these challenges. In this study, many multi-methodologies were developed, where the work was divided into four proposed systems; each system has more than one diagnostic method. The first proposed system utilizes artificial neural networks (ANN) and feed-forward neural networks (FFNN) algorithms based on extracting hybrid features by three algorithms: local binary pattern (LBP), gray level co-occurrence matrix (GLCM), and fuzzy color histogram (FCH) algorithms. The second proposed system uses pre-trained CNN models which are the GoogLeNet and AlexNet based on the extraction of deep feature maps and their classification with high accuracy. The third proposed method uses hybrid techniques consisting of two blocks: the first block of CNN models (GoogLeNet and AlexNet) to extract feature maps; the second block is the support vector machine (SVM) algorithm for classifying deep feature maps. The fourth proposed system uses ANN and FFNN based on the hybrid features between CNN models (GoogLeNet and AlexNet) and LBP, GLCM and FCH algorithms. All the proposed systems achieved superior results in diagnosing endoscopic images for the early detection of lower gastrointestinal diseases. 
All systems produced promising results; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM and FCH achieved an accuracy of 99.3%, precision of 99.2%, sensitivity of 99%, specificity of 100%, and AUC of 99.87%.
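As a rough illustration of the handcrafted half of such a hybrid pipeline, the sketch below computes a gray-level co-occurrence matrix and two Haralick-style texture features in plain NumPy. This is a simplification, not the study's implementation; the quantization level, pixel offset, and feature choices are assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    q = (img.astype(float) * levels / (img.max() + 1)).astype(int)  # quantize
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize to joint probabilities

def haralick_features(p):
    """Contrast and homogeneity from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return contrast, homogeneity
```

In the abstract's pipeline, such texture features would be concatenated with LBP and FCH descriptors before being fed to the ANN/FFNN classifier.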
Affiliation(s)
- Suliman Mohamed Fati
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004, India
- Ahmad Taher Azar
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
38
Taghiakbari M, Hammar C, Frenn M, Djinbachian R, Pohl H, Deslandres E, Bouchard S, Bouin M, von Renteln D. Non-optical polyp-based resect and discard strategy: A prospective clinical study. World J Gastroenterol 2022; 28:2137-2147. [PMID: 35664039 PMCID: PMC9134134 DOI: 10.3748/wjg.v28.i19.2137] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 03/21/2022] [Accepted: 04/09/2022] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Post-polypectomy surveillance intervals are currently determined based on pathology results. AIM To evaluate a polyp-based resect and discard model that assigns surveillance intervals based solely on polyp number and size. METHODS Patients undergoing elective colonoscopies at the Montreal University Medical Center were enrolled prospectively. The polyp-based strategy was used to assign the next surveillance interval using polyp size and number. Surveillance intervals were also assigned using optical diagnosis for small polyps (< 10 mm). The primary outcome was surveillance interval agreement between the polyp-based model, optical diagnosis, and the pathology-based reference standard using the 2020 United States Multi-Society Task Force guidelines. Secondary outcomes included the proportion of reduction in required histopathology evaluations and the proportion of immediate post-colonoscopy recommendations provided to patients. RESULTS In total, 944 patients were enrolled (mean age 62.6 years; 49.3% male; 933 polyps). The surveillance interval agreement for the polyp-based strategy was 98.0% [95% confidence interval (CI): 0.97-0.99] compared with pathology-based assignment. Optical diagnosis-based intervals achieved 95.8% (95%CI: 0.94-0.97) agreement with pathology. When using the polyp-based strategy and optical diagnosis, the need for pathology assessment was reduced by 87.8% and 70.6%, respectively. The polyp-based strategy provided 93.7% of patients with immediate surveillance interval recommendations vs 76.1% for optical diagnosis. CONCLUSION The polyp-based strategy achieved almost perfect surveillance interval agreement compared with pathology-based assignments, significantly reduced the number of required pathology evaluations, and provided most patients with immediate surveillance interval recommendations.
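To make the idea of a purely polyp-based assignment concrete, here is a hypothetical sketch. The function name and cut-offs below only loosely mirror the 2020 USMSTF tiers and do not reproduce the study's validated rule; they are illustrative assumptions, not clinical guidance.

```python
def surveillance_interval_years(num_polyps: int, max_size_mm: float) -> int:
    """Assign the next surveillance interval from polyp count and size alone.
    Hypothetical thresholds, loosely following 2020 USMSTF tiers."""
    if num_polyps > 10:
        return 1            # >10 polyps: short-interval follow-up
    if max_size_mm >= 10 or num_polyps >= 5:
        return 3            # any large polyp, or 5-10 polyps
    if num_polyps >= 3:
        return 5            # 3-4 small polyps
    return 10               # 1-2 small polyps
```

Because such a rule needs no histology, the interval can be communicated immediately after the colonoscopy, which is the workflow advantage the study quantifies.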
Affiliation(s)
- Mahsa Taghiakbari
- Department of Gastroenterology, Montreal University Hospital Research Center (CRCHUM), Montréal H2X 0A9, Quebec, Canada
- Celia Hammar
- Department of Gastroenterology, Montreal University Hospital Research Center (CRCHUM), Montréal H2X 0A9, Quebec, Canada
- Department of Gastroenterology, University of Montreal, Faculty of Medicine, Montreal H2X 0A9, Quebec, Canada
- Mira Frenn
- Department of Gastroenterology, Montreal University Hospital Research Center (CRCHUM), Montréal H2X 0A9, Quebec, Canada
- Department of Gastroenterology, University of Montreal, Faculty of Medicine, Montreal H2X 0A9, Quebec, Canada
- Roupen Djinbachian
- Department of Gastroenterology, Montreal University Hospital Research Center (CRCHUM), Montréal H2X 0A9, Quebec, Canada
- Department of Internal Medicine, University of Montreal Hospital Center (CHUM), Montreal H2X 0A9, Quebec, Canada
- Heiko Pohl
- Department of Medicine, Veterans Affairs Medical Center, White River Junction, VT 05009, United States
- Department of Gastroenterology, Dartmouth Geisel School of Medicine and The Dartmouth Institute, Hanover, NH 03755, United States
- Erik Deslandres
- Department of Gastroenterology, Montreal University Hospital Research Center (CRCHUM), Montréal H2X 0A9, Quebec, Canada
- Simon Bouchard
- Department of Gastroenterology, Montreal University Hospital Research Center (CRCHUM), Montréal H2X 0A9, Quebec, Canada
- Mickael Bouin
- Department of Gastroenterology, Montreal University Hospital Research Center (CRCHUM), Montréal H2X 0A9, Quebec, Canada
- Daniel von Renteln
- Department of Gastroenterology, Montreal University Hospital Research Center (CRCHUM), Montréal H2X 0A9, Quebec, Canada
39
A deep ensemble learning method for colorectal polyp classification with optimized network parameters. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03689-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Colorectal Cancer (CRC), a leading cause of cancer-related deaths, can be abated by timely polypectomy. Computer-aided classification of polyps helps endoscopists resect in time without submitting the sample for histology. Deep learning-based algorithms are promoted for computer-aided colorectal polyp classification. However, existing methods do not report the hyperparameter settings essential for model optimisation. Furthermore, unlike the polyp types hyperplastic and adenomatous, the third type, serrated adenoma, is difficult to classify due to its hybrid nature. Moreover, automated assessment of polyps is challenging because of the similarities in their patterns; therefore, the strengths of individual weak learners are combined into a weighted ensemble model with optimised hyperparameters for accurate classification. In contrast to existing studies on binary classification, multiclass classification requires evaluation through advanced measures. This study compared six existing convolutional neural networks, in addition to transfer learning, and selected only the best-performing architectures for the ensemble models. Performance evaluation of the proposed method on the UCI and PICCOLO datasets in terms of accuracy (96.3%, 81.2%), precision (95.5%, 82.4%), recall (97.2%, 81.1%), F1-score (96.3%, 81.3%) and model reliability using Cohen's Kappa Coefficient (0.94, 0.62) shows its superiority over existing models. Experiments by other studies on the same dataset yielded 82.5% accuracy with 72.7% recall using SVM and 85.9% accuracy with 87.6% recall using other deep learning methods. The proposed method demonstrates that a weighted ensemble of optimised networks, together with data augmentation, significantly boosts the performance of deep learning-based CAD.
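The weighted soft-voting step can be sketched in a few lines of NumPy. This is a generic illustration, not the paper's implementation; in the paper, the per-model weights would come out of the hyperparameter optimisation the abstract describes.

```python
import numpy as np

def weighted_ensemble(prob_list, weights):
    """Fuse per-model class probabilities by a normalized weighted average.

    prob_list: list of arrays, each (n_samples, n_classes)
    weights:   one scalar weight per model
    Returns (predicted_labels, fused_probabilities).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalize weights
    stacked = np.stack(prob_list)             # (n_models, n_samples, n_classes)
    fused = np.tensordot(w, stacked, axes=1)  # (n_samples, n_classes)
    return fused.argmax(axis=1), fused
```

A stronger model simply gets a larger weight, so its probability estimates dominate the fused decision while weaker learners still contribute on ambiguous samples.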
40
Sharma P, Balabantaray BK, Bora K, Mallik S, Kasugai K, Zhao Z. An Ensemble-Based Deep Convolutional Neural Network for Computer-Aided Polyps Identification From Colonoscopy. Front Genet 2022; 13:844391. [PMID: 35559018 PMCID: PMC9086187 DOI: 10.3389/fgene.2022.844391] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Accepted: 03/14/2022] [Indexed: 01/16/2023] Open
Abstract
Colorectal cancer (CRC) is the third leading cause of cancer death globally. Early detection and removal of precancerous polyps can significantly reduce the chance of CRC patient death. Currently, the polyp detection rate mainly depends on the skill and expertise of gastroenterologists, and over time unidentified polyps can develop into cancer. Machine learning has recently emerged as a powerful method for assisting clinical diagnosis. Several classification models have been proposed to identify polyps, but their performance has not yet been comparable to that of an expert endoscopist. Here, we propose a multiple classifier consultation strategy to create an effective and powerful classifier for polyp identification. This strategy benefits from recent findings that different classification models can better learn and extract various information within the image; therefore, our ensemble classifier can derive a more reliable decision than each individual classifier. The combined information inherits ResNet's advantage of residual connections, while also capturing occluded structures through the depth-wise separable convolution layers of the Xception model. We applied our strategy to still frames extracted from a colonoscopy video, and it outperformed other state-of-the-art techniques, with every reported performance measure exceeding 95%. Our method will help researchers and gastroenterologists develop clinically applicable, computational-guided tools for colonoscopy screening. It may be extended to other clinical diagnoses that rely on images.
Affiliation(s)
- Pallabi Sharma
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Bunil Kumar Balabantaray
- Department of Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, India
- Kangkana Bora
- Computer Science and Information Technology, Cotton University, Guwahati, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Kunio Kasugai
- Department of Gastroenterology, Aichi Medical University, Nagakute, Japan
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Human Genetics Center, School of Public Health, The University of Texas Health Science Center at Houston, Houston, TX, United States
- MD Anderson Cancer Center UTHealth Graduate School of Biomedical Sciences, Houston, TX, United States
41
Detection and Classification of Colorectal Polyp Using Deep Learning. BIOMED RESEARCH INTERNATIONAL 2022; 2022:2805607. [PMID: 35463989 PMCID: PMC9033358 DOI: 10.1155/2022/2805607] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 03/05/2022] [Accepted: 03/11/2022] [Indexed: 11/17/2022]
Abstract
Colorectal Cancer (CRC) is the third most dangerous cancer in the world, and its incidence is increasing day by day, so timely and accurate diagnosis is required to save patients' lives. Cancer grows from polyps, which can be either cancerous or noncancerous; if cancerous polyps are detected accurately and removed in time, the dangerous consequences of cancer can be reduced to a large extent. Colonoscopy is used to detect the presence of colorectal polyps, but manual examinations performed by experts are prone to various errors. Therefore, some researchers have utilized machine and deep learning-based models to automate the diagnosis process. However, existing models suffer from overfitting and gradient vanishing problems. To overcome these problems, a convolutional neural network (CNN)-based deep learning model is proposed. Initially, guided image filtering and dynamic histogram equalization are used to filter and enhance the colonoscopy images. Thereafter, a Single Shot MultiBox Detector (SSD) is used to efficiently detect and classify colorectal polyps from colonoscopy images, and fully connected layers with dropout classify the polyp classes. Extensive experiments on a benchmark dataset show that the proposed model achieves significantly better results than competitive models, detecting and classifying colorectal polyps from colonoscopy images with 92% accuracy.
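The enhancement stage can be illustrated with plain global histogram equalization. The paper uses the dynamic variant, which partitions the histogram before remapping; this simplified NumPy sketch shows only the core cumulative-distribution remapping idea.

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Builds the cumulative distribution of intensities and remaps it
    linearly onto the full 0-255 range via a lookup table.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

The remapped image spreads intensities over the whole dynamic range, which is why such preprocessing tends to make low-contrast mucosal detail easier for a downstream detector to pick up. (The sketch assumes a non-constant image; a constant input would divide by zero.)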
42
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051 PMCID: PMC9007400 DOI: 10.1186/s12880-022-00793-7] [Citation(s) in RCA: 89] [Impact Index Per Article: 44.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 03/30/2022] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge from similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem while saving time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The remaining studies applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g., ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
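The "feature extractor" approach the review favors can be shown in miniature: keep the backbone frozen and train only a lightweight head. The NumPy sketch below stands in a fixed random projection for the pretrained backbone and fits a closed-form ridge head on toy data; everything here (data, dimensions, the ridge head itself) is an illustrative assumption, not a recipe from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(x, w_frozen):
    """'Pretrained backbone' stand-in: a frozen projection plus ReLU."""
    return np.maximum(x @ w_frozen, 0.0)

def fit_ridge_head(feats, y, lam=1e-2):
    """Train only the classification head, in closed form (ridge regression)."""
    d = feats.shape[1]
    return np.linalg.solve(feats.T @ feats + lam * np.eye(d), feats.T @ y)

# Toy data: two linearly separable classes in 5-D
x = rng.normal(size=(200, 5))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

w_frozen = rng.normal(size=(5, 32))   # never updated -- the "frozen backbone"
feats = extract_features(x, w_frozen)
head = fit_ridge_head(feats, y)       # only these 32 weights are learned
acc = np.mean((feats @ head > 0.5) == y)
```

The point of the pattern is the cost profile: the expensive part (the backbone) runs only in inference mode, and the trainable part is tiny, which is exactly why the feature-extractor configuration saves compute on small medical datasets.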
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
43
Tang CP, Hsieh CH, Lin TL. Computer-Aided Image Enhanced Endoscopy Automated System to Boost Polyp and Adenoma Detection Accuracy. Diagnostics (Basel) 2022; 12:diagnostics12040968. [PMID: 35454016 PMCID: PMC9025080 DOI: 10.3390/diagnostics12040968] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 04/03/2022] [Accepted: 04/08/2022] [Indexed: 01/04/2023] Open
Abstract
Colonoscopy is the gold standard for detecting colon polyps at an early stage; early detection, characterization, and resection of polyps decrease colon cancer incidence. Nevertheless, the colon polyp miss rate remains high despite the development of novel methods. Narrow-band imaging (NBI) is one of the image enhancement techniques used to boost polyp detection and characterization; it uses special filters to enhance the contrast of the mucosal surface and the vascular pattern of the polyp. However, the single-button-activated system is not convenient for full-time colonoscopy operation. We selected three methods to simulate the NBI system: Color Transfer with Mean Shift (CTMS), Multi-scale Retinex with Color Restoration (MSRCR), and Gamma and Sigmoid Conversions (GSC). The results show that classification accuracy using the original images is the lowest, and all color transfer methods outperform the original-image approach. Our results verify that color transfer has a positive impact on the polyp identification and classification task; combined analysis of mAP and accuracy shows an excellent performance of the MSRCR method.
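Of the three simulated enhancements, the gamma-and-sigmoid conversion (GSC) is the simplest to sketch: a gamma curve followed by a sigmoid contrast stretch on a normalized image. The `gamma`, `gain`, and `cutoff` values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gamma_sigmoid(img, gamma=0.8, gain=10.0, cutoff=0.5):
    """Gamma correction followed by a sigmoid contrast stretch on a
    [0, 1] image -- a simple stand-in for NBI-style enhancement."""
    x = np.clip(img, 0.0, 1.0) ** gamma               # gamma correction
    return 1.0 / (1.0 + np.exp(gain * (cutoff - x)))  # sigmoid contrast
```

The gamma term lifts darker mucosal regions, and the sigmoid steepens contrast around the cutoff, mimicking how NBI makes vascular patterns stand out from the background.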
Affiliation(s)
- Chia-Pei Tang
- Division of Gastroenterology, Department of Internal Medicine, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi City 62224, Taiwan
- School of Medicine, Tzu Chi University, Hualien City 97004, Taiwan
- Chen-Hung Hsieh
- Department of Management Information System, National Chiayi University, Chiayi City 600023, Taiwan
- Tu-Liang Lin
- Department of Management Information System, National Chiayi University, Chiayi City 600023, Taiwan
- Correspondence:
44
Qiu H, Ding S, Liu J, Wang L, Wang X. Applications of Artificial Intelligence in Screening, Diagnosis, Treatment, and Prognosis of Colorectal Cancer. Curr Oncol 2022; 29:1773-1795. [PMID: 35323346 PMCID: PMC8947571 DOI: 10.3390/curroncol29030146] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Revised: 02/28/2022] [Accepted: 03/03/2022] [Indexed: 12/29/2022] Open
Abstract
Colorectal cancer (CRC) is one of the most common cancers worldwide. Accurate early detection and diagnosis, comprehensive assessment of treatment response, and precise prediction of prognosis are essential to improve patients' survival rates. In recent years, due to the explosion of clinical and omics data and groundbreaking research in machine learning, artificial intelligence (AI) has shown great application potential in the clinical field of CRC, providing new auxiliary approaches for clinicians to identify high-risk patients, select precise and personalized treatment plans, and predict prognoses. This review comprehensively analyzes and summarizes the research progress and clinical application value of AI technologies in CRC screening, diagnosis, treatment, and prognosis, demonstrating the current status of AI in the main clinical stages. The limitations, challenges, and future perspectives in the clinical implementation of AI are also discussed.
Affiliation(s)
- Hang Qiu
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Correspondence: (H.Q.); (X.W.)
- Shuhan Ding
- School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA
- Jianbo Liu
- West China School of Medicine, Sichuan University, Chengdu 610041, China
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Liya Wang
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China
- Xiaodong Wang
- West China School of Medicine, Sichuan University, Chengdu 610041, China
- Department of Gastrointestinal Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Correspondence: (H.Q.); (X.W.)
45
Goel N, Kaur S, Gunjan D, Mahapatra SJ. Dilated CNN for abnormality detection in wireless capsule endoscopy images. Soft comput 2022. [DOI: 10.1007/s00500-021-06546-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
46
Wang W, Yang X, Li X, Tang J. Convolutional‐capsule network for gastrointestinal endoscopy image classification. INT J INTELL SYST 2022. [DOI: 10.1002/int.22815] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Affiliation(s)
- Wei Wang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
- Xin Yang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xin Li
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan, Hubei, China
- Jinhui Tang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, Jiangsu, China
47
Saeed T, Kiong Loo C, Shahreeza Safiruz Kassim M. Ensembles of Deep Learning Framework for Stomach Abnormalities Classification. COMPUTERS, MATERIALS & CONTINUA 2022; 70:4357-4372. [DOI: 10.32604/cmc.2022.019076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 06/18/2021] [Indexed: 09/01/2023]
48
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.100911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022] Open
49
Taghiakbari M, Mori Y, von Renteln D. Artificial intelligence-assisted colonoscopy: A review of current state of practice and research. World J Gastroenterol 2021; 27:8103-8122. [PMID: 35068857 PMCID: PMC8704267 DOI: 10.3748/wjg.v27.i47.8103] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 08/22/2021] [Accepted: 12/08/2021] [Indexed: 02/06/2023] Open
Abstract
Colonoscopy is an effective screening procedure in colorectal cancer prevention programs; however, colonoscopy practice can vary in terms of lesion detection, classification, and removal. Artificial intelligence (AI)-assisted decision support systems for endoscopy are an area of rapid research and development. These systems promise improved detection, classification, screening, and surveillance for colorectal polyps and cancer. Several recently developed applications for AI-assisted colonoscopy have shown promising results for the detection and classification of colorectal polyps and adenomas. However, their value for real-time application in clinical practice has yet to be determined owing to limitations in the design, validation, and testing of AI models under real-life clinical conditions. Despite these limitations, ambitious attempts to expand the technology by developing more complex systems, capable of assisting and supporting the endoscopist throughout the entire colonoscopy examination including polypectomy procedures, are at the concept stage. Further work is required to address the barriers and challenges of AI integration into broader colonoscopy practice, to navigate the approval process of regulatory organizations and societies, and to support physicians and patients in accepting the technology by providing strong evidence of its accuracy and safety. This article takes a closer look at the current state of AI integration into the field of colonoscopy and offers suggestions for future research.
Affiliation(s)
- Mahsa Taghiakbari
- Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
- Yuichi Mori
- Clinical Effectiveness Research Group, University of Oslo, Oslo 0450, Norway
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama 224-8503, Japan
- Daniel von Renteln
- Department of Gastroenterology, CRCHUM, Montreal H2X 0A9, Quebec, Canada
50
Wang W, Mohseni P, Kilgore KL, Najafizadeh L. Cuff-less Blood Pressure Estimation from Photoplethysmography via Visibility Graph and Transfer Learning. IEEE J Biomed Health Inform 2021; 26:2075-2085. [PMID: 34784289 DOI: 10.1109/jbhi.2021.3128383] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper presents a new solution that enables the use of transfer learning for cuff-less blood pressure (BP) monitoring from short segments of the photoplethysmogram (PPG). The proposed method estimates BP with a low computational budget by 1) creating images from PPG segments via the visibility graph (VG), which preserves the temporal information of the PPG waveform, 2) using a pre-trained deep convolutional neural network (CNN) to extract feature vectors from the VG images, and 3) solving for the weights and bias between the feature vectors and the reference BPs with ridge regression. Using the University of California Irvine (UCI) database consisting of 348 records, the proposed method achieves a best error performance of 0.00 ± 8.46 mmHg for systolic blood pressure (SBP) and -0.04 ± 5.36 mmHg for diastolic blood pressure (DBP), in terms of the mean error (ME) and standard deviation (SD) of error, ranking grade B for SBP and grade A for DBP under the British Hypertension Society (BHS) protocol. Our novel data-driven method offers a computationally efficient end-to-end solution for rapid and user-friendly cuff-less PPG-based BP estimation.
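Step 1, the natural visibility graph, has a precise definition: two samples are connected when the straight line between them passes strictly above every intermediate sample. The brute-force plain-Python sketch below is illustrative only; practical pipelines use faster divide-and-conquer constructions.

```python
def visibility_edges(ts):
    """Natural visibility graph of a 1-D time series.

    Samples a and b are connected iff every sample c between them lies
    strictly below the straight line joining (a, ts[a]) and (b, ts[b]).
    Returns the edge set as (a, b) index pairs with a < b.
    """
    n = len(ts)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            visible = all(
                ts[c] < ts[a] + (ts[b] - ts[a]) * (c - a) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                edges.add((a, b))
    return edges
```

Rendering this adjacency structure as an image is what lets a pre-trained image CNN consume the PPG waveform while keeping its temporal geometry: peaks become hubs that "see" many samples, while points in deep valleys connect only locally.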