1
Qian YF, Guo WL. Development and validation of a deep learning algorithm for prediction of pediatric recurrent intussusception in ultrasound images and radiographs. BMC Med Imaging 2025;25:67. PMID: 40033220. DOI: 10.1186/s12880-025-01582-8.
Abstract
PURPOSE To develop a predictive model for recurrent intussusception based on abdominal ultrasound (US) images and abdominal radiographs.
METHODS A total of 3665 cases of intussusception were retrospectively collected from January 2017 to December 2022 and randomly assigned to training and validation sets at a 6:4 ratio. Two types of images were processed: abdominal grayscale US images and abdominal radiographs. These images served as inputs to the deep learning algorithm and were processed separately by five detection models for training, with each model predicting its respective categories and probabilities. The best-performing model for each modality was then selected for decision fusion to obtain the final predicted categories and their probabilities.
RESULTS With US, the VGG11 model performed best, achieving an area under the receiver operating characteristic curve (AUC) of 0.669 (95% CI: 0.635-0.702). With radiographs, the ResNet18 model performed best, with an AUC of 0.809 (95% CI: 0.776-0.841). Two fusion methods were then employed. In the averaging fusion method, a soft-voting scheme averaged the probabilities predicted by the two models, yielding an AUC of 0.877 (95% CI: 0.846-0.908). In the stacking fusion method, a meta-model was built on the predictions of the two optimal models; this notably enhanced overall predictive performance, with LightGBM emerging as the top performer at an AUC of 0.897 (95% CI: 0.869-0.925). Both fusion methods demonstrated excellent performance.
CONCLUSIONS Deep learning algorithms developed using multimodal medical imaging may help predict recurrent intussusception.
CLINICAL TRIAL NUMBER Not applicable.
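The averaging (soft-voting) fusion described in this abstract can be sketched in a few lines. The probabilities below are invented placeholders for illustration, not outputs of the study's VGG11/ResNet18 models.

```python
# Soft-voting (averaging) fusion of two models' predicted probabilities of
# recurrence for four hypothetical cases; all numbers are made up.
p_us = [0.62, 0.30, 0.81, 0.45]  # ultrasound branch
p_xr = [0.70, 0.20, 0.90, 0.55]  # radiograph branch

# Average the per-case probabilities, then threshold at 0.5.
p_fused = [round((a + b) / 2, 3) for a, b in zip(p_us, p_xr)]
labels = [int(p > 0.5) for p in p_fused]

print(labels)  # [1, 0, 1, 0]
```

The stacking variant replaces the fixed average with a trained meta-model (LightGBM in the paper) that takes the two branch probabilities as input features.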
Affiliation(s)
- Yu-Feng Qian
- Department of Radiology, Children's Hospital of Soochow University, Suzhou, China
- Wan-Liang Guo
- Department of Radiology, Children's Hospital of Soochow University, Suzhou, China.
2
Nie Z, Xu M, Wang Z, Lu X, Song W. A Review of Application of Deep Learning in Endoscopic Image Processing. J Imaging 2024;10:275. PMID: 39590739. PMCID: PMC11595772. DOI: 10.3390/jimaging10110275.
Abstract
Deep learning, particularly convolutional neural networks (CNNs), has revolutionized endoscopic image processing, significantly enhancing the efficiency and accuracy of disease diagnosis through its exceptional ability to extract features and classify complex patterns. This technology automates medical image analysis, alleviating the workload of physicians and enabling a more focused and personalized approach to patient care. However, despite these remarkable achievements, there are still opportunities to further optimize deep learning models for endoscopic image analysis, including addressing limitations such as the requirement for large annotated datasets and the challenge of achieving higher diagnostic precision, particularly for rare or subtle pathologies. This review comprehensively examines the profound impact of deep learning on endoscopic image processing, highlighting its current strengths and limitations. It also explores potential future directions for research and development, outlining strategies to overcome existing challenges and facilitate the integration of deep learning into clinical practice. Ultimately, the goal is to contribute to the ongoing advancement of medical imaging technologies, leading to more accurate, personalized, and optimized medical care for patients.
Affiliation(s)
- Zihan Nie, Muhao Xu, Zhiyong Wang, Xiaoqi Lu, Weiye Song
- School of Mechanical Engineering, Shandong University, Jinan 250061, China
- Key Laboratory of High Efficiency and Clean Mechanical Manufacture of Ministry of Education, Shandong University, Jinan 250061, China
3
Kim JE, Choi YH, Lee YC, Seong G, Song JH, Kim TJ, Kim ER, Hong SN, Chang DK, Kim YH, Shin SY. Deep learning model for distinguishing Mayo endoscopic subscore 0 and 1 in patients with ulcerative colitis. Sci Rep 2023;13:11351. PMID: 37443370. PMCID: PMC10344868. DOI: 10.1038/s41598-023-38206-6.
Abstract
The aim of this study was to address the issue of differentiating between Mayo endoscopic subscore (MES) 0 and MES 1 using a deep learning model. A dataset of 492 ulcerative colitis (UC) patients who demonstrated MES improvement between January 2018 and December 2019 at Samsung Medical Center was utilized. Specifically, two representative images of the colon and rectum were selected from each patient, resulting in a total of 984 images for analysis. The deep learning model utilized in this study consisted of a convolutional neural network (CNN)-based encoder, with two auxiliary classifiers for the colon and rectum, as well as a final MES classifier that combined image features from both inputs. In the internal test, the model achieved an F1-score of 0.92, surpassing the performance of seven novice classifiers by an average margin of 0.11, and outperforming their consensus by 0.02. The area under the receiver operating characteristic curve (AUROC) was calculated to be 0.97 when considering MES 1 as positive, with an area under the precision-recall curve (AUPRC) of 0.98. In the external test using the Hyperkvasir dataset, the model achieved an F1-score of 0.89, AUROC of 0.86, and AUPRC of 0.97. The results demonstrate that the proposed CNN-based model, which integrates image features from both the colon and rectum, exhibits superior performance in accurately discriminating between MES 0 and MES 1 in patients with UC.
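The two-input design described here (one encoder per site, auxiliary colon and rectum classifiers, and a final head over the combined features) can be sketched with plain Python. The tiny feature vectors and shared weights below are invented placeholders, not the paper's trained model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical 4-dim encoder outputs for one patient's colon and rectum
# images; a real CNN encoder emits far larger feature vectors.
f_colon = [0.2, -0.1, 0.5, 0.3]
f_rectum = [0.4, 0.0, -0.2, 0.6]
w_aux = [1.0, 0.5, -0.5, 1.0]  # placeholder weights, shared for brevity

# Auxiliary classifiers score each site on its own features ...
p_colon = sigmoid(dot(f_colon, w_aux))
p_rectum = sigmoid(dot(f_rectum, w_aux))

# ... while the final MES head scores the concatenated features.
p_final = sigmoid(dot(f_colon + f_rectum, w_aux + w_aux))
```

The auxiliary heads give each site its own training signal, while the final head sees both sites at once, which is what lets the model weigh colon and rectum evidence jointly.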
Affiliation(s)
- Ji Eun Kim
- Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-gu, Seoul, 06351, South Korea
- Yoon Ho Choi
- Department of Artificial Intelligence and Informatics Research, Mayo Clinic, Jacksonville, FL, USA
- Department of Digital Health, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, 81 Irwon-Ro, Gangnam-gu, Seoul, 06351, South Korea
- Yeong Chan Lee
- Research Institute for Future Medicine, Samsung Medical Center, Seoul, South Korea
- Gyeol Seong
- Department of Medicine, Nowon Eulji Medical Center, Eulji University, Seoul, South Korea
- Joo Hye Song, Tae Jun Kim, Eun Ran Kim, Sung Noh Hong, Dong Kyung Chang, Young-Ho Kim
- Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-gu, Seoul, 06351, South Korea
- Soo-Yong Shin
- Department of Digital Health, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, 81 Irwon-Ro, Gangnam-gu, Seoul, 06351, South Korea
4
Bakasa W, Viriri S. VGG16 Feature Extractor with Extreme Gradient Boost Classifier for Pancreas Cancer Prediction. J Imaging 2023;9:138. PMID: 37504815. PMCID: PMC10381878. DOI: 10.3390/jimaging9070138.
Abstract
The prognosis of patients with pancreatic ductal adenocarcinoma (PDAC) is greatly improved by an early and accurate diagnosis. Several studies have created automated methods to forecast PDAC development using various medical imaging modalities; these papers give a general overview of the classification, segmentation, or grading of many cancer types, including pancreatic cancer, using conventional machine learning techniques and hand-engineered features. This study instead uses deep learning to identify PDAC on computed tomography (CT) images, proposing the hybrid model VGG16-XGBoost (a VGG16-backbone feature extractor with an Extreme Gradient Boosting classifier) for PDAC images. Experiments show that the proposed hybrid model performs better, obtaining an accuracy of 0.97 and a weighted F1 score of 0.97 on the dataset under study. The experimental validation of the VGG16-XGBoost model uses The Cancer Imaging Archive (TCIA) public-access dataset of pancreas CT images. The results of this study can be extremely helpful for PDAC diagnosis from CT pancreas images, categorising them into the five tumour (T), node (N), and metastasis (M) (TNM) staging class labels T0, T1, T2, T3, and T4.
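The hybrid pattern here, a frozen backbone producing features that a separately trained booster consumes, can be sketched with stand-ins. The fixed random projection below stands in for the pretrained VGG16 and the trivial rule stands in for XGBoost; both are illustrative assumptions about the shape of the pipeline, not the paper's implementation.

```python
import random

random.seed(0)

# Stand-in for a frozen VGG16 backbone: a fixed random projection from a
# flattened "image" to a short feature vector. A real pipeline would run a
# pretrained VGG16 here and train an XGBoost model on its features.
DIM_IN, DIM_FEAT = 16, 4
proj = [[random.uniform(-1, 1) for _ in range(DIM_IN)]
        for _ in range(DIM_FEAT)]

def extract_features(image):
    # One dot product per output feature; the projection never changes,
    # mirroring a frozen (non-fine-tuned) feature extractor.
    return [sum(w * x for w, x in zip(row, image)) for row in proj]

# Stand-in for the boosted classifier: any model trained on the frozen
# features; here a trivial threshold rule, for shape only.
def classify(feats):
    return int(feats[0] > 0.0)

image = [random.random() for _ in range(DIM_IN)]
feats = extract_features(image)
label = classify(feats)
```

The design choice being illustrated is the decoupling: the extractor is fixed once, so the classifier can be retrained cheaply on cached features.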
5
Du J, Tao C, Xue S, Zhang Z. Joint Diagnostic Method of Tumor Tissue Based on Hyperspectral Spectral-Spatial Transfer Features. Diagnostics (Basel) 2023;13:2002. PMID: 37370897. DOI: 10.3390/diagnostics13122002.
Abstract
To improve the clinical application of hyperspectral technology in the pathological diagnosis of tumor tissue, a joint diagnostic method based on spectral-spatial transfer features was established by simulating the actual clinical diagnostic process and combining micro-hyperspectral imaging with large-scale pathological data. In view of the limited sample volume of medical hyperspectral data, a multi-data transfer model pre-trained on conventional pathology datasets was applied to the classification of micro-hyperspectral images, to explore the differences in spectral-spatial transfer features between tumor and normal tissues over the 410-900 nm wavelength range. The experimental results show that the spectral-spatial transfer convolutional neural network (SST-CNN) achieved a classification accuracy of 95.46% on the gastric cancer dataset and 95.89% on the thyroid cancer dataset, outperforming models trained on single conventional digital pathology or single hyperspectral data. The joint diagnostic method based on SST-CNN can complete the interpretation of a section of data in 3 min, providing a new technical solution for rapid pathological diagnosis. This study also explored the correlation between tumor tissues and typical spectral-spatial features, as well as the efficient transformation of conventional pathological and transfer spectral-spatial features, strengthening the theoretical basis for hyperspectral pathological diagnosis.
Affiliation(s)
- Jian Du, Chenglong Tao, Shuang Xue, Zhoufeng Zhang
- Key Laboratory of Spectral Imaging Technology CAS, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
- Xi'an Key Laboratory for Biomedical Spectroscopy, Xi'an 710119, China
6
Yengec-Tasdemir SB, Aydin Z, Akay E, Dogan S, Yilmaz B. Improved classification of colorectal polyps on histopathological images with ensemble learning and stain normalization. Comput Methods Programs Biomed 2023;232:107441. PMID: 36905748. DOI: 10.1016/j.cmpb.2023.107441.
Abstract
BACKGROUND AND OBJECTIVE Early detection of colon adenomatous polyps is critically important because it significantly reduces the risk of developing colon cancer in the future. The key challenge in detecting adenomatous polyps is differentiating them from visually similar non-adenomatous tissue, which currently depends solely on the experience of the pathologist. To assist pathologists, the objective of this work is to provide a novel non-knowledge-based clinical decision support system (CDSS) for improved detection of adenomatous polyps on colon histopathology images.
METHODS The domain-shift problem arises when training and test data come from different distributions, with diverse settings and unequal color levels. This problem, which can be tackled by stain normalization techniques, prevents machine learning models from attaining higher classification accuracies. The proposed method integrates stain normalization with an ensemble of ConvNeXts, competitively accurate, scalable, and robust CNN variants. The improvement is empirically analyzed for five widely employed stain normalization techniques, and classification performance is evaluated on three datasets comprising more than 10k colon histopathology images.
RESULTS Comprehensive experiments demonstrate that the proposed method outperforms state-of-the-art deep convolutional neural network-based models, attaining 95% classification accuracy on the curated dataset, and 91.1% and 90% on the EBHI and UniToPatho public datasets, respectively.
CONCLUSIONS These results show that the proposed method can accurately classify colon adenomatous polyps on histopathology images and retains remarkable performance even on datasets from different distributions, indicating notable generalization ability.
Affiliation(s)
- Sena Busra Yengec-Tasdemir
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT39DT, United Kingdom; Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
- Zafer Aydin
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
- Ebru Akay
- Pathology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Serkan Dogan
- Gastroenterology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Bulent Yilmaz
- Department of Electrical Engineering, Gulf University for Science and Technology, Mishref, 40005, Kuwait; Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
7
Cao R, Tang L, Fang M, Zhong L, Wang S, Gong L, Li J, Dong D, Tian J. Artificial intelligence in gastric cancer: applications and challenges. Gastroenterol Rep (Oxf) 2022;10:goac064. PMID: 36457374. PMCID: PMC9707405. DOI: 10.1093/gastro/goac064.
Abstract
Gastric cancer (GC) is one of the most common malignant tumors with high mortality. Accurate diagnosis and treatment decisions for GC rely heavily on human experts' careful judgments on medical images. However, the improvement of the accuracy is hindered by imaging conditions, limited experience, objective criteria, and inter-observer discrepancies. Recently, the developments of machine learning, especially deep-learning algorithms, have been facilitating computers to extract more information from data automatically. Researchers are exploring the far-reaching applications of artificial intelligence (AI) in various clinical practices, including GC. Herein, we aim to provide a broad framework to summarize current research on AI in GC. In the screening of GC, AI can identify precancerous diseases and assist in early cancer detection with endoscopic examination and pathological confirmation. In the diagnosis of GC, AI can support tumor-node-metastasis (TNM) staging and subtype classification. For treatment decisions, AI can help with surgical margin determination and prognosis prediction. Meanwhile, current approaches are challenged by data scarcity and poor interpretability. To tackle these problems, more regulated data, unified processing procedures, and advanced algorithms are urgently needed to build more accurate and robust AI models for GC.
Affiliation(s)
- Mengjie Fang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, P. R. China
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, P. R. China
- Lianzhen Zhong, Siwen Wang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, P. R. China
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, P. R. China
- Lixin Gong
- College of Medicine and Biological Information Engineering School, Northeastern University, Shenyang, Liaoning, P. R. China
- Jiazheng Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Radiology Department, Peking University Cancer Hospital & Institute, Beijing, P. R. China
- Di Dong, Jie Tian
- Corresponding authors. Di Dong, CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Beijing 100190, P. R. China. Tel: +86-13811833760. Jie Tian, Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, P. R. China. Tel: +86-10-82618465.
8
Yoo BS, Houston KV, D'Souza SM, Elmahdi A, Davis I, Vilela A, Parekh PJ, Johnson DA. Advances and horizons for artificial intelligence of endoscopic screening and surveillance of gastric and esophageal disease. Artif Intell Med Imaging 2022;3:70-86. DOI: 10.35711/aimi.v3.i3.70.
Affiliation(s)
- Byung Soo Yoo
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Kevin V Houston
- Department of Internal Medicine, Virginia Commonwealth University, Richmond, VA 23298, United States
- Steve M D'Souza, Alsiddig Elmahdi, Isaac Davis, Ana Vilela
- Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
- Parth J Parekh, David A Johnson
- Division of Gastroenterology, Department of Internal Medicine, Eastern Virginia Medical School, Norfolk, VA 23507, United States
9
Fu XY, Mao XL, Chen YH, You NN, Song YQ, Zhang LH, Cai Y, Ye XN, Ye LP, Li SW. The Feasibility of Applying Artificial Intelligence to Gastrointestinal Endoscopy to Improve the Detection Rate of Early Gastric Cancer Screening. Front Med (Lausanne) 2022;9:886853. PMID: 35652070. PMCID: PMC9150174. DOI: 10.3389/fmed.2022.886853.
Abstract
Convolutional neural networks, a core artificial intelligence technique, show great potential in image recognition and can assist endoscopy in improving the detection rate of early gastric cancer. The 5-year survival rate for advanced gastric cancer is less than 30%, whereas that for early gastric cancer is more than 90%; earlier screening for gastric cancer therefore leads to a better prognosis. However, the detection rate of early gastric cancer in China has been extremely low owing to many factors, such as gastric cancer without obvious symptoms, difficulty identifying lesions by the naked eye, and a lack of experience among endoscopists. The introduction of artificial intelligence can help mitigate these shortcomings and greatly improve screening accuracy. According to relevant reports, the sensitivity and accuracy of artificial intelligence trained on deep convolutional neural networks are better than those of endoscopists, and evaluations also take less time, which can greatly reduce the burden on endoscopists. In addition, artificial intelligence can perform real-time detection of, and feedback on, the endoscopist's inspection process to standardize the examination. AI has also shown great potential in training novice endoscopists. As the technology matures, AI has the ability to improve the detection rate of early gastric cancer in China and reduce gastric cancer-related mortality.
Affiliation(s)
- Xin-yu Fu
- Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Xin-li Mao
- Key Laboratory of Minimally Invasive Techniques and Rapid Rehabilitation of Digestive System Tumor of Zhejiang Province, Taizhou Hospital Affiliated to Wenzhou Medical University, Linhai, China
- Department of Gastroenterology, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Institute of Digestive Disease, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Ya-hong Chen
- Health Management Center, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Ning-ning You
- Department of Gastroenterology, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Ya-qi Song
- Taizhou Hospital, Zhejiang University, Linhai, China
- Li-hui Zhang
- Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, China
- Yue Cai
- Department of Gastroenterology, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Xing-nan Ye
- Taizhou Hospital of Zhejiang Province, Shaoxing University, Linhai, China
- Li-ping Ye
- Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Key Laboratory of Minimally Invasive Techniques and Rapid Rehabilitation of Digestive System Tumor of Zhejiang Province, Taizhou Hospital Affiliated to Wenzhou Medical University, Linhai, China
- Department of Gastroenterology, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Institute of Digestive Disease, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Shao-wei Li
- Key Laboratory of Minimally Invasive Techniques and Rapid Rehabilitation of Digestive System Tumor of Zhejiang Province, Taizhou Hospital Affiliated to Wenzhou Medical University, Linhai, China
- Department of Gastroenterology, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Institute of Digestive Disease, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
10
Chen H, Yang BW, Qian L, Meng YS, Bai XH, Hong XW, He X, Jiang MJ, Yuan F, Du QW, Feng WW. Deep Learning Prediction of Ovarian Malignancy at US Compared with O-RADS and Expert Assessment. Radiology 2022;304:106-113. PMID: 35412367. DOI: 10.1148/radiol.211367.
Abstract
Background Deep learning (DL) algorithms could improve the classification of ovarian tumors assessed with multimodal US.
Purpose To develop DL algorithms for the automated classification of benign versus malignant ovarian tumors assessed with US and to compare algorithm performance to Ovarian-Adnexal Reporting and Data System (O-RADS) and subjective expert assessment for malignancy.
Materials and Methods This retrospective study included consecutive women with ovarian tumors undergoing grayscale and color Doppler US from January 2019 to November 2019. Histopathologic analysis was the reference standard. The data set was divided into training (70%), validation (10%), and test (20%) sets. Algorithms modified from a residual network (ResNet) with two fusion strategies (feature fusion [hereafter, DLfeature] or decision fusion [hereafter, DLdecision]) were developed. DL prediction of malignancy was compared with O-RADS risk categorization and expert assessment by area under the receiver operating characteristic curve (AUC) analysis in the test set.
Results A total of 422 women (mean age, 46.4 years ± 14.8 [SD]) with 304 benign and 118 malignant tumors were included; there were 337 women in the training and validation data set and 85 women in the test data set. DLfeature had an AUC of 0.93 (95% CI: 0.85, 0.97) for classifying malignant from benign ovarian tumors, comparable with O-RADS (AUC, 0.92; 95% CI: 0.85, 0.97; P = .88) and expert assessment (AUC, 0.97; 95% CI: 0.91, 0.99; P = .07), and similar to DLdecision (AUC, 0.90; 95% CI: 0.82, 0.96; P = .29). DLdecision, DLfeature, O-RADS, and expert assessment achieved sensitivities of 92%, 92%, 92%, and 96%, respectively, and specificities of 80%, 85%, 89%, and 87%, respectively, for malignancy.
Conclusion Deep learning algorithms developed by using multimodal US images may distinguish malignant from benign ovarian tumors with diagnostic performance comparable to expert subjective and Ovarian-Adnexal Reporting and Data System assessment.
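The two fusion strategies compared above differ in where the modality branches are combined. A minimal sketch, with invented toy features and weights standing in for the paper's modified ResNet branches:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy features from a grayscale-US branch and a color-Doppler branch for
# one tumor; dimensions and weight values are illustrative only.
f_gray = [0.8, -0.3, 0.1]
f_doppler = [0.2, 0.6, -0.4]
w = [1.0, 1.0, 0.5, 0.5, 1.0, 1.0]

# Feature fusion (DLfeature-style): concatenate the branch features and
# apply a single classification head to the combined vector.
p_feature = sigmoid(dot(f_gray + f_doppler, w))

# Decision fusion (DLdecision-style): classify each branch separately,
# then combine the branch-level probabilities (here by averaging).
p_gray = sigmoid(dot(f_gray, w[:3]))
p_doppler = sigmoid(dot(f_doppler, w[3:]))
p_decision = (p_gray + p_doppler) / 2
```

Feature fusion lets the classifier model interactions between modalities; decision fusion keeps the branches independent and only merges their final scores.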
Affiliation(s)
- Hui Chen, Bo-Wen Yang, Le Qian, Yi-Shuang Meng, Xiang-Hui Bai, Xiao-Wei Hong, Xin He, Mei-Jiao Jiang, Fei Yuan
- From the Department of Obstetrics and Gynecology (H.C., B.W.Y., L.Q., X.H., M.J.J., Q.W.D., W.W.F.) and Department of Pathology (F.Y.), Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin 2nd Road, Huangpu District, Shanghai 200025, China; and Philips Research Asia Shanghai, Shanghai, China (Y.S.M., X.H.B., X.W.H.)
| | - Qin-Wen Du
- From the Department of Obstetrics and Gynecology (H.C., B.W.Y., L.Q., X.H., M.J.J., Q.W.D., W.W.F.) and Department of Pathology (F.Y.), Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin 2nd Road, Huangpu District, Shanghai 200025, China; and Philips Research Asia Shanghai, Shanghai, China (Y.S.M., X.H.B., X.W.H.)
| | - Wei-Wei Feng
- From the Department of Obstetrics and Gynecology (H.C., B.W.Y., L.Q., X.H., M.J.J., Q.W.D., W.W.F.) and Department of Pathology (F.Y.), Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin 2nd Road, Huangpu District, Shanghai 200025, China; and Philips Research Asia Shanghai, Shanghai, China (Y.S.M., X.H.B., X.W.H.)
| |
Collapse
|
11
Abstract
Artificial intelligence (AI) is a fascinating new technology that incorporates machine learning and neural networks to improve existing technologies or create new ones. Potential applications of AI are introduced to aid in the fight against colorectal cancer (CRC), including how AI will affect the epidemiology of CRC and new methods of mass information gathering such as GeoAI, digital epidemiology, and real-time information collection. Meanwhile, this review also examines how existing diagnostic tools, such as CT/MRI, endoscopy, genetic testing, and pathological assessment, have benefited greatly from the implementation of deep learning. Finally, how treatment and treatment approaches to CRC can be enhanced by AI is discussed. The power of AI for therapeutic recommendation in colorectal cancer shows much promise in the clinical and translational fields of oncology, which means better and more personalized treatment for those in need.
Affiliation(s)
- Chaoran Yu
- Department of General Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, People's Republic of China
- Ernest Johann Helwig
- Tongji Medical College of Huazhong University of Science and Technology, Wuhan, 430030, People's Republic of China
12
Qian Y. Exploration of machine algorithms based on deep learning model and feature extraction. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:7602-7618. [PMID: 34814265 DOI: 10.3934/mbe.2021376] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The study aims to solve the problems of insufficient labeling, high input dimensionality, and inconsistent task input distribution in traditional lifelong machine learning. A new deep learning model is proposed by combining feature representation with a deep learning algorithm. First, building on the theoretical basis of deep learning models and feature extraction, the study analyzes several representative machine learning algorithms and compares the performance of the optimized deep learning model with other algorithms in a practical application. In explaining the machine learning system, the study introduces two typical lifelong learning algorithms: ELLA (efficient lifelong learning algorithm) and HLLA (hierarchical lifelong learning algorithm). Second, the flow of the genetic algorithm is described and combined with mutual-information feature extraction to form the composite HLLA algorithm. Finally, the deep learning model is optimized and a deep learning model based on the HLLA algorithm is constructed. When K = 1200, the classification error rate reaches 0.63%, reflecting the excellent performance of the unsupervised database algorithm based on this model. Adding the feature model to the updating iteration of lifelong learning deepens the knowledge-base capacity of lifelong machine learning, which is of great value for reducing the number of labels required for subsequent model learning and improving the efficiency of lifelong learning.
Affiliation(s)
- Yufeng Qian
- School of Science, Hubei University of Technology, Wuhan 430068, China
13
Transfer Learning Approach for Classification of Histopathology Whole Slide Images. SENSORS 2021; 21:s21165361. [PMID: 34450802 PMCID: PMC8401188 DOI: 10.3390/s21165361] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Revised: 08/06/2021] [Accepted: 08/07/2021] [Indexed: 02/07/2023]
Abstract
The classification of whole slide images (WSIs) provides physicians with an accurate analysis of diseases and also helps them to treat patients effectively. The classification can be linked to further detailed analysis and diagnosis. Deep learning (DL) has made significant advances in the medical industry, including the use of magnetic resonance imaging (MRI) scans, computerized tomography (CT) scans, and electrocardiograms (ECGs) to detect life-threatening diseases, including heart disease, cancer, and brain tumors. However, progress in pathology has been slower, the main hurdle being the shortage of large labeled datasets of histopathology images for training models. The Kimia Path24 dataset was created specifically for the classification and retrieval of histopathology images; it contains 23,916 histopathology patches spanning 24 tissue-texture classes. A transfer-learning-based framework is proposed and evaluated on two well-known DL models, Inception-V3 and VGG-16. To improve their performance, their pre-trained weights are concatenated with an image vector, which is used as input for training the same architectures. Experiments show that the proposed approach improves the accuracy of both models: the patch-to-scan accuracy of VGG-16 improves from 0.65 to 0.77, and that of Inception-V3 from 0.74 to 0.79.
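The concatenation idea, combining a feature vector from a frozen pre-trained backbone with a raw image vector as the input to a new classification head, can be sketched as follows. The dimensions, helper names, and toy values are illustrative assumptions, not the authors' implementation:

```python
def flatten_image(image):
    """Flatten a 2-D patch (list of rows) into a 1-D image vector."""
    return [pixel for row in image for pixel in row]

def build_input(pretrained_features, image):
    """Concatenate frozen-backbone features with the raw image vector,
    forming the combined input on which the classifier is trained."""
    return list(pretrained_features) + flatten_image(image)

# Hypothetical 4-D feature vector from a frozen Inception-V3/VGG-16 backbone
features = [0.12, 0.98, 0.33, 0.57]
patch = [[0, 1], [1, 0]]  # toy 2x2 histopathology patch

combined = build_input(features, patch)  # 8 elements: 4 features + 4 pixels
```

The design intuition is that the pre-trained weights carry generic visual knowledge from a large source dataset, while the raw image vector preserves details specific to the small target dataset.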
14
Li Y, Zhou D, Liu TT, Shen XZ. Application of deep learning in image recognition and diagnosis of gastric cancer. Artif Intell Gastrointest Endosc 2021; 2:12-24. [DOI: 10.37126/aige.v2.i2.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
In recent years, artificial intelligence has been extensively applied to the diagnosis of gastric cancer based on medical imaging; in particular, deep learning, one of the mainstream approaches in image processing, has made remarkable progress. In this paper, we provide a comprehensive literature survey using four electronic databases: PubMed, EMBASE, Web of Science, and Cochrane, with the search performed up to November 2020. This article summarizes existing image-recognition algorithms, reviews the datasets available for gastric cancer diagnosis, covers the theory of deep learning for endoscopic image recognition, and outlines current trends in applying deep learning to image recognition of gastric cancer. We further evaluate the advantages and disadvantages of current algorithms, summarize the characteristics of existing image datasets, and, in combination with the latest progress in deep learning theory, propose suggestions for applying optimization algorithms. Based on existing research and applications, the labeling, quantity, size, resolution, and other aspects of image datasets are also discussed. Future developments in this field are analyzed from two perspectives, algorithm optimization and data support, with the aim of improving diagnostic accuracy and reducing the risk of misdiagnosis.
Affiliation(s)
- Yu Li, Da Zhou, Tao-Tao Liu, Xi-Zhong Shen
- Department of Gastroenterology and Hepatology, Zhongshan Hospital Affiliated to Fudan University, Shanghai 200032, China
15
Artificial Intelligence in Colorectal Cancer Diagnosis Using Clinical Data: Non-Invasive Approach. Diagnostics (Basel) 2021; 11:diagnostics11030514. [PMID: 33799452 PMCID: PMC8001232 DOI: 10.3390/diagnostics11030514] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Revised: 03/10/2021] [Accepted: 03/11/2021] [Indexed: 02/06/2023] Open
Abstract
Colorectal cancer is the third most common and second most lethal tumor globally, causing 900,000 deaths annually. In this research, a computer-aided diagnosis system was designed to detect colorectal cancer using an innovative dataset composed of both numeric data (blood and urine analyses) and qualitative data (the patient's living environment, tumor position, T, N, M and Dukes classification, associated pathology, technical approach, complications, incidents, and ultrasonographic dimensions and localization). The intelligent computer-aided colorectal cancer diagnosis system was designed using different machine learning techniques, such as classification and shallow and deep neural networks. The maximum accuracy obtained from solving the binary classification problem with traditional machine learning algorithms was 77.8%. However, the regression problem solved with deep neural networks yielded significantly better performance in terms of mean-squared-error minimization, reaching a value of 0.0000529.
16
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058 PMCID: PMC7959662 DOI: 10.7717/peerj-cs.423] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/11/2021] [Indexed: 05/04/2023]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is quite expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower the cost of the examination process and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features. Most related work based on DL approaches has extracted spatial features only; in the following phase of Gastro-CADx, however, the features extracted in the first stage are applied to the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which are used to extract temporal-frequency and spatial-frequency features. Additionally, a feature-reduction procedure is performed in this stage. Finally, in the third stage of Gastro-CADx, several combinations of features are fused by concatenation to inspect the effect of feature combination on the output of the CADx and to select the best-fused feature set. Two datasets, referred to as Dataset I and Dataset II, are utilized to evaluate the performance of Gastro-CADx. Results indicated that Gastro-CADx achieved accuracies of 97.3% and 99.7% for Dataset I and Dataset II, respectively. The results were compared with recent related works; the comparison showed that the proposed approach is capable of classifying GI diseases with higher accuracy than other work. Thus, it can be used to reduce medical complications, death rates, and the cost of treatment. It can also help gastroenterologists produce more accurate diagnoses while lowering inspection time.
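The second-stage idea, transforming extracted CNN features with the DCT and keeping a reduced set of coefficients, can be sketched with a naive 1-D DCT-II. This is a simplified assumption about the pipeline (real implementations typically use an optimized FFT-based DCT), not the authors' exact code:

```python
import math

def dct2_1d(x):
    """Naive 1-D DCT-II (O(n^2); fine for a small sketch)."""
    n = len(x)
    return [
        sum(x[j] * math.cos(math.pi * k * (2 * j + 1) / (2 * n)) for j in range(n))
        for k in range(n)
    ]

def reduce_features(cnn_features, keep=4):
    """Feature reduction: keep only the first `keep` low-frequency DCT coefficients."""
    return dct2_1d(cnn_features)[:keep]

# A constant feature vector concentrates all energy in the DC (k = 0) coefficient:
coeffs = dct2_1d([1.0, 1.0, 1.0, 1.0])
# coeffs[0] is 4.0; the higher-frequency coefficients are ~0
```

Because the DCT compacts most of a smooth signal's energy into its low-frequency coefficients, truncating the coefficient list serves as a simple feature-reduction step before fusion.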
Affiliation(s)
- Omneya Attallah, Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt