1. Jhang JY, Tsai YC, Hsu TC, Huang CR, Cheng HC, Sheu BS. Gastric Section Correlation Network for Gastric Precancerous Lesion Diagnosis. IEEE Open J Eng Med Biol 2023; 5:434-442. [PMID: 38899022; PMCID: PMC11186652; DOI: 10.1109/ojemb.2023.3277219]
Abstract
Goal: Diagnosis of the corpus-predominant gastritis index (CGI), an early precancerous lesion of the stomach, has been shown to be effective in identifying patients at high risk of gastric cancer for preventive healthcare. However, CGI diagnosis requires invasive biopsies and time-consuming pathological analysis. Methods: We propose a novel gastric section correlation network (GSCNet) for CGI diagnosis from endoscopic images of the three dominant gastric sections: the antrum, body, and cardia. The proposed network consists of two dominant modules, the scaling feature fusion module and the section correlation module. The former extracts scaling fusion features that can effectively represent the mucosa under varying viewing angles and scale changes for each gastric section. The latter applies medical prior knowledge through three section correlation losses to model the correlations among different gastric sections for CGI diagnosis. Results: The proposed method outperforms competing deep learning methods, achieving high testing accuracy, sensitivity, and specificity of 0.957, 0.938, and 0.962, respectively. Conclusions: The proposed method is the first to identify high gastric cancer risk patients with CGI from endoscopic images without invasive biopsies and time-consuming pathological analysis.
Affiliation(s)
- Jyun-Yao Jhang
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan
- Yu-Ching Tsai
- Department of Internal Medicine, Tainan Hospital, Ministry of Health and Welfare, Tainan 701, Taiwan
- Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701, Taiwan
- Tzu-Chun Hsu
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan
- Chun-Rong Huang
- Cross College Elite Program, and Academy of Innovative Semiconductor and Sustainable Manufacturing, National Cheng Kung University, Tainan 701, Taiwan
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan
- Hsiu-Chi Cheng
- Department of Internal Medicine, Institute of Clinical Medicine and Molecular Medicine, National Cheng Kung University, Tainan 701, Taiwan
- Department of Internal Medicine, Tainan Hospital, Ministry of Health and Welfare, Tainan 701, Taiwan
- Bor-Shyang Sheu
- Institute of Clinical Medicine and Department of Internal Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan 701, Taiwan
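The abstract above does not give the form of the three section correlation losses; the following PyTorch-style sketch shows one plausible reading, in which each gastric section has its own classifier and pairwise losses pull the predictions of correlated sections toward agreement. All module names, dimensions, and the MSE-on-softmax choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SectionCorrelationHead(nn.Module):
    """Illustrative head: per-section CGI logits plus three pairwise
    correlation losses between antrum, body, and cardia features."""

    def __init__(self, feat_dim: int, num_classes: int = 2):
        super().__init__()
        self.classifiers = nn.ModuleDict({
            s: nn.Linear(feat_dim, num_classes)
            for s in ("antrum", "body", "cardia")
        })

    def forward(self, feats: dict, label: torch.Tensor):
        # feats: {"antrum": (B, D), "body": (B, D), "cardia": (B, D)}
        cls_loss = 0.0
        logits = {}
        for name, f in feats.items():
            logits[name] = self.classifiers[name](f)
            cls_loss = cls_loss + F.cross_entropy(logits[name], label)

        # Three section correlation losses (one per section pair):
        # encourage the softmax outputs of correlated sections to agree.
        pairs = [("antrum", "body"), ("body", "cardia"), ("antrum", "cardia")]
        corr_loss = sum(
            F.mse_loss(logits[a].softmax(dim=1), logits[b].softmax(dim=1))
            for a, b in pairs
        )
        return cls_loss + corr_loss, logits
```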
2. Lin TH, Jhang JY, Huang CR, Tsai YC, Cheng HC, Sheu BS. Deep Ensemble Feature Network for Gastric Section Classification. IEEE J Biomed Health Inform 2021; 25:77-87. [PMID: 32750926; DOI: 10.1109/jbhi.2020.2999731]
Abstract
In this paper, we propose a novel deep ensemble feature (DEF) network to classify gastric sections from endoscopic images. Unlike recent deep ensemble learning methods, which must train deep features and classifiers individually to obtain fused classification results, the proposed method simultaneously learns the deep ensemble feature from an arbitrary number of convolutional neural networks (CNNs) and the decision classifier in an end-to-end trainable manner. It comprises two subnetworks: the ensemble feature network, which learns the deep ensemble feature from multiple CNNs to represent endoscopic images, and the decision network, which uses the deep ensemble feature to obtain the classification labels. Both subnetworks are optimized with the proposed ensemble feature loss and decision loss, which guide the learning of deep features and decisions. As shown in the experimental results, the proposed method outperforms state-of-the-art deep learning, ensemble learning, and deep ensemble learning methods.
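As a rough illustration of this end-to-end ensemble idea, the sketch below concatenates features from two torchvision backbones and trains a decision head jointly; the per-backbone auxiliary losses stand in for the paper's ensemble feature loss, which is an assumption rather than the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class DeepEnsembleFeatureNet(nn.Module):
    """Jointly learns a concatenated ensemble feature from several CNNs
    and a decision classifier, trained end to end."""

    def __init__(self, num_classes: int):
        super().__init__()
        r18 = models.resnet18(weights=None)
        d121 = models.densenet121(weights=None)
        r18.fc = nn.Identity()           # exposes the 512-d feature
        d121.classifier = nn.Identity()  # exposes the 1024-d feature
        self.backbones = nn.ModuleList([r18, d121])
        self.aux_heads = nn.ModuleList([
            nn.Linear(512, num_classes), nn.Linear(1024, num_classes)
        ])
        self.decision = nn.Linear(512 + 1024, num_classes)

    def forward(self, x, label=None):
        feats = [b(x) for b in self.backbones]
        logits = self.decision(torch.cat(feats, dim=1))
        if label is None:
            return logits
        # Decision loss on the fused feature, plus per-backbone auxiliary
        # losses standing in for the ensemble feature loss.
        loss = F.cross_entropy(logits, label) + sum(
            F.cross_entropy(h(f), label)
            for h, f in zip(self.aux_heads, feats)
        )
        return logits, loss
```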
3. Han Z, Wei B, Hong Y, Li T, Cong J, Zhu X, Wei H, Zhang W. Accurate Screening of COVID-19 Using Attention-Based Deep 3D Multiple Instance Learning. IEEE Trans Med Imaging 2020; 39:2584-2594. [PMID: 32730211; DOI: 10.1109/tmi.2020.2996256]
Abstract
Automated screening of COVID-19 from chest CT was urgent and important during the worldwide SARS-CoV-2 outbreak in 2020. However, accurate screening of COVID-19 remains a massive challenge due to the spatial complexity of 3D volumes, the difficulty of labeling infection areas, and the slight discrepancy between COVID-19 and other viral pneumonia in chest CT. While a few pioneering works have made significant progress, they either demand manual annotation of infection areas or lack interpretability. In this paper, we report our attempt toward highly accurate and interpretable screening of COVID-19 from chest CT with weak labels. We propose attention-based deep 3D multiple instance learning (AD3D-MIL), in which a patient-level label is assigned to a 3D chest CT that is viewed as a bag of instances. AD3D-MIL can semantically generate deep 3D instances following the possible infection area. It further applies an attention-based pooling approach to the 3D instances to provide insight into each instance's contribution to the bag label, and it learns Bernoulli distributions of the bag-level labels for more accessible learning. We collected 460 chest CT examples: 230 from 79 patients with COVID-19, 100 from 100 patients with common pneumonia, and 130 from 130 people without pneumonia. A series of empirical studies shows that our algorithm achieves an overall accuracy of 97.9%, an AUC of 99.0%, and a Cohen kappa score of 95.7%. These advantages make our algorithm an efficient assistive tool in the screening of COVID-19.
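The attention-based pooling that AD3D-MIL applies to its 3D instances can be illustrated with a standard attention-MIL head (in the spirit of Ilse et al.); dimensions and names below are placeholders, not taken from the paper.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Pools a bag of instance embeddings into one bag embedding and
    exposes per-instance attention weights for interpretability."""

    def __init__(self, feat_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, 1)  # Bernoulli bag label

    def forward(self, instances: torch.Tensor):
        # instances: (num_instances, feat_dim), e.g. deep 3D instances
        # generated along the possible infection area of one CT bag.
        weights = torch.softmax(self.attention(instances), dim=0)  # (N, 1)
        bag_embedding = (weights * instances).sum(dim=0)           # (D,)
        prob = torch.sigmoid(self.classifier(bag_embedding))       # P(y=1)
        return prob, weights.squeeze(1)  # weights show instance contribution
```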
4. Li M, Wang H, Shang Z, Yang Z, Zhang Y, Wan H. Ependymoma and pilocytic astrocytoma: Differentiation using radiomics approach based on machine learning. J Clin Neurosci 2020; 78:175-180. [PMID: 32336636; DOI: 10.1016/j.jocn.2020.04.080]
Abstract
Demands for accurate and specific diagnosis pose increasing challenges for radiologists in the prediction and prognosis of pediatric posterior fossa tumors. With the development of high-performance computing and machine learning technologies, radiomics provides increasing opportunities for clinical decision-making. Several studies have applied radiomics as a decision-support tool for differentiating intracranial tumors. Here we seek to achieve preoperative differentiation between ependymoma (EP) and pilocytic astrocytoma (PA) using a radiomics analysis method based on machine learning. A total of 135 magnetic resonance imaging (MRI) slices are divided into training and validation sets. Three kinds of radiomics features, based on the Gabor transform, texture, and the wavelet transform, are used to obtain 300 multimodal features. The Kruskal-Wallis test score (KWT) and support vector machines (SVM) are applied for feature selection and tumor differentiation. Performance is investigated via accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) analysis. Results show that the accuracy, sensitivity, specificity, and AUC of the selected feature set are 0.8775, 0.9292, 0.8000, and 0.8646, respectively, with no significant differences compared with the overall feature set. Among the feature types, texture features yield the best differentiation performance, and the significance analysis results are consistent with this finding. Our study demonstrates that texture features perform better than the other features. The radiomics approach based on machine learning is efficient for differentiating pediatric posterior fossa tumors and could enhance the application of radiomics methods for assisted clinical diagnosis.
Affiliation(s)
- Mengmeng Li
- School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China; Industrial Technology Research Institute, Zhengzhou University, Zhengzhou 450001, China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou University, Zhengzhou 450001, China
- Haofeng Wang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China; Industrial Technology Research Institute, Zhengzhou University, Zhengzhou 450001, China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou University, Zhengzhou 450001, China
- Zhigang Shang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China; Industrial Technology Research Institute, Zhengzhou University, Zhengzhou 450001, China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou University, Zhengzhou 450001, China
- Zhongliang Yang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China; Industrial Technology Research Institute, Zhengzhou University, Zhengzhou 450001, China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou University, Zhengzhou 450001, China
- Yong Zhang
- Magnetic Resonance Department, the First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Hong Wan
- School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China; Industrial Technology Research Institute, Zhengzhou University, Zhengzhou 450001, China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou University, Zhengzhou 450001, China
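The feature-selection-plus-SVM pipeline in this abstract maps onto standard scipy/scikit-learn calls; a condensed sketch follows, with the number of retained features and the kernel chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

def kruskal_wallis_select(X, y, top_k=30):
    """Rank features by the Kruskal-Wallis H statistic between the two
    tumor classes (EP vs. PA) and keep the top_k indices."""
    scores = np.array([
        kruskal(X[y == 0, j], X[y == 1, j]).statistic
        for j in range(X.shape[1])
    ])
    return np.argsort(scores)[::-1][:top_k]

def differentiate(X_train, y_train, X_test, y_test, top_k=30):
    """X_*: radiomics feature matrices (e.g. 300 Gabor, texture, and
    wavelet features per MRI slice); y_*: binary EP/PA labels."""
    idx = kruskal_wallis_select(X_train, y_train, top_k)
    clf = SVC(kernel="rbf", probability=True).fit(X_train[:, idx], y_train)
    prob = clf.predict_proba(X_test[:, idx])[:, 1]
    pred = clf.predict(X_test[:, idx])
    return accuracy_score(y_test, pred), roc_auc_score(y_test, prob)
```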
5. Wang S, Wang Q, Shao Y, Qu L, Lian C, Lian J, Shen D. Iterative Label Denoising Network: Segmenting Male Pelvic Organs in CT From 3D Bounding Box Annotations. IEEE Trans Biomed Eng 2020; 67:2710-2720. [PMID: 31995472; DOI: 10.1109/tbme.2020.2969608]
Abstract
Obtaining accurate segmentation of the prostate and nearby organs at risk (e.g., bladder and rectum) in CT images is critical for radiotherapy of prostate cancer. Currently, the leading automatic segmentation algorithms are based on fully convolutional networks (FCNs), which achieve remarkable performance but usually need large-scale datasets with high-quality voxel-wise annotations for fully supervised training. Unfortunately, such annotations are difficult to acquire, which becomes a bottleneck for building accurate segmentation models in real clinical applications. In this paper, we propose a novel weakly supervised segmentation approach that needs only 3D bounding box annotations covering the organs of interest to start training. Obviously, the bounding box includes many non-organ voxels that carry noisy labels that can mislead the segmentation model. To this end, we propose a label denoising module and embed it into the iterative training scheme of the label denoising network (LDnet) for segmentation. The labels of the training voxels are predicted by the tentative LDnet, while the label denoising module identifies voxels with unreliable labels. As only the good training voxels are preserved, the iteratively retrained LDnet can gradually refine its segmentation capability. Our results are remarkable, reaching ~94% (prostate), ~91% (bladder), and ~86% (rectum) of the Dice similarity coefficients (DSCs) obtained with fully supervised learning on high-quality voxel-wise annotations, and also superior to several state-of-the-art approaches. To the best of our knowledge, this is the first work to achieve voxel-wise segmentation in CT images from simple 3D bounding box annotations, which can greatly reduce labeling effort and meet the demands of practical clinical applications.
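The label denoising module is the paper's core contribution and is not reproduced here; the skeleton below only sketches the iterative scheme the abstract describes — train on the currently trusted voxels, flag low-confidence voxels as unreliable, and retrain — with a simple confidence threshold standing in for the actual module. All names and the thresholding rule are assumptions.

```python
import torch
import torch.nn.functional as F

def iterative_label_denoising(model, optimizer, volume, box_labels,
                              rounds=5, keep_thresh=0.9):
    """box_labels: noisy voxel labels (long tensor, shape (D, H, W))
    derived from 3D bounding boxes; model maps the CT volume to
    per-class logits of shape (C, D, H, W)."""
    labels = box_labels.clone()
    mask = torch.ones_like(labels, dtype=torch.bool)  # trusted voxels
    for _ in range(rounds):
        model.train()
        optimizer.zero_grad()
        logits = model(volume)                        # (C, D, H, W)
        loss = F.cross_entropy(
            logits.unsqueeze(0), labels.unsqueeze(0), reduction="none"
        ).squeeze(0)
        (loss * mask.float()).mean().backward()       # train on trusted voxels
        optimizer.step()

        # Denoising stand-in: keep voxels whose predicted probability
        # for their current label is high; refresh the tentative labels.
        with torch.no_grad():
            probs = logits.softmax(dim=0)
            conf = probs.gather(0, labels.unsqueeze(0)).squeeze(0)
            mask = conf > keep_thresh
            labels = probs.argmax(dim=0)
    return model
```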
6. Jani KK, Srivastava S, Srivastava R. Computer aided diagnosis system for ulcer detection in capsule endoscopy using optimized feature set. J Intell Fuzzy Syst 2019. [DOI: 10.3233/jifs-182883]
Affiliation(s)
- Kuntesh K. Jani
- Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, India
- Subodh Srivastava
- Department of Electronics and Communication Engineering, National Institute of Technology, Patna, Bihar, India
- Rajeev Srivastava
- Department of Computer Science and Engineering, Indian Institute of Technology (BHU), Varanasi, Uttar Pradesh, India
7. Diamantis DE, Iakovidis DK, Koulaouzidis A. Look-behind fully convolutional neural network for computer-aided endoscopy. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.12.005]
8. Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497; PMCID: PMC9560030; DOI: 10.1002/mp.13264]
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and the strategies researchers have taken to address them; and (c) identify promising avenues for the future in terms of both applications and technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL 60637, USA
- Kenny H. Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Ronald M. Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
9. Iakovidis DK, Georgakopoulos SV, Vasilakakis M, Koulaouzidis A, Plagianakos VP. Detecting and Locating Gastrointestinal Anomalies Using Deep Learning and Iterative Cluster Unification. IEEE Trans Med Imaging 2018; 37:2196-2210. [PMID: 29994763; DOI: 10.1109/tmi.2018.2837002]
Abstract
This paper proposes a novel methodology for automatic detection and localization of gastrointestinal (GI) anomalies in endoscopic video frame sequences. Training is performed with weakly annotated images, using only image-level semantic labels instead of detailed pixel-level annotations. This makes it a cost-effective approach for the analysis of large video-endoscopy repositories. Other advantages of the proposed methodology include its capability to suggest possible locations of GI anomalies within the video frames and its generality, in the sense that abnormal frame detection is based on automatically derived image features. It is implemented in three phases: 1) it classifies the video frames as abnormal or normal using a weakly supervised convolutional neural network (WCNN) architecture; 2) it detects salient points from deeper WCNN layers using a deep saliency detection algorithm; and 3) it localizes GI anomalies using an iterative cluster unification (ICU) algorithm. ICU is based on a pointwise cross-feature-map (PCFM) descriptor extracted locally at the detected salient points using information derived from the WCNN. Results from extensive experimentation using publicly available collections of gastrointestinal endoscopy video frames are presented. The datasets used include a variety of GI anomalies. Both the anomaly detection and the localization performance achieved, in terms of the area under the receiver operating characteristic curve (AUC), were >80%. The highest AUC for anomaly detection was obtained on conventional gastroscopy images, reaching 96%, and the highest AUC for anomaly localization was obtained on wireless capsule endoscopy images, reaching 88%.
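The ICU and PCFM components are specific to this paper; as a generic illustration of how location cues can be derived from a CNN trained only with image-level labels, the sketch below computes a class activation map from the last convolutional features — a deliberately simpler stand-in for the paper's deep-saliency-plus-clustering pipeline.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx, out_size):
    """features: (C, H, W) last-conv activations from a weakly
    supervised CNN; fc_weight: (num_classes, C) weights of the
    global-average-pooling classifier. Returns a heatmap in [0, 1]
    whose peaks suggest possible anomaly locations in the frame."""
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], features)
    cam = F.relu(cam)
    cam = cam / (cam.max() + 1e-8)           # normalize to [0, 1]
    return F.interpolate(cam[None, None], size=out_size,
                         mode="bilinear", align_corners=False)[0, 0]
```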
10. Weakly supervised multilabel classification for semantic interpretation of endoscopy video frames. Evolving Systems 2018. [DOI: 10.1007/s12530-018-9236-x]
11. A novel summary report of colonoscopy: timeline visualization providing meaningful colonoscopy video information. Int J Colorectal Dis 2018. [PMID: 29520455; DOI: 10.1007/s00384-018-2980-3]
Abstract
PURPOSE: The colonoscopy adenoma detection rate depends largely on physician experience and skill, and overlooked colorectal adenomas can develop into cancer. This study assessed a system that detects polyps and summarizes meaningful information from colonoscopy videos. METHODS: One hundred thirteen consecutive patients had colonoscopy videos prospectively recorded at Seoul National University Hospital. Informative video frames were extracted using a MATLAB support vector machine (SVM) model and classified as bleeding, polypectomy, tool, residue, thin wrinkle, folded wrinkle, or common. Thin wrinkle, folded wrinkle, and common frames were reanalyzed using the SVM for polyp detection. The SVM model was applied hierarchically for effective classification and optimization. RESULTS: The mean classification accuracy for each frame type was over 93%; sensitivity was over 87%. The mean sensitivity for polyp detection was 82.1%, and the positive predictive value (PPV) was 39.3%. Polyps detected by the system were larger (6.3 ± 6.4 vs. 4.9 ± 2.5 mm; P = 0.003) and more often pedunculated (Yamada type III, 10.2 vs. 0%; P < 0.001; Yamada type IV, 2.8 vs. 0%; P < 0.001) than polyps missed by the system. There were no statistically significant differences in polyp distribution or histology between the groups. Informative frames and suspected polyps were presented on a timeline. This summary was evaluated using the System Usability Scale questionnaire; 89.3% of participants expressed positive opinions. CONCLUSIONS: We developed and verified a system to extract meaningful information from colonoscopy videos. Although further improvement and validation of the system are needed, the proposed system is useful for physicians and patients.
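The hierarchical SVM usage this abstract describes can be pictured as a two-stage scikit-learn pipeline: one SVM routes frames into the seven frame types, and a second SVM re-examines the three wrinkle/common types for polyps. Feature extraction is left abstract, the class indices are assumed to be 0-6 in the order listed, and all names are illustrative.

```python
from sklearn.svm import SVC

FRAME_TYPES = ["bleeding", "polypectomy", "tool", "residue",
               "thin_wrinkle", "folded_wrinkle", "common"]
REANALYZED = {"thin_wrinkle", "folded_wrinkle", "common"}

def classify_video(frame_features, type_svm: SVC, polyp_svm: SVC):
    """First-stage SVM assigns each frame one of seven types; frames of
    the three reanalyzed types go through a second-stage polyp detector.
    Returns (frame_type, polyp_suspected) per frame for the timeline."""
    results = []
    for feat in frame_features:
        frame_type = FRAME_TYPES[int(type_svm.predict([feat])[0])]
        suspected = (frame_type in REANALYZED
                     and bool(polyp_svm.predict([feat])[0]))
        results.append((frame_type, suspected))
    return results
```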
12. Weakly-Supervised Lesion Detection in Video Capsule Endoscopy Based on a Bag-of-Colour Features Model. Computer-Assisted and Robotic Endoscopy 2017. [DOI: 10.1007/978-3-319-54057-3_9]