1. Chung SL, Cheng CT, Liao CH, Chung IF. Patch-based feature mapping with generative adversarial networks for auxiliary hip fracture detection. Comput Biol Med 2025;186:109627. [PMID: 39793347] [DOI: 10.1016/j.compbiomed.2024.109627]
Abstract
BACKGROUND: Hip fractures are a significant public health issue, particularly among the elderly. Pelvic radiographs (PXRs) play a crucial role in diagnosing and evaluating hip fractures. Previous research has demonstrated promising performance of classification models for hip fracture detection; however, these models sometimes focus on non-fracture regions of the images, reducing their explainability. This study applies weakly supervised learning techniques to address this issue and improve the model's focus on the fracture region. Additionally, we introduce a method to quantitatively evaluate the model's focus on the region of interest (ROI). METHODS: We propose a new auxiliary module, the patch-auxiliary generative adversarial network (PAGAN), for weakly supervised learning tasks. PAGAN can be integrated with any state-of-the-art (SOTA) classification model, such as EfficientNetB0, ResNet50, or DenseNet121, to enhance hip fracture detection. This training strategy incorporates global information (the entire PXR image) and local information (the hip-region patch) for more effective learning. Furthermore, we employ Grad-CAM to generate attention heatmaps highlighting the focus areas of the classification model. The intersection over union (IoU) and Dice coefficient are then computed between the attention heatmap and the fracture area, enabling a quantitative assessment of the model's explainability. RESULTS AND CONCLUSIONS: Incorporating PAGAN improved the performance of the classification models. The accuracy of EfficientNetB0 increased from 93.61% to 95.97%, ResNet50 improved from 90.66% to 94.89%, and DenseNet121 increased from 93.51% to 94.49%. Regarding explainability, integrating PAGAN into the classification models led to more pronounced attention to the ROI: the average IoU improved from 0.32 to 0.54 for EfficientNetB0, from 0.28 to 0.40 for ResNet50, and from 0.37 to 0.51 for DenseNet121. These results indicate that PAGAN improves hip fracture classification performance and substantially enhances the model's focus on the fracture region, thereby increasing its explainability.
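As a concrete illustration of the explainability metrics described above, here is a minimal sketch of how IoU and Dice could be computed between a thresholded Grad-CAM heatmap and a binary fracture mask. The 0.5 threshold, function name, and array shapes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def overlap_metrics(heatmap: np.ndarray, fracture_mask: np.ndarray,
                    threshold: float = 0.5) -> tuple[float, float]:
    """Compute IoU and Dice between an attention heatmap and a binary ROI mask.

    heatmap: float array in [0, 1], same shape as fracture_mask.
    fracture_mask: binary array marking the annotated fracture region.
    threshold: cut-off used to binarize the heatmap (assumed value).
    """
    attention = heatmap >= threshold            # binarize the model's attention
    roi = fracture_mask.astype(bool)

    intersection = np.logical_and(attention, roi).sum()
    union = np.logical_or(attention, roi).sum()
    total = attention.sum() + roi.sum()

    iou = intersection / union if union else 0.0
    dice = 2 * intersection / total if total else 0.0
    return iou, dice

# Example: a heatmap perfectly aligned with the mask gives IoU = Dice = 1.0.
mask = np.zeros((224, 224)); mask[80:140, 80:140] = 1
print(overlap_metrics(mask.copy(), mask))
```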
Affiliation(s)
- Shang-Lin Chung: Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chi-Tung Cheng: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao: Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
- I-Fang Chung: Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
2. Fujii Y, Uchida D, Sato R, Obata T, Akihiro M, Miyamoto K, Morimoto K, Terasawa H, Yamazaki T, Matsumoto K, Horiguchi S, Tsutsumi K, Kato H, Inoue H, Cho T, Tanimoto T, Ohto A, Kawahara Y, Otsuka M. Effectiveness of data-augmentation on deep learning in evaluating rapid on-site cytopathology at endoscopic ultrasound-guided fine needle aspiration. Sci Rep 2024;14:22441. [PMID: 39341885] [PMCID: PMC11439075] [DOI: 10.1038/s41598-024-72312-3]
Abstract
Rapid on-site cytopathology evaluation (ROSE) is considered an effective method of increasing the diagnostic ability of endoscopic ultrasound-guided fine needle aspiration (EUS-FNA); however, ROSE is unavailable in most institutes worldwide due to the shortage of cytopathologists. To overcome this situation, we created an artificial intelligence (AI)-based system (the ROSE-AI system), trained on augmented data, to evaluate slide images acquired by EUS-FNA. This study aimed to clarify the effects of data augmentation on establishing an effective ROSE-AI system by comparing the efficacy of various augmentation techniques. The ROSE-AI system was trained on data expanded by several augmentation techniques, including geometric transformation, color-space transformation, and kernel filtering. Using five-fold cross-validation, we compared how each augmentation technique affected the diagnostic ability of the ROSE-AI system. We collected 4059 divided EUS-FNA slide images from 36 patients with pancreatic cancer and nine patients with non-pancreatic cancer. Without data augmentation, the ROSE-AI system had a sensitivity, specificity, and accuracy of 87.5%, 79.7%, and 83.7%, respectively. While some augmentation techniques decreased diagnostic ability, the system trained only with data augmented by geometric transformation achieved the highest diagnostic accuracy (88.2%). We successfully developed a prototype ROSE-AI system with high diagnostic ability. Each augmentation technique may differ in its compatibility with AI-mediated diagnostics; geometric transformation was the most effective for the ROSE-AI system.
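The three augmentation families compared in this study correspond to standard image transforms. The following is a hedged sketch using torchvision-style transforms; the specific operations and parameters are illustrative assumptions, not the authors' published pipeline:

```python
import torchvision.transforms as T

# Geometric transformation: flips and rotations vary layout while preserving
# cytologic content (reported in the study as the most effective family).
geometric = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomRotation(degrees=90),
])

# Color-space transformation: perturb staining appearance.
color_space = T.ColorJitter(brightness=0.2, contrast=0.2,
                            saturation=0.2, hue=0.05)

# Kernel filtering: convolutional filters such as Gaussian blur.
kernel_filter = T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))
```

In a five-fold cross-validation setting like the one described, each family would be applied to the training folds only, leaving validation images untouched, so that the comparison isolates the effect of the augmentation itself.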
Affiliation(s)
- Yuki Fujii: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Daisuke Uchida: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Ryosuke Sato: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Taisuke Obata: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Matsumi Akihiro: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Kazuya Miyamoto: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Kosaku Morimoto: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Hiroyuki Terasawa: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Tatsuhiro Yamazaki: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Kazuyuki Matsumoto: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Shigeru Horiguchi: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Koichiro Tsutsumi: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Hironari Kato: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
- Hirofumi Inoue: Department of Pathology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, Okayama, Japan
- Ten Cho: Business Strategy Division, Ryobi Systems Co., Ltd., Okayama, Japan
- Akimitsu Ohto: Business Strategy Division, Ryobi Systems Co., Ltd., Okayama, Japan
- Yoshiro Kawahara: Department of Practical Gastrointestinal Endoscopy, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, Okayama, Japan
- Motoyuki Otsuka: Department of Gastroenterology and Hepatology, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Science, 2-5-1, Shikata-Cho, Kita-Ku, Okayama, Okayama, Japan
3. Moen S, Vuik FER, Kuipers EJ, Spaander MCW. Artificial intelligence in colon capsule endoscopy: a systematic review. Diagnostics (Basel) 2022;12:1994. [PMID: 36010345] [PMCID: PMC9407289] [DOI: 10.3390/diagnostics12081994]
Abstract
Background and aims: The applicability of colon capsule endoscopy in daily practice is limited by the labor-intensive reviewing time and the risk of inter-observer variability. Automated reviewing of colon capsule endoscopy images using artificial intelligence could save time while providing an objective and reproducible outcome. This systematic review aims to provide an overview of the available literature on artificial intelligence for reviewing colonic mucosa by colon capsule endoscopy and to assess the action points necessary for its use in clinical practice. Methods: A systematic search of literature published up to January 2022 was conducted using Embase, Web of Science, OVID MEDLINE and Cochrane CENTRAL. Studies reporting on the use of artificial intelligence to review second-generation colon capsule endoscopy colonic images were included. Results: 1017 studies were evaluated for eligibility, of which nine were included. Two studies reported on automated bowel-cleansing assessment, five on automated polyp or colorectal neoplasia detection, and two on other applications. Overall, the sensitivities of the proposed artificial intelligence models were 86.5–95.5% for bowel cleansing and 47.4–98.1% for the detection of polyps and colorectal neoplasia. Two studies performed per-lesion analysis in addition to per-frame analysis, which improved the sensitivity of polyp or colorectal neoplasia detection to 81.3–98.1%. The highest sensitivity for polyp detection, 98.1%, was achieved with a convolutional neural network. Conclusion: The use of artificial intelligence for reviewing second-generation colon capsule endoscopy images is promising. The highest sensitivity of 98.1% for polyp detection was achieved by deep learning with a convolutional neural network. Convolutional neural network algorithms should be optimized and tested with more data, possibly requiring the setup of a large international colon capsule endoscopy database. Finally, the accuracy of the optimized convolutional neural network models needs to be confirmed in a prospective setting.
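The gap between per-frame and per-lesion sensitivity follows from how frame-level predictions are aggregated: a lesion typically counts as detected if any of its frames is flagged. Below is a minimal sketch of that aggregation; the grouping rule and data layout are assumptions for illustration, not taken from the reviewed studies:

```python
from collections import defaultdict

def per_frame_sensitivity(frames):
    """frames: list of (lesion_id, predicted_positive) for frames containing a lesion."""
    hits = sum(1 for _, pred in frames if pred)
    return hits / len(frames)

def per_lesion_sensitivity(frames):
    """A lesion counts as detected if at least one of its frames is flagged."""
    by_lesion = defaultdict(list)
    for lesion_id, pred in frames:
        by_lesion[lesion_id].append(pred)
    detected = sum(1 for preds in by_lesion.values() if any(preds))
    return detected / len(by_lesion)

# Example: one polyp visible in 4 frames but flagged in only 1 of them.
frames = [("polyp_1", True), ("polyp_1", False),
          ("polyp_1", False), ("polyp_1", False)]
print(per_frame_sensitivity(frames))   # 0.25
print(per_lesion_sensitivity(frames))  # 1.0
```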