1. Murphy PM. Towards an EKG for SBO: A Neural Network for Detection and Characterization of Bowel Obstruction on CT. J Imaging Inform Med 2024. [PMID: 38388866] [DOI: 10.1007/s10278-024-01023-y]
Abstract
A neural network was developed to detect and characterize bowel obstruction, a common cause of acute abdominal pain. In this retrospective study, 202 CT scans of 165 patients with bowel obstruction from March to June 2022 were included and partitioned into training and test data sets. A multi-channel neural network was trained to segment the gastrointestinal tract, and to predict the diameter and the longitudinal position ("longitude") along the gastrointestinal tract using a novel embedding. Its performance was compared to manual segmentations using the Dice score, and to manual measurements of the diameter and longitude using intraclass correlation coefficients (ICC). ROC curves as well as sensitivity and specificity were calculated for diameters above a clinical threshold for obstruction, and for longitudes corresponding to small bowel. In the test data set, Dice score for segmentation of the gastrointestinal tract was 78 ± 8%. ICC between measured and predicted diameters was 0.72, indicating moderate agreement. ICC between measured and predicted longitude was 0.85, indicating good agreement. AUROC was 0.90 for detection of dilated bowel, and was 0.95 and 0.90 for differentiation of the proximal and distal gastrointestinal tract respectively. Overall sensitivity and specificity for dilated small bowel were 0.83 and 0.90. Since obstruction is diagnosed based on the diameter and longitude of the bowel, this neural network and embedding may enable detection and characterization of this important disease on CT.
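The Dice score used above to compare predicted and manual segmentations is a standard overlap metric. A minimal illustrative sketch in Python (not code from the paper):

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks, given as flat 0/1 lists.

    Dice = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0

# Example: half of the predicted voxels overlap the ground truth.
print(dice_score([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```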
Affiliation(s)
- Paul M Murphy
- University of California-San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA.
- UCSD Radiology, 200 W Arbor Dr, San Diego, CA 92103, USA.
2. Murphy PM. Visual Image Annotation for Bowel Obstruction: Repeatability and Agreement with Manual Annotation and Neural Networks. J Digit Imaging 2023; 36:2179-2193. [PMID: 37278918] [PMCID: PMC10502000] [DOI: 10.1007/s10278-023-00825-w]
Abstract
Bowel obstruction is a common cause of acute abdominal pain. The development of algorithms for automated detection and characterization of bowel obstruction on CT has been limited by the effort required for manual annotation. Visual image annotation with an eye-tracking device may mitigate that limitation. The purpose of this study was to assess the agreement between visual and manual annotations for bowel segmentation and diameter measurement, and to assess agreement with convolutional neural networks (CNNs) trained using those data. Sixty CT scans of 50 patients with bowel obstruction from March to June 2022 were retrospectively included and partitioned into training and test data sets. An eye-tracking device was used to record 3-dimensional coordinates within the scans while a radiologist cast their gaze at the centerline of the bowel and adjusted the size of a superimposed ROI to approximate the diameter of the bowel. For each scan, 59.4 ± 15.1 segments, 847.9 ± 228.1 gaze locations, and 5.8 ± 1.2 m of bowel were recorded. 2D and 3D CNNs were trained on these data to predict bowel segmentation and diameter maps from the CT scans. For comparisons between two repetitions of visual annotation, CNN predictions, and manual annotations, Dice scores for bowel segmentation ranged from 0.69 ± 0.17 to 0.81 ± 0.04, and intraclass correlations [95% CI] for diameter measurement ranged from 0.672 [0.490-0.782] to 0.940 [0.933-0.947]. Thus, visual image annotation is a promising technique for training CNNs to perform bowel segmentation and diameter measurement in CT scans of patients with bowel obstruction.
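The gaze-based annotation described above records centerline points together with approximate diameters. One plausible way to rasterize such samples into a 3D segmentation mask is to paint a sphere of the recorded diameter at each gaze location; the sketch below is an illustrative assumption about that workflow, not code from the paper:

```python
def paint_spheres(shape, samples):
    """Rasterize gaze samples into a binary volume.

    shape:   (nz, ny, nx) grid size
    samples: list of ((z, y, x), diameter) pairs; each sample paints a
             sphere of the given diameter centered on the gaze point.
    """
    nz, ny, nx = shape
    vol = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    for (cz, cy, cx), d in samples:
        r2 = (d / 2) ** 2
        for z in range(nz):
            for y in range(ny):
                for x in range(nx):
                    if (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= r2:
                        vol[z][y][x] = 1
    return vol

# A diameter-2 sphere at the center of a 3x3x3 grid marks 7 voxels
# (the center plus its 6 face neighbors).
vol = paint_spheres((3, 3, 3), [((1, 1, 1), 2)])
print(sum(v for plane in vol for row in plane for v in row))  # → 7
```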
Affiliation(s)
- Paul M Murphy
- University of California-San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA.
- UCSD Radiology, 200 W Arbor Dr, San Diego, CA 92103, USA.
3. An Extra Set of Intelligent Eyes: Application of Artificial Intelligence in Imaging of Abdominopelvic Pathologies in Emergency Radiology. Diagnostics (Basel) 2022; 12:1351. [PMID: 35741161] [PMCID: PMC9221728] [DOI: 10.3390/diagnostics12061351]
Abstract
Imaging in the emergent setting carries high stakes. With increased demand for dedicated on-site service, emergency radiologists face increasingly large image volumes that require rapid turnaround times. However, novel artificial intelligence (AI) algorithms may assist trauma and emergency radiologists with efficient and accurate medical image analysis, providing an opportunity to augment human decision making, including outcome prediction and treatment planning. While traditional radiology practice involves visual assessment of medical images for detection and characterization of pathologies, AI algorithms can automatically identify subtle disease states and provide quantitative characterization of disease severity based on morphologic image details, such as geometry and fluid flow. Taken together, the benefits provided by implementing AI in radiology have the potential to improve workflow efficiency, engender faster turnaround results for complex cases, and reduce heavy workloads. Although analysis of AI applications within abdominopelvic imaging has primarily focused on oncologic detection, localization, and treatment response, several promising algorithms have been developed for use in the emergency setting. This article aims to establish a general understanding of the AI algorithms used in emergent image-based tasks and to discuss the challenges associated with the implementation of AI into the clinical workflow.
4. Yi PH, Kim TK, Lin CT. Comparison of radiologist versus natural language processing-based image annotations for deep learning system for tuberculosis screening on chest radiographs. Clin Imaging 2022; 87:34-37. [PMID: 35483162] [DOI: 10.1016/j.clinimag.2022.04.009]
Abstract
Although natural language processing (NLP) can rapidly extract disease labels from radiology reports to create datasets for deep learning models, this may be less accurate than having radiologists manually review the images. In this study, we compared agreement between NLP-derived and radiologist-curated labels for possible tuberculosis (TB) on chest radiographs (CXR) and evaluated the performance of deep convolutional neural networks (DCNNs) trained to identify TB using each of the two sets of labels. We collected 10,951 CXRs from the NIH ChestX-ray14 dataset and labeled them as positive or negative for possible TB based on two methods: 1) NLP-derived disease labels and 2) radiologist review of images. These images were used to train DCNNs on varying dataset sizes for possible TB, which were then tested on an external dataset of 800 CXRs. Area under the ROC curve (AUC) was used to evaluate the DCNNs. There was poor agreement between NLP-derived and radiologist-curated labels for potential TB (kappa coefficient 0.34). DCNNs trained using radiologist-curated labels outperformed those trained using the NLP-derived labels, regardless of the number of images used for training. The best-performing DCNN, trained on all 10,951 images with radiologist-curated labels, had an AUC of 0.88. DCNNs trained on CXRs labeled by a radiologist consistently outperformed those trained on the same CXRs labeled by NLP, highlighting the benefit of radiologists determining the ground truth for machine learning dataset curation.
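The agreement statistic quoted above (kappa coefficient 0.34) is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch for binary labels (illustrative, not the study's code):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length binary (0/1) label lists."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa, pb = sum(a) / n, sum(b) / n                # positive rate per rater
    p_exp = pa * pb + (1 - pa) * (1 - pb)          # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Raters agree on 3 of 4 cases.
print(round(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]), 2))  # → 0.5
```

Kappa of 1.0 is perfect agreement, 0 is chance-level; values around 0.3 (as above) are conventionally read as fair-to-poor agreement.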
Affiliation(s)
- Paul H Yi
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA; Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA.
- Tae Kyung Kim
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Cheng Ting Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
5. Vanderbecq Q, Ardon R, De Reviers A, Ruppli C, Dallongeville A, Boulay-Coletta I, D'Assignies G, Zins M. Adhesion-related small bowel obstruction: deep learning for automatic transition-zone detection by CT. Insights Imaging 2022; 13:13. [PMID: 35072813] [PMCID: PMC8787000] [DOI: 10.1186/s13244-021-01150-y]
Abstract
Background To train a machine-learning model to locate the transition zone (TZ) of adhesion-related small bowel obstruction (SBO) on CT scans.
Materials and methods We used 562 CTs performed in 2005–2018 in 404 patients with adhesion-related SBO. Annotation of the TZs was performed by experienced radiologists and trained residents using bounding boxes. Preprocessing involved using a pretrained model to extract the abdominopelvic region. We modeled TZ localization as a binary classification problem by splitting the abdominopelvic region into 125 patches. We then trained a neural network model to classify each patch as containing or not containing a TZ, coupled with a trained probabilistic estimate of the presence of a TZ in each patch. The models were first evaluated by computing the area under the receiver operating characteristic curve (AUROC). Then, to assess the clinical benefit, we measured the proportion of total abdominopelvic volume classified as containing a TZ at several different false-negative rates.
Results The probability of containing a TZ was highest for the hypogastric region (56.9%). The coupled classification network and probability mapping produced an AUROC of 0.93. When 15% of the volume was classified as containing TZs, the probability that a highlighted patch contained a TZ was 92%.
Conclusion Modeling TZ localization by coupling convolutional neural network classification with probabilistic localization estimation points the way toward automatic TZ detection, a complex radiological task with major clinical impact.
Supplementary Information The online version contains supplementary material available at 10.1186/s13244-021-01150-y.
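The 125-patch split described in the methods corresponds to a 5 x 5 x 5 grid over the abdominopelvic region. A sketch of how such patch bounding boxes might be generated (an illustrative assumption, not the authors' code):

```python
def patch_boxes(shape, grid=(5, 5, 5)):
    """Split a (z, y, x) volume extent into a grid of patch bounding boxes.

    Returns a list of ((z0, z1), (y0, y1), (x0, x1)) half-open ranges,
    one per patch, covering the volume without gaps or overlap.
    """
    def edges(size, k):
        return [round(i * size / k) for i in range(k + 1)]

    ez, ey, ex = (edges(s, g) for s, g in zip(shape, grid))
    gz, gy, gx = grid
    return [((ez[i], ez[i + 1]), (ey[j], ey[j + 1]), (ex[k], ex[k + 1]))
            for i in range(gz) for j in range(gy) for k in range(gx)]

boxes = patch_boxes((100, 100, 100))
print(len(boxes))  # → 125
```

Each patch can then be scored independently by the classifier, and the per-patch probabilities reassembled into a coarse localization map.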
6. Litvin A, Korenev S, Rumovskaya S, Sartelli M, Baiocchi G, Biffl WL, Coccolini F, Di Saverio S, Kelly MD, Kluger Y, Leppäniemi A, Sugrue M, Catena F. WSES project on decision support systems based on artificial neural networks in emergency surgery. World J Emerg Surg 2021; 16:50. [PMID: 34565420] [PMCID: PMC8474926] [DOI: 10.1186/s13017-021-00394-9]
Abstract
This article is a scoping review of the literature on decision support systems based on artificial neural networks in emergency surgery. The authors summarize current evidence on the effectiveness of artificial neural networks for predicting, diagnosing, and treating abdominal emergencies: acute appendicitis, acute pancreatitis, acute cholecystitis, perforated gastric or duodenal ulcer, acute intestinal obstruction, and strangulated hernia. Currently available intelligent systems allow a surgeon in an emergency setting not only to check his or her own diagnostic and prognostic assumptions but also to draw on artificial intelligence in complex urgent clinical cases. The authors summarize the main barriers to implementing artificial neural networks in surgery and medicine in general: lack of transparency in the decision-making process; insufficient quality of training data; lack of qualified personnel; high project costs; and the complexity of securely storing medical data. The development and implementation of decision support systems based on artificial neural networks is a promising direction for improving the forecasting, diagnosis, and treatment of emergency surgical diseases and their complications.
Affiliation(s)
- Andrey Litvin
- Department of Surgical Disciplines, Immanuel Kant Baltic Federal University, Kaliningrad, Russia.
- Sergey Korenev
- Department of Surgical Disciplines, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Sophiya Rumovskaya
- Kaliningrad Branch of Federal Research Center "Computer Science and Control" of Russian Academy of Sciences, Kaliningrad, Russia
- Gianluca Baiocchi
- Surgical Clinic, Department of Experimental and Clinical Sciences, University of Brescia, Brescia, Italy
- Walter L Biffl
- Division of Trauma and Acute Care Surgery, Scripps Memorial Hospital La Jolla, La Jolla, CA, USA
- Federico Coccolini
- General, Emergency and Trauma Surgery Department, Pisa University Hospital, Pisa, Italy
- Salomone Di Saverio
- Department of Surgery, Cambridge University Hospital, NHS Foundation Trust, Cambridge, UK
- Yoram Kluger
- Department of General Surgery, Rambam Healthcare Campus, Haifa, Israel
- Ari Leppäniemi
- Department of Gastrointestinal Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Michael Sugrue
- Donegal Clinical Research Academy, Letterkenny University Hospital, Donegal, Ireland
- Fausto Catena
- Department of Emergency and Trauma Surgery of the University Hospital of Parma, Parma, Italy
7. Yang Y, Li YX, Yao RQ, Du XH, Ren C. Artificial intelligence in small intestinal diseases: Application and prospects. World J Gastroenterol 2021; 27:3734-3747. [PMID: 34321840] [PMCID: PMC8291013] [DOI: 10.3748/wjg.v27.i25.3734]
Abstract
The small intestine is located in the middle of the gastrointestinal tract, so small intestinal diseases are more difficult to diagnose than other gastrointestinal diseases. However, with its efficient learning capacity and computational power, artificial intelligence is now widely applied to small intestinal diseases and plays an important role in auxiliary diagnosis and prognosis prediction based on capsule endoscopy and other examination methods, improving the accuracy of diagnosis and prediction and reducing the workload of doctors. In this review, a comprehensive search was performed on articles published up to October 2020 in PubMed and other databases. The application status of artificial intelligence in small intestinal diseases is systematically introduced, and the challenges and prospects in this field are analyzed.
Affiliation(s)
- Yu Yang
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Yu-Xuan Li
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Ren-Qi Yao
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People's Liberation Army General Hospital, Beijing 100048, China
- Department of Burn Surgery, Changhai Hospital, Naval Medical University, Shanghai 200433, China
- Xiao-Hui Du
- Department of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Chao Ren
- Trauma Research Center, The Fourth Medical Center and Medical Innovation Research Division of the Chinese People's Liberation Army General Hospital, Beijing 100048, China
8. Yang H, Hu B. Application of artificial intelligence to endoscopy on common gastrointestinal benign diseases. Artif Intell Gastrointest Endosc 2021; 2:25-35. [DOI: 10.37126/aige.v2.i2.25]
Abstract
Artificial intelligence (AI) has become involved in nearly every aspect of healthcare, beginning at the preclinical stage. In the digestive system, AI has been trained to assist auxiliary examinations, including histopathology, endoscopy, ultrasonography, computed tomography, and magnetic resonance imaging, in detection, diagnosis, classification, differentiation, prognosis, and quality control. In the field of endoscopy, AI applications such as automatic detection, diagnosis, classification, and assessment of invasion depth in early gastrointestinal (GI) cancers have received wide attention, but there is a paucity of studies of AI applied to common GI benign diseases on endoscopy. In this review, we provide an overview of AI applications to endoscopy for common GI benign diseases of the esophagus, stomach, intestine, and colon. The evidence indicates that AI will gradually become an indispensable part of routine endoscopic detection and diagnosis of common GI benign diseases as clinical data, algorithms, and related work are continually refined.
Affiliation(s)
- Hang Yang
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Bing Hu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
9. Kim DH, Wit H, Thurston M, Long M, Maskell GF, Strugnell MJ, Shetty D, Smith IM, Hollings NP. An artificial intelligence deep learning model for identification of small bowel obstruction on plain abdominal radiographs. Br J Radiol 2021; 94:20201407. [PMID: 33904763] [PMCID: PMC8173678] [DOI: 10.1259/bjr.20201407]
Abstract
Objectives: Small bowel obstruction is a common surgical emergency which can lead to bowel necrosis, perforation and death. Plain abdominal X-rays are frequently used as a first-line test, but the availability of immediate expert radiological review is variable. The aim was to investigate the feasibility of using a deep learning model for automated identification of small bowel obstruction.
Methods: A total of 990 plain abdominal radiographs were collected, 445 with normal findings and 445 demonstrating small bowel obstruction. The images were labelled using the radiology reports, subsequent CT scans, surgical operation notes and enhanced radiological review. The data were used to develop a predictive model comprising an ensemble of five convolutional neural networks trained using transfer learning.
Results: The performance of the model was excellent, with an area under the receiver operating characteristic curve (AUC) of 0.961, corresponding to sensitivity and specificity of 91% and 93%, respectively.
Conclusion: Deep learning can be used to identify small bowel obstruction on plain radiographs with a high degree of accuracy. A system such as this could be used to alert clinicians to the presence of urgent findings, with the potential for expedited clinical review and improved patient outcomes.
Advances in knowledge: This paper describes a novel labelling method using composite clinical follow-up and demonstrates that ensemble models can be used effectively in medical imaging tasks. It also provides evidence that deep learning methods can be used to identify small bowel obstruction with high accuracy.
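The model above is an ensemble of five CNNs; a common way to combine such an ensemble is soft voting, i.e. averaging member probabilities before thresholding, from which sensitivity and specificity follow. The abstract does not specify the combination rule, so this is an illustrative sketch only:

```python
def soft_vote(member_probs):
    """Average per-case probabilities across ensemble members.

    member_probs: one list of probabilities per model, all the same length.
    """
    return [sum(ps) / len(ps) for ps in zip(*member_probs)]

def sensitivity_specificity(probs, labels, threshold=0.5):
    """Sensitivity and specificity at a fixed decision threshold."""
    tp = sum(p >= threshold and y == 1 for p, y in zip(probs, labels))
    fn = sum(p < threshold and y == 1 for p, y in zip(probs, labels))
    tn = sum(p < threshold and y == 0 for p, y in zip(probs, labels))
    fp = sum(p >= threshold and y == 0 for p, y in zip(probs, labels))
    return tp / (tp + fn), tn / (tn + fp)

# Three hypothetical ensemble members scoring the same three cases.
probs = soft_vote([[0.9, 0.2, 0.6], [0.7, 0.4, 0.2], [0.8, 0.3, 0.4]])
print([round(p, 2) for p in probs])  # → [0.8, 0.3, 0.4]
```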
Affiliation(s)
- D H Kim
- The Department of Clinical Imaging, The Royal Cornwall Hospitals NHS Trust, Truro, UK
- H Wit
- The Medical Imaging Department, University Hospitals Plymouth NHS Trust, Plymouth, UK
- M Thurston
- The Department of Clinical Imaging, The Royal Cornwall Hospitals NHS Trust, Truro, UK
- M Long
- The Department of Clinical Imaging, The Royal Cornwall Hospitals NHS Trust, Truro, UK; The Medical Imaging Department, University Hospitals Plymouth NHS Trust, Plymouth, UK
- G F Maskell
- The Department of Clinical Imaging, The Royal Cornwall Hospitals NHS Trust, Truro, UK
- M J Strugnell
- The Department of Clinical Imaging, The Royal Cornwall Hospitals NHS Trust, Truro, UK
- D Shetty
- The Department of Clinical Imaging, The Royal Cornwall Hospitals NHS Trust, Truro, UK
- I M Smith
- The Department of General Surgery, The Royal Cornwall Hospitals NHS Trust, Truro, UK
- N P Hollings
- The Department of Clinical Imaging, The Royal Cornwall Hospitals NHS Trust, Truro, UK
10. An Accuracy Improvement Method Based on Multi-Source Information Fusion and Deep Learning for TSSC and Water Content Nondestructive Detection in "Luogang" Orange. Electronics 2021; 10:80. [DOI: 10.3390/electronics10010080]
Abstract
The objective of this study was to find an efficient method for measuring the total soluble solid content (TSSC) and water content of "Luogang" orange. Four quick, accurate, and nondestructive detection tools (VIS/NIR spectroscopy, NIR spectroscopy, machine vision, and electronic nose) and several data processing and modeling methods (Savitzky–Golay (SG) filtering, genetic algorithm (GA), multi-source information fusion (MIF), a convolutional neural network (CNN) as the deep learning method, and partial least squares regression (PLSR)) were compared and investigated. The results showed that the optimal TSSC detection method was based on fusion of VIS/NIR and machine vision data, processed and modeled with SG + GA + CNN + PLSR; the R2 and RMSE of the TSSC detection results were 0.8580 and 0.4276, respectively. The optimal water content detection method was based on VIS/NIR data processed and modeled with SG + GA + CNN + PLSR; the R2 and RMSE of the water content detection results were 0.7013 and 0.0063, respectively. This optimized method largely improved the internal quality detection accuracy of "Luogang" orange compared to data from a single detection tool with a traditional data processing method, and provides a reference for improving the accuracy of internal quality detection of other fruits.
11. Kim M, Kim JS, Lee C, Kang BK. Detection of pneumoperitoneum in the abdominal radiograph images using artificial neural networks. Eur J Radiol Open 2020; 8:100316. [PMID: 33385018] [PMCID: PMC7770533] [DOI: 10.1016/j.ejro.2020.100316]
Abstract
Background/purpose The purpose of this study was to assess the diagnostic performance of artificial neural networks (ANNs) in detecting pneumoperitoneum on abdominal radiographs.
Materials and methods This approach applied a novel deep-learning algorithm, a simple ANN training process without a convolutional neural network (CNN), and also used a widely utilized deep-learning method, ResNet-50, for comparison.
Results By applying ResNet-50 to abdominal radiographs, we obtained an area under the ROC curve (AUC) of 0.916 and an accuracy of 85.0%, with a sensitivity of 85.7% and a negative predictive value (NPV) of 91.7%. Compared with commonly applied deep-learning methods such as CNNs, our approach used extremely small ANN structures and a simple training process, and its diagnostic performance, with a sensitivity of 88.6% and an NPV of 91.3%, compared decently with that of ResNet-50.
Conclusions ANN-based computer-assisted diagnosis can accurately detect pneumoperitoneum on abdominal radiographs, reduce delays in diagnosing urgent diseases such as pneumoperitoneum, and increase the effectiveness of clinical practice and patient care.
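The accuracy, sensitivity, and negative predictive value reported above all derive from the same binary confusion matrix. A small sketch of those definitions (illustrative, not the study's code):

```python
def confusion_metrics(pred, truth):
    """Accuracy, sensitivity, and negative predictive value (NPV)
    from binary predictions and ground-truth labels (0/1 lists)."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    accuracy = (tp + tn) / len(pred)
    sensitivity = tp / (tp + fn)   # true positives / all positives
    npv = tn / (tn + fn)           # true negatives / all negative calls
    return accuracy, sensitivity, npv

# Four cases with one false negative:
# accuracy 0.75, sensitivity 0.5, NPV ~ 0.67
print(confusion_metrics([1, 0, 0, 0], [1, 1, 0, 0]))
```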
Affiliation(s)
- Mimi Kim
- Department of Radiology, Hanyang University Seoul Hospital, Seoul, Republic of Korea
- Jong Soo Kim
- Institute for Software Convergence, Hanyang University, Seoul, Republic of Korea
- Changhwan Lee
- Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea
- Bo-Kyeong Kang
- Department of Radiology, Hanyang University Seoul Hospital, Seoul, Republic of Korea
12. Deep learning algorithms for detecting and visualising intussusception on plain abdominal radiography in children: a retrospective multicenter study. Sci Rep 2020; 10:17582. [PMID: 33067505] [PMCID: PMC7567788] [DOI: 10.1038/s41598-020-74653-1]
Abstract
This study aimed to verify a deep convolutional neural network (CNN) algorithm for detecting intussusception in children, using a human-annotated data set of plain abdominal X-rays from affected children. From January 2005 to August 2019, 1449 images were collected from plain abdominal X-rays of patients ≤ 6 years old who were diagnosed with intussusception, while 9935 images were collected from patients without intussusception, at three tertiary academic hospitals (A, B, and C data sets). Single Shot MultiBox Detector and ResNet were used for abdominal detection and intussusception classification, respectively. The diagnostic performance of the algorithm was analysed using internal and external validation tests. The internal test values after training with two hospital data sets were 0.946 to 0.971 for the area under the receiver operating characteristic curve (AUC), 0.927 to 0.952 for the highest accuracy, and 0.764 to 0.848 for the highest Youden index. The values from the external test using the remaining data set were all lower (P < 0.001). The mean values of the internal test with all data sets were 0.935 and 0.743 for the AUC and Youden index, respectively. Detection of intussusception by deep CNN on plain abdominal X-rays could aid in screening for intussusception in children.
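The Youden index reported above is J = sensitivity + specificity − 1, often used to pick the best operating threshold on an ROC curve. A minimal sketch (illustrative, not the study's code):

```python
def best_youden(probs, labels, thresholds):
    """Return (best_threshold, best_J), where J = sensitivity + specificity - 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = (None, -1.0)
    for t in thresholds:
        tp = sum(p >= t and y == 1 for p, y in zip(probs, labels))
        tn = sum(p < t and y == 0 for p, y in zip(probs, labels))
        j = tp / pos + tn / neg - 1
        if j > best[1]:
            best = (t, j)
    return best

# Toy scores and labels; the lowest threshold maximizes J here (J = 2/3).
probs = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(best_youden(probs, labels, [0.25, 0.5, 0.75]))
```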
13. The automaton as a surgeon: the future of artificial intelligence in emergency and general surgery. Eur J Trauma Emerg Surg 2020; 47:757-762. [DOI: 10.1007/s00068-020-01444-8]
14. Can AI outperform a junior resident? Comparison of deep neural network to first-year radiology residents for identification of pneumothorax. Emerg Radiol 2020; 27:367-375. [DOI: 10.1007/s10140-020-01767-4]