1
Candemir S, Moranville R, Wong KA, Campbell W, Bigelow MT, Prevedello LM, Makary MS. Detecting and Characterizing Inferior Vena Cava Filters on Abdominal Computed Tomography with Data-Driven Computational Frameworks. J Digit Imaging 2023; 36:2507-2518. [PMID: 37770730] [PMCID: PMC10584764] [DOI: 10.1007/s10278-023-00882-1]
Abstract
Two data-driven algorithms were developed for detecting and characterizing Inferior Vena Cava (IVC) filters on abdominal computed tomography (CT) to assist healthcare providers with the appropriate management of these devices and decrease complications: one based on 2-dimensional data and transfer learning (2D + TL), and an augmented version of the same algorithm that accounts for 3-dimensional information by leveraging recurrent convolutional neural networks (3D + RCNN). The study includes 2048 abdominal CT studies obtained from 439 patients who underwent IVC filter placement during the 10-year period from January 1st, 2009, to January 1st, 2019. Among these, 399 patients had retrievable filters and 40 had non-retrievable filter types. The reference annotations for the filter location were obtained through a custom-developed interface. The ground truth annotations for the filter types were determined based on the electronic medical record and physician review of imaging. The initial stage of the framework returns a list of locations containing metallic objects based on the density of the structure. The second stage processes the candidate locations and determines which one contains an IVC filter. The final stage of the pipeline classifies the filter types as retrievable vs. non-retrievable. The computational models were trained using the TensorFlow Keras API on an Nvidia Quadro GV100 system, with a fine-tuning supervised training strategy. The system detects filter locations with high sensitivity and high confidence. The 2D + TL model achieved a sensitivity of 0.911 and a precision of 0.804, and the 3D + RCNN model achieved a sensitivity of 0.923 and a precision of 0.853 for filter detection. The system confidence for the IVC location predictions is high: 0.993 for 2D + TL and 0.996 for 3D + RCNN. The filter type prediction component achieved 0.945 sensitivity, 0.882 specificity, and a 0.97 AUC with 2D + TL, and 0.940 sensitivity, 0.927 specificity, and a 0.975 AUC with 3D + RCNN. With the intent of creating tools that improve patient outcomes, this study describes the initial phase of a computational framework to support healthcare providers in detecting patients with retained IVC filters, so that an individualized decision can be made to remove these devices when appropriate and decrease complications. To our knowledge, this is the first study that curates abdominal CT scans and presents an algorithm for automated detection and characterization of IVC filters.
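A minimal sketch of the staged approach the abstract describes, assuming axial slices in Hounsfield units: candidate metallic objects proposed by CT density, then a 2D transfer-learning classifier over the candidates. The HU threshold, patch size, and ResNet-50 backbone are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch, not the authors' implementation.
# Stage 1: propose candidate metallic objects by CT density.
# Stage 2: a 2D transfer-learning classifier decides whether a candidate is an IVC filter.
import numpy as np
import tensorflow as tf
from scipy import ndimage

METAL_HU = 2500  # assumed Hounsfield-unit threshold for metallic density


def metallic_candidates(volume_hu: np.ndarray):
    """Return centroids (z, y, x) of connected high-density components."""
    mask = volume_hu > METAL_HU
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))


def build_filter_classifier(input_shape=(128, 128, 3)):
    """2D + TL head: IVC filter vs. other metallic object (backbone choice is an assumption)."""
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = True  # fine-tuning strategy, as described in the abstract
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(base.input, out)
```

Patches cropped around each candidate centroid would feed the classifier; the same pattern extends to the retrievable vs. non-retrievable stage.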
Affiliation(s)
- Sema Candemir
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA.
- Laboratory for Augmented Intelligence in Imaging, The Ohio State University, Columbus, OH, 43210, USA.
- Robert Moranville
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Kelvin A Wong
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Laboratory for Augmented Intelligence in Imaging, The Ohio State University, Columbus, OH, 43210, USA
- Warren Campbell
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Matthew T Bigelow
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Laboratory for Augmented Intelligence in Imaging, The Ohio State University, Columbus, OH, 43210, USA
- Luciano M Prevedello
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Laboratory for Augmented Intelligence in Imaging, The Ohio State University, Columbus, OH, 43210, USA
- Mina S Makary
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
2
Kelly BS, Mathur P, Plesniar J, Lawlor A, Killeen RP. Using deep learning-derived image features in radiologic time series to make personalised predictions: proof of concept in colonic transit data. Eur Radiol 2023; 33:8376-8386. [PMID: 37284869] [PMCID: PMC10244854] [DOI: 10.1007/s00330-023-09769-9]
Abstract
OBJECTIVES Siamese neural networks (SNN) were used to classify the presence of radiopaque beads as part of a colonic transit time study (CTS). The SNN output was then used as a feature in a time series model to predict progression through a CTS. METHODS This retrospective study included all patients undergoing a CTS in a single institution from 2010 to 2020. Data were partitioned in an 80/20 train/test split. Deep learning models based on an SNN architecture were trained and tested to classify images according to the presence, absence, and number of radiopaque beads and to output the Euclidean distance between the feature representations of the input images. Time series models were used to predict the total duration of the study. RESULTS In total, 568 images from 229 patients (143 female, 62%; mean age 57) were included. For the classification of the presence of beads, the best performing model (Siamese DenseNet trained with a contrastive loss with unfrozen weights) achieved an accuracy, precision, and recall of 0.988, 0.986, and 1.0. A Gaussian process regressor (GPR) trained on the outputs of the SNN outperformed both a GPR using only the number of beads and basic statistical exponential curve fitting, with a mean absolute error of 0.9 days compared to 2.3 and 6.3 days, respectively (p < 0.05). CONCLUSIONS SNNs perform well at the identification of radiopaque beads in CTS. For time series prediction, our methods were superior to statistical models at identifying progression through the time series, enabling more accurate personalised predictions. CLINICAL RELEVANCE STATEMENT Our radiologic time series model has potential clinical application in use cases where change assessment is critical (e.g. nodule surveillance, cancer treatment response, and screening programmes) by quantifying change and using it to make more personalised predictions. KEY POINTS • Time series methods have improved, but their application to radiology lags behind computer vision. Colonic transit studies are a simple radiologic time series measuring function through serial radiographs. • We successfully employed a Siamese neural network (SNN) to compare radiographs at different points in time and then used the output of the SNN as a feature in a Gaussian process regression model to predict progression through the time series. • This novel use of features derived from a neural network on medical imaging data to predict progression has potential clinical application in more complex use cases where change assessment is critical, such as oncologic imaging, monitoring for treatment response, and screening programmes.
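A minimal sketch of the pipeline this abstract describes: a Siamese embedding network whose pairwise Euclidean distances feed a Gaussian process regressor. The DenseNet121 variant, input size, and feature layout are assumptions, and contrastive-loss training is omitted.

```python
# Illustrative sketch, not the authors' code. Contrastive-loss training is omitted;
# this shows only the SNN distance feature and the downstream GPR step.
import numpy as np
import tensorflow as tf
from sklearn.gaussian_process import GaussianProcessRegressor


def build_embedding(input_shape=(224, 224, 3)):
    """Shared embedding branch of the Siamese network (DenseNet variant is an assumption)."""
    base = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    return tf.keras.Model(base.input, x)


embed = build_embedding()


def pair_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Euclidean distance between the feature representations of two radiographs."""
    fa = embed(img_a[None, ...], training=False)
    fb = embed(img_b[None, ...], training=False)
    return float(tf.norm(fa - fb, axis=1)[0])


# Time-series step: regress study duration on SNN-derived distance features.
# X would hold one row per study time point (distances to earlier radiographs),
# y the observed days to completion. X_train/y_train are placeholder names.
gpr = GaussianProcessRegressor()
# gpr.fit(X_train, y_train)
# y_pred = gpr.predict(X_test)
```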
Affiliation(s)
- Brendan S Kelly
- Department of Radiology, St Vincent's University Hospital, Dublin, Ireland.
- Insight Centre for Data Analytics, UCD, Dublin, Ireland.
- School of Medicine, University College Dublin, Dublin, Ireland.
- Jan Plesniar
- School of Medicine, University College Dublin, Dublin, Ireland
- Ronan P Killeen
- Department of Radiology, St Vincent's University Hospital, Dublin, Ireland
- School of Medicine, University College Dublin, Dublin, Ireland
3
Guo X, Gichoya JW, Trivedi H, Purkayastha S, Banerjee I. MedShift: Automated Identification of Shift Data for Medical Image Dataset Curation. IEEE J Biomed Health Inform 2023; 27:3936-3947. [PMID: 37167055] [PMCID: PMC10513895] [DOI: 10.1109/jbhi.2023.3275104]
Abstract
Automated curation of noisy external data in the medical domain has long been in high demand, as AI technologies need to be validated using various sources with clean, annotated data. Identifying the variance between internal and external sources is a fundamental step in curating a high-quality dataset, as the data distributions from different sources can vary significantly and subsequently affect the performance of AI models. The primary challenges in detecting data shifts are (1) accessing private data across healthcare institutions for manual detection and (2) the lack of automated approaches that learn efficient shift-data representations without training samples. To overcome these problems, we propose an automated pipeline called MedShift to detect top-level shift samples and evaluate the significance of shift data without sharing data between internal and external organizations. MedShift employs unsupervised anomaly detectors to learn the internal distribution, identifies samples in external datasets that show significant shift, and then compares their performance. To quantify the effects of detected shift data, we train a multi-class classifier on internal domain knowledge and evaluate its classification performance for each class in external domains after dropping the shift data. We also propose a data quality metric to quantify the dissimilarity between internal and external datasets. We verify the efficacy of MedShift using musculoskeletal radiographs (MURA) and chest X-ray datasets from multiple external sources. Our experiments show that the proposed shift data detection pipeline can help medical centers curate high-quality datasets more efficiently.
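A minimal sketch of the shift-detection step under stated assumptions: IsolationForest stands in for the paper's unsupervised anomaly detectors, and image-level feature vectors are assumed to be precomputed; the top-fraction cutoff is illustrative.

```python
# Illustrative sketch, not the MedShift implementation. Fits an anomaly detector
# only on internal-domain features, then flags the most-shifted external samples.
import numpy as np
from sklearn.ensemble import IsolationForest


def flag_shift_samples(internal_feats: np.ndarray,
                       external_feats: np.ndarray,
                       top_frac: float = 0.1) -> np.ndarray:
    """Return indices of the most anomalous (shifted) external samples."""
    detector = IsolationForest(random_state=0).fit(internal_feats)
    shift_scores = -detector.score_samples(external_feats)  # higher = more shifted
    n_flag = int(np.ceil(top_frac * len(external_feats)))
    return np.argsort(shift_scores)[::-1][:n_flag]
```

Dropping the flagged samples before training or evaluating a downstream classifier mirrors the quantification step the abstract describes.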
4
Mongan J, Kohli MD, Houshyar R, Chang PD, Glavis-Bloom J, Taylor AG. Automated detection of IVC filters on radiographs with deep convolutional neural networks. Abdom Radiol (NY) 2023; 48:758-764. [PMID: 36371471] [PMCID: PMC9902407] [DOI: 10.1007/s00261-022-03734-8]
Abstract
PURPOSE To create an algorithm able to accurately detect IVC filters on radiographs without human assistance, capable of being used to screen radiographs to identify patients needing IVC filter retrieval. METHODS A primary dataset of 5225 images, 30% of which included IVC filters, was assembled and annotated. 85% of the data was used to train a Cascade R-CNN (Region Based Convolutional Neural Network) object detection network incorporating a pre-trained ResNet-50 backbone. The remaining 15% of the data, independently annotated by three radiologists, was used as a test set to assess performance. The algorithm was also assessed on an independently constructed 1424-image dataset drawn from a different institution than the primary dataset. RESULTS On the primary test set, the algorithm achieved a sensitivity of 96.2% (95% CI 92.7-98.1%) and a specificity of 98.9% (95% CI 97.4-99.5%). Results were similar on the external test set: sensitivity 97.9% (95% CI 96.2-98.9%), specificity 99.6% (95% CI 98.9-99.9%). CONCLUSION Fully automated detection of IVC filters on radiographs, with the high sensitivity and excellent specificity required for an automated screening system, can be achieved using object detection neural networks. Further work will develop a system for identifying patients for IVC filter retrieval based on this algorithm.
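A hedged sketch of how such a detector could be run at screening time with MMDetection, one open-source Cascade R-CNN implementation; the paper does not name its framework, and the config/checkpoint paths, score threshold, and 2.x result format are assumptions.

```python
# Illustrative sketch, not the authors' code. Assumes MMDetection 2.x,
# a single 'IVC filter' class, and locally available config/checkpoint files.
from mmdet.apis import init_detector, inference_detector

CONFIG = "configs/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py"  # assumed path
CHECKPOINT = "work_dirs/cascade_rcnn_r50_ivcf.pth"               # assumed path

model = init_detector(CONFIG, CHECKPOINT, device="cuda:0")


def radiograph_has_filter(image_path: str, score_thr: float = 0.5) -> bool:
    """Flag a radiograph as IVC-filter-positive if any detection clears the threshold."""
    result = inference_detector(model, image_path)
    boxes = result[0]  # MMDetection 2.x: (N, 5) array per class, last column = score
    return bool((boxes[:, 4] >= score_thr).any())
```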
Affiliation(s)
- John Mongan
- Department of Radiology and Biomedical Imaging, Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Avenue, San Francisco, CA, 94143-0628, USA.
- Marc D. Kohli
- Department of Radiology and Biomedical Imaging, Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Avenue, San Francisco, CA 94143-0628 USA
- Roozbeh Houshyar
- Department of Radiological Sciences, Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Irvine, USA
- Peter D. Chang
- Department of Radiological Sciences, Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Irvine, USA
- Justin Glavis-Bloom
- Department of Radiological Sciences, Center for Artificial Intelligence in Diagnostic Medicine, University of California Irvine, Irvine, USA
- Andrew G. Taylor
- Department of Radiology and Biomedical Imaging, Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Avenue, San Francisco, CA 94143-0628 USA
5
Gomes R, Kamrowski C, Mohan PD, Senor C, Langlois J, Wildenberg J. Application of Deep Learning to IVC Filter Detection from CT Scans. Diagnostics (Basel) 2022; 12:2475. [PMID: 36292164] [PMCID: PMC9600884] [DOI: 10.3390/diagnostics12102475]
Abstract
IVC filters (IVCF) perform an important function in select patients who have venous blood clots. However, they are usually intended to be temporary, and a significant delay in removal can have negative health consequences for the patient. Currently, all Interventional Radiology (IR) practices are tasked with tracking patients in whom IVCF are placed. Due to their small size and location deep within the abdomen, it is common for patients to forget that they have an IVCF, so there can be a significant delay before a new healthcare provider becomes aware of the presence of a filter. Patients may have an abdominopelvic CT scan for many reasons and, fortunately, IVCF are clearly visible on these scans. In this research, a deep learning model capable of segmenting IVCF from axial CT scan slices is developed. The model achieved a Dice score of 0.82 when trained on 372 CT scan slices. The segmentation model is then integrated with a prediction algorithm capable of flagging an entire CT scan as containing an IVCF. The prediction algorithm utilizing the segmentation model achieved 92.22% accuracy in detecting IVCF in the scans.
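A minimal sketch of the evaluation metric and the scan-level flagging idea described above; the pixel and slice thresholds are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch, not the authors' code: Dice score for a slice-level
# segmentation and a simple rule that flags a scan as containing an IVCF
# when enough slices show a sizable predicted filter region.
import numpy as np


def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * intersection + eps) / (pred_mask.sum() + true_mask.sum() + eps)


def scan_has_ivcf(slice_masks, min_pixels: int = 50, min_slices: int = 3) -> bool:
    """Flag the scan if at least `min_slices` slices contain a sizable predicted region."""
    positive_slices = sum(int(m.sum() >= min_pixels) for m in slice_masks)
    return positive_slices >= min_slices
```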
Affiliation(s)
- Rahul Gomes
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Correspondence: (R.G.); (J.W.)
- Connor Kamrowski
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Pavithra Devy Mohan
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Cameron Senor
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Jordan Langlois
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Joseph Wildenberg
- Interventional Radiology, Mayo Clinic Health System, Eau Claire, WI 54703, USA
- Correspondence: (R.G.); (J.W.)
6
Artificial Intelligence Evidence-Based Current Status and Potential for Lower Limb Vascular Management. J Pers Med 2021; 11:1280. [PMID: 34945749] [PMCID: PMC8705683] [DOI: 10.3390/jpm11121280]
Abstract
Consultation prioritization is fundamental to optimal healthcare management, and its performance can be aided by dedicated artificial intelligence (AI) software and by digital medicine in general. The need for remote consultation has been demonstrated not only during pandemic-induced lockdowns but also in rural settings where access to health centers is constantly limited. The term “AI” indicates the use of a computer to simulate human intellectual behavior with minimal human intervention. AI is based on a “machine learning” process or on an artificial neural network. AI provides accurate diagnostic algorithms and personalized treatments in many fields, including oncology, ophthalmology, traumatology, and dermatology. AI can help vascular specialists in the diagnosis of peripheral artery disease, cerebrovascular disease, and deep vein thrombosis by analyzing contrast-enhanced magnetic resonance imaging or ultrasound data, and in the diagnosis of pulmonary embolism on multi-slice computed tomography angiograms. Automatic methods based on AI may be applied to detect the presence and determine the clinical class of chronic venous disease. Nevertheless, data on the use of AI in this field are still scarce. In this narrative review, the authors discuss available data on AI implementation in arterial and venous disease diagnostics and care.
7
Artificial intelligence, machine learning, vascular surgery, automatic image processing. Implications for clinical practice. ANGIOLOGIA 2020. [DOI: 10.20960/angiologia.00177]