1. Rasenberg DWM, Ramaekers M, Jacobs I, Pluyter JR, Geurts LJF, Yu B, van der Ven JCP, Nederend J, de Hingh IHJT, Bonsing BA, Vahrmeijer AL, van der Harst E, den Dulk M, van Dam RM, Groot Koerkamp B, Erdmann JI, Daams F, Busch OR, Besselink MG, te Riele WW, Reinhard R, Jansen FW, Dankelman J, Mieog JSD, Luyer MDP. Computer-Aided Decision Support and 3D Models in Pancreatic Cancer Surgery: A Pilot Study. J Clin Med 2025;14:1567. [PMID: 40099616] [PMCID: PMC11899912] [DOI: 10.3390/jcm14051567]
Abstract
Background: Preoperative planning for patients diagnosed with pancreatic head cancer is difficult and requires specific expertise. This pilot study assesses the added value of three-dimensional (3D) patient models and computer-aided detection (CAD) algorithms in determining the resectability of pancreatic head tumors. Methods: This study included 14 hepatopancreatobiliary experts from eight hospitals. The participants assessed three radiologically resectable and three radiologically borderline resectable cases in a simulated setting using a crossover design. Groups were divided into controls (using a CT scan), a 3D group (using a CT scan and 3D models), and a CAD group (using a CT scan, 3D models, and CAD). The perceived fulfillment of preoperative needs and the quality of and confidence in clinical decision-making were evaluated. Results: A higher perceived ability to determine the degree and length of tumor-vessel contact was reported in the CAD group compared to controls (p = 0.022 and p = 0.003, respectively). Lower degrees of tumor-vessel contact were predicted for radiologically borderline resectable tumors in the CAD group compared to controls (p = 0.037). Higher confidence levels were observed in predicting the need for vascular resection in the 3D group compared to controls (p = 0.033) for all cases combined. Conclusions: CAD (including 3D) improved experts' perceived ability to accurately assess vessel involvement and supports the development of evolving techniques that may enhance the diagnosis and treatment of pancreatic cancer.
Affiliation(s)
- Diederik W. M. Rasenberg
  - Faculty of BioMechanical Engineering, Delft University of Technology, 2628 CE Delft, The Netherlands
  - Department of Experience Design, Philips, 5656 AE Eindhoven, The Netherlands
- Mark Ramaekers
  - Department of Surgery, Catharina Hospital, 5623 EJ Eindhoven, The Netherlands
- Igor Jacobs
  - Department of Hospital Services & Informatics, Philips Research, 5656 AE Eindhoven, The Netherlands
- Jon R. Pluyter
  - Department of Experience Design, Philips, 5656 AE Eindhoven, The Netherlands
- Luc J. F. Geurts
  - Department of Experience Design, Philips, 5656 AE Eindhoven, The Netherlands
- Bin Yu
  - Department of Experience Design, Philips, 5656 AE Eindhoven, The Netherlands
- John C. P. van der Ven
  - Department of Hospital Services & Informatics, Philips Research, 5656 AE Eindhoven, The Netherlands
- Joost Nederend
  - Department of Radiology, Catharina Hospital, 5623 EJ Eindhoven, The Netherlands
- Ignace H. J. T. de Hingh
  - Department of Surgery, Catharina Hospital, 5623 EJ Eindhoven, The Netherlands
  - GROW—School for Oncology and Developmental Biology, Maastricht University, 6229 ER Maastricht, The Netherlands
- Bert A. Bonsing
  - Department of Surgery, Leiden University Medical Centre, 2300 RC Leiden, The Netherlands
- Alexander L. Vahrmeijer
  - Department of Surgery, Leiden University Medical Centre, 2300 RC Leiden, The Netherlands
- Erwin van der Harst
  - Department of Surgery, Maasstad Hospital, 3079 DZ Rotterdam, The Netherlands
- Marcel den Dulk
  - Department of Surgery, Maastricht University Medical Centre+, 6229 ER Maastricht, The Netherlands
- Ronald M. van Dam
  - Department of Surgery, Maastricht University Medical Centre+, 6229 ER Maastricht, The Netherlands
- Bas Groot Koerkamp
  - Department of Surgery, Erasmus Medical Centre, 3015 GD Rotterdam, The Netherlands
- Joris I. Erdmann
  - Department of Surgery, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- Freek Daams
  - Department of Surgery, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- Olivier R. Busch
  - Department of Surgery, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- Marc G. Besselink
  - Department of Surgery, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- Wouter W. te Riele
  - Department of Surgery, St. Antonius Hospital, 3435 CM Nieuwegein, The Netherlands
- Rinze Reinhard
  - Department of Radiology, Onze Lieve Vrouwe Gasthuis (loc. West), 1091 AC Amsterdam, The Netherlands
- Frank Willem Jansen
  - Faculty of BioMechanical Engineering, Delft University of Technology, 2628 CE Delft, The Netherlands
  - Department of Surgery, Leiden University Medical Center, 2333 ZA Leiden, The Netherlands
- Jenny Dankelman
  - Faculty of BioMechanical Engineering, Delft University of Technology, 2628 CE Delft, The Netherlands
- J. Sven D. Mieog
  - Department of Surgery, Leiden University Medical Centre, 2300 RC Leiden, The Netherlands
- Misha D. P. Luyer
  - Department of Surgery, Catharina Hospital, 5623 EJ Eindhoven, The Netherlands
  - Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
2. Mathys A, Pollet Y, Gressin A, Muth X, Brecko J, Dekoninck W, Vandenspiegel D, Jodogne S, Semal P. Sphaeroptica: A tool for pseudo-3D visualization and 3D measurements on arthropods. PLoS One 2024;19:e0311887. [PMID: 39441808] [PMCID: PMC11498740] [DOI: 10.1371/journal.pone.0311887]
Abstract
Natural history collections are invaluable reference collections. Digitizing these collections is a transformative process that improves the accessibility, preservation, and exploitation of specimens and associated data in the long term. Arthropods make up the majority of zoological collections. However, arthropods are small, have detailed color textures, and share small, complex, and shiny structures, which poses a challenge to conventional digitization methods. Sphaeroptica is a multi-image viewer that uses a sphere of oriented images. It allows the visualization of insects including their tiniest features, the positioning of landmarks, and the extraction of 3D coordinates for measuring linear distances or for use in geometric morphometrics analyses. Quantitative comparisons show that the measurements obtained with Sphaeroptica are similar to those derived from 3D μCT models, with an average difference below 1%, while featuring the high resolution of color-stacked pictures with all details such as setae, chaetae, scales, and other small and/or complex structures. Sphaeroptica was developed for the digitization of small arthropods, but it can be used with any sphere of aligned images resulting from the digitization of objects or specimens with complex surfaces and shiny, black, or translucent textures, which cannot easily be digitized using structured-light scanning or Structure-from-Motion (SfM) photogrammetry.
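The landmark-based measurement workflow the abstract describes (3D coordinates extracted from oriented images, then linear distances compared against μCT-derived values) can be sketched as follows; the coordinate values and the sub-1% comparison are illustrative, not data from the paper:

```python
import math

def distance(p, q):
    """Euclidean distance between two 3D landmark coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def relative_difference(measured, reference):
    """Fractional difference of a measurement against a reference value."""
    return abs(measured - reference) / reference

# Hypothetical landmark pairs (mm): viewer-derived vs. micro-CT-derived.
sph_mm = distance((0.0, 0.0, 0.0), (3.02, 0.0, 0.0))
uct_mm = distance((0.0, 0.0, 0.0), (3.00, 0.0, 0.0))
rel = relative_difference(sph_mm, uct_mm)  # fractional difference < 1%
```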
Affiliation(s)
- Aurore Mathys
  - Scientific Service of Heritage, Royal Belgian Institute of Natural Sciences, Brussels, Belgium
  - Collections Management, Royal Museum for Central Africa, Tervuren, Belgium
  - Documentation, Interpretation & VAlorisation of Heritage, ULiège, Liège, Belgium
- Yann Pollet
  - Scientific Service of Heritage, Royal Belgian Institute of Natural Sciences, Brussels, Belgium
- Adrien Gressin
  - School of Engineering and Management Vaud, HES-SO University of Applied Sciences and Arts Western Switzerland, Yverdon-les-Bains, Switzerland
- Xavier Muth
  - School of Engineering and Management Vaud, HES-SO University of Applied Sciences and Arts Western Switzerland, Yverdon-les-Bains, Switzerland
- Jonathan Brecko
  - Scientific Service of Heritage, Royal Belgian Institute of Natural Sciences, Brussels, Belgium
  - Collections Management, Royal Museum for Central Africa, Tervuren, Belgium
- Wouter Dekoninck
  - Scientific Service of Heritage, Royal Belgian Institute of Natural Sciences, Brussels, Belgium
- Sébastien Jodogne
  - Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Louvain-la-Neuve, Belgium
- Patrick Semal
  - Scientific Service of Heritage, Royal Belgian Institute of Natural Sciences, Brussels, Belgium
3. Kluckert J, Hötker AM, Da Mutten R, Konukoglu E, Donati OF. AI-based automated evaluation of image quality and protocol tailoring in patients undergoing MRI for suspected prostate cancer. Eur J Radiol 2024;177:111581. [PMID: 38925042] [DOI: 10.1016/j.ejrad.2024.111581]
Abstract
PURPOSE To develop and validate an artificial intelligence (AI) application in a clinical setting to decide whether dynamic contrast-enhanced (DCE) sequences are necessary in multiparametric prostate MRI. METHODS This study was approved by the institutional review board, and the requirement for study-specific informed consent was waived. A mobile app was developed to integrate AI-based image quality analysis into the clinical workflow. An expert radiologist provided reference decisions. Diagnostic performance parameters (sensitivity and specificity) were calculated and inter-reader agreement was evaluated. RESULTS Fully automated evaluation was possible in 87% of cases, with the application reaching a sensitivity of 80% and a specificity of 100% in selecting patients for multiparametric MRI. In 2% of patients, the application falsely decided on omitting DCE. Compared with a technician (sensitivity 29%, specificity 98%) and resident radiologists (sensitivity 29%, specificity 93%), use of the application allowed a significant increase in sensitivity. CONCLUSION The presented AI application accurately decides on a patient-specific MRI protocol based on image quality analysis, potentially allowing omission of DCE in the diagnostic workup of patients with suspected prostate cancer. This could streamline workflow and optimize the time utilization of healthcare professionals.
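The diagnostic performance parameters reported here reduce to standard confusion-matrix arithmetic; a minimal sketch, with hypothetical counts chosen only to reproduce the headline 80%/100% figures (the study's raw counts are not given in the abstract):

```python
def sensitivity(tp, fn):
    """True-positive rate: correctly flagged cases among all positives."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly cleared cases among all negatives."""
    return tn / (tn + fp)

# Illustrative counts only, not the study's data.
app_sens = sensitivity(tp=8, fn=2)    # 0.8
app_spec = specificity(tn=50, fp=0)   # 1.0
```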
Affiliation(s)
- Jonas Kluckert
  - Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
- Andreas M Hötker
  - Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
- Raffaele Da Mutten
  - Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
- Ender Konukoglu
  - Computer Vision Laboratory, Department of Information Technology and Electrical Engineering, ETH Zurich, Sternwartstrasse 7, 8092 Zurich, Switzerland
- Olivio F Donati
  - Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, 8091 Zurich, Switzerland
  - Radiology Octorad / Hirslanden, Witellikerstrasse 40, 8032 Zurich, Switzerland
4. Peltonen JI, Honkanen AP, Kortesniemi M. Quality assurance framework for rapid automatic analysis deployment in medical imaging. Phys Med 2023;116:103173. [PMID: 38000100] [DOI: 10.1016/j.ejmp.2023.103173]
Abstract
PURPOSE Automatic image analysis algorithms play an increasing role in clinical quality assurance (QA) in medical imaging. Although the implementation of QA calculation algorithms may be straightforward at the development level, actual deployment of a new method to clinical routine may require substantial additional effort from supporting services. We sought to develop a multimodal system that enables rapid implementation of new QA analysis methods in clinical practice. METHODS The QA system was built using freely available open-source software libraries. The included features were a results database, a database interface, an interactive user interface, an e-mail error dispatcher, a data-processing backend, and a DICOM server. An in-house database interface was built, providing the developers of analyses with simple access to the results database. An open-source DICOM server was used for image traffic and automatic initiation of modality-specific QA image analyses. RESULTS The QA framework enabled rapid adaptation of new analysis methods to automatic image-processing workflows. The system provided online data review via an easily accessible user interface. In case of deviations, the system supported simultaneous review of the results by the user and a QA expert to trigger corrective actions. In particular, embedded error thresholds, trend analyses, and error-feedback channels were provided to facilitate continuous monitoring and to enable pre-emptive corrective actions. CONCLUSION An effective and novel QA framework incorporating easy adaptation and scalability to automated image analysis methods was developed. The framework provides an efficient and responsive web-based tool to manage the normal operation, trends, errors, and abnormalities in medical image quality.
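The embedded error thresholds and trend analyses mentioned in the results can be illustrated with a small sketch; the threshold bounds, window size, and drift tolerance below are invented for illustration and are not taken from the paper:

```python
from statistics import mean

def within_thresholds(value, lower, upper):
    """Flag a single QA measurement against embedded error thresholds."""
    return lower <= value <= upper

def drifting(history, window=5, tolerance=0.05):
    """Crude trend check: has the mean of the most recent `window` results
    drifted more than `tolerance` (fractional) from the baseline mean of
    the first `window` results?"""
    baseline = mean(history[:window])
    recent = mean(history[-window:])
    return abs(recent - baseline) / baseline > tolerance

ok = within_thresholds(1.02, lower=0.9, upper=1.1)
drift = drifting([1.0] * 5 + [1.2] * 5)  # baseline 1.0 -> recent 1.2: drift
```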
Affiliation(s)
- Juha I Peltonen
  - HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Ari-Pekka Honkanen
  - HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Mika Kortesniemi
  - HUS Medical Imaging Center, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
5. Roberts M, Hinton G, Wells AJ, Van Der Veken J, Bajger M, Lee G, Liu Y, Chong C, Poonnoose S, Agzarian M, To MS. Imaging evaluation of a proposed 3D generative model for MRI to CT translation in the lumbar spine. Spine J 2023;23:1602-1612. [PMID: 37479140] [DOI: 10.1016/j.spinee.2023.06.399]
Abstract
BACKGROUND CONTEXT Computed tomography (CT) and magnetic resonance imaging (MRI) are used routinely in the radiologic evaluation and surgical planning of patients with lumbar spine pathology, with the two modalities being complementary. We have developed a deep learning algorithm which can produce 3D lumbar spine CT images from MRI data alone. This has the potential to reduce radiation to the patient as well as the burden on the health care system. PURPOSE The purpose of this study is to evaluate the accuracy of the synthetic lumbar spine CT images produced using our deep learning model. STUDY DESIGN A training set of 400 unpaired CTs and 400 unpaired MRI scans of the lumbar spine was used to train a supervised 3D CycleGAN model. Evaluators performed a set of clinically relevant measurements on 20 matched synthetic CTs and true CTs. These measurements were then compared to assess the accuracy of the synthetic CTs. PATIENT SAMPLE The evaluation data set consisted of 20 patients who had CT and MRI scans performed within a 30-day period of each other. All patient data were deidentified. Notable exclusions included artefact from patient motion, metallic implants, or any intervention performed in the 30-day intervening period. OUTCOME MEASURES The outcome measured was the mean difference in measurements performed by the group of evaluators between real and synthetic CTs, in terms of absolute and relative error. METHODS Data from the 20 MRI scans were supplied to our deep learning model, which produced 20 "synthetic CT" scans. This formed the evaluation data set. Four clinical evaluators consisting of neurosurgeons and radiologists performed a set of 24 clinically relevant measurements on matched synthetic and true CTs in 20 patients. A test set of measurements was performed prior to commencing data collection to identify any significant interobserver variation in measurement technique.
RESULTS The measurements performed in the sagittal plane were all within 10% relative error, with the majority within 5% relative error. The pedicle measurements performed in the axial plane were considerably less accurate, with a relative error of up to 34%. CONCLUSIONS The computer-generated synthetic CTs demonstrated a high level of accuracy for the measurements performed in-plane to the original MRIs used for synthesis. The measurements performed on the axial reconstructed images were less accurate, attributable to the images being synthesized from nonvolumetric routine sagittal T1-weighted MRI sequences. It is hypothesized that if axial sequences or volumetric data were input into the algorithm, these measurements would have improved accuracy.
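The outcome measure (absolute and relative error between real-CT and synthetic-CT measurements) can be sketched in a few lines; the paired values are hypothetical, with the third pair mimicking the less accurate axial pedicle measurements:

```python
def relative_error(synthetic, reference):
    """Relative error of a synthetic-CT measurement against the true CT."""
    return abs(synthetic - reference) / abs(reference)

# Hypothetical paired measurements in mm as (synthetic CT, true CT).
pairs = [(41.5, 40.0), (11.6, 12.0), (7.8, 6.0)]
errors = [relative_error(s, r) for s, r in pairs]
within_10_percent = [e <= 0.10 for e in errors]
```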
Affiliation(s)
- Makenze Roberts
  - South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
- George Hinton
  - South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
- Adam J Wells
  - Department of Neurosurgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
  - Faculty of Health and Medical Sciences, University of Adelaide, Adelaide, South Australia, Australia
- Jorn Van Der Veken
  - Department of Neurosurgery, Flinders Medical Centre, Adelaide, South Australia, Australia
- Mariusz Bajger
  - College of Science and Engineering, Flinders University, Adelaide, South Australia, Australia
- Gobert Lee
  - College of Science and Engineering, Flinders University, Adelaide, South Australia, Australia
- Yifan Liu
  - The Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Chee Chong
  - South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
- Santosh Poonnoose
  - Department of Neurosurgery, Flinders Medical Centre, Adelaide, South Australia, Australia
- Marc Agzarian
  - South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
- Minh-Son To
  - South Australia Medical Imaging, Flinders Medical Centre, Adelaide, South Australia, Australia
  - The Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
  - Flinders Health and Medical Research Institute, Flinders University, Adelaide, South Australia, Australia
6. Prince E, Hankinson TC, Görg C. EASL: A Framework for Designing, Implementing, and Evaluating ML Solutions in Clinical Healthcare Settings. Proceedings of Machine Learning Research 2023;219:612-630. [PMID: 38988337] [PMCID: PMC11235083]
Abstract
We introduce the Explainable Analytical Systems Lab (EASL) framework, an end-to-end solution designed to facilitate the development, implementation, and evaluation of clinical machine learning (ML) tools. EASL is highly versatile and applicable to a variety of contexts and includes resources for data management, ML model development, visualization and user interface development, service hosting, and usage analytics. To demonstrate its practical applications, we present the EASL framework in the context of a case study: designing and evaluating a deep learning classifier to predict diagnoses from medical imaging. The framework is composed of three modules, each with its own set of resources. The Workbench module stores data and supports initial ML model development, the Canvas module contains a medical imaging viewer and web development framework, and the Studio module hosts the ML model and provides web analytics and support for conducting user studies. EASL encourages model developers to take a holistic view by integrating model development, implementation, and evaluation into one framework, and thus ensures that models are both effective and reliable when used in a clinical setting. EASL contributes to our understanding of machine learning applied to healthcare by providing a comprehensive framework that makes it easier to develop and evaluate ML tools within a clinical setting.
Affiliation(s)
- Eric Prince
  - Computational Bioscience Program, Morgan Adams Foundation for Pediatric Brain Tumor Research Program, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
- Todd C Hankinson
  - Department of Neurosurgery, Morgan Adams Foundation for Pediatric Brain Tumor Research Program, Children's Hospital Colorado, Aurora, Colorado, USA
- Carsten Görg
  - Department of Biostatistics & Informatics, Morgan Adams Foundation for Pediatric Brain Tumor Research Program, Colorado School of Public Health, Aurora, Colorado, USA
7. Li XT, Allen JW, Hu R. Implementation of Automated Pipeline for Resting-State fMRI Analysis with PACS Integration. J Digit Imaging 2023;36:1189-1197. [PMID: 36596936] [PMCID: PMC10287855] [DOI: 10.1007/s10278-022-00758-w]
Abstract
In recent years, the quantity and complexity of medical imaging acquisition and processing have increased tremendously. The explosion in volume and need for advanced imaging analysis have led to the creation of numerous software programs, which have begun to be incorporated into clinical practice for indications such as automated stroke assessment, brain tumor perfusion processing, and hippocampal volume analysis. Despite these advances, there remains a need for specialized, custom-built software for advanced algorithms and new areas of research that is not widely available or adequately integrated in these "out-of-the-box" solutions. The purpose of this paper is to describe the implementation of an image-processing pipeline that is versatile and simple to create, which allows for rapid prototyping of image analysis algorithms and subsequent testing in a clinical environment. This pipeline uses a combination of Orthanc server, custom MATLAB code, and publicly available FMRIB Software Library and RestNeuMap tools to automatically receive and analyze resting-state functional MRI data collected from a custom filter on the MR scanner output. The processed files are then sent directly to Picture Archiving and Communications System (PACS) without the need for user input. This initial experience can serve as a framework for those interested in simple implementation of an automated pipeline customized to clinical needs.
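A scanner-side filter that admits only resting-state acquisitions into such a pipeline might look like the sketch below; the keyword matching on the series description is an assumption for illustration, as the paper's actual filter criteria are not reproduced in the abstract:

```python
def is_resting_state(series_description):
    """Decide whether an incoming series should enter the rs-fMRI pipeline.
    Matching keywords are illustrative, not the authors' actual criteria."""
    desc = series_description.lower()
    return "rest" in desc and ("fmri" in desc or "bold" in desc)

accepted = is_resting_state("REST_fMRI_run1")   # routed to processing
rejected = is_resting_state("T1_MPRAGE_sag")    # ignored by the pipeline
```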
Affiliation(s)
- Xiao T Li
  - Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
- Jason W Allen
  - Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
  - Department of Neurology, Emory University Hospital, Atlanta, GA, USA
- Ranliang Hu
  - Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
8. Gorman C, Punzo D, Octaviano I, Pieper S, Longabaugh WJR, Clunie DA, Kikinis R, Fedorov AY, Herrmann MD. Interoperable slide microscopy viewer and annotation tool for imaging data science and computational pathology. Nat Commun 2023;14:1572. [PMID: 36949078] [PMCID: PMC10033920] [DOI: 10.1038/s41467-023-37224-2]
Abstract
The exchange of large and complex slide microscopy imaging data in biomedical research and pathology practice is impeded by a lack of data standardization and interoperability, which is detrimental to the reproducibility of scientific findings and clinical integration of technological innovations. We introduce Slim, an open-source, web-based slide microscopy viewer that implements the internationally accepted Digital Imaging and Communications in Medicine (DICOM) standard to achieve interoperability with a multitude of existing medical imaging systems. We showcase the capabilities of Slim as the slide microscopy viewer of the NCI Imaging Data Commons and demonstrate how the viewer enables interactive visualization of traditional brightfield microscopy and highly-multiplexed immunofluorescence microscopy images from The Cancer Genome Atlas and Human Tissue Atlas Network, respectively, using standard DICOMweb services. We further show how Slim enables the collection of standardized image annotations for the development or validation of machine learning models and the visual interpretation of model inference results in the form of segmentation masks, spatial heat maps, or image-derived measurements.
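The standard DICOMweb services Slim relies on are plain HTTP; for example, a QIDO-RS study search restricted to slide microscopy (DICOM modality code SM) is just a URL. The base endpoint below is a placeholder, not a real server:

```python
from urllib.parse import urlencode

def qido_studies_url(base, **filters):
    """Build a DICOMweb QIDO-RS study-search URL from attribute filters."""
    query = urlencode(filters)
    return f"{base}/studies?{query}" if query else f"{base}/studies"

# Search an archive for studies containing slide microscopy images.
url = qido_studies_url("https://dicomweb.example.org", ModalitiesInStudy="SM")
```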
Affiliation(s)
- Chris Gorman
  - Department of Pathology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Ron Kikinis
  - Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Andrey Y Fedorov
  - Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Markus D Herrmann
  - Department of Pathology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
9. Jahn F, Ammenwerth E, Dornauer V, Höffner K, Bindel M, Karopka T, Winter A. A Linked Open Data-Based Terminology to Describe Libre/Free and Open-source Software: Incremental Development Study. JMIR Med Inform 2023;11:e38861. [PMID: 36662569] [PMCID: PMC9898829] [DOI: 10.2196/38861]
Abstract
BACKGROUND There is a variety of libre/free and open-source software (LIFOSS) products for medicine and health care. To help health care and IT professionals select an appropriate software product for given tasks, several comparison studies and web platforms, such as Medfloss.org, are available. However, due to the lack of a uniform terminology for health informatics, ambiguous or imprecise terms are used to describe the functionalities of LIFOSS. This makes comparisons of LIFOSS difficult and may lead to inappropriate software selection decisions. Using Linked Open Data (LOD) promises to address these challenges. OBJECTIVE We describe LIFOSS systematically with the help of the underlying Health Information Technology Ontology (HITO). We publish HITO and HITO-based software product descriptions using LOD to obtain the following benefits: (1) linking and reusing existing terminologies and (2) using Semantic Web tools for viewing and querying the LIFOSS data on the World Wide Web. METHODS HITO was incrementally developed and implemented. First, classes for the description of software products in health IT evaluation studies were identified. Second, requirements for describing LIFOSS were elicited by interviewing domain experts. Third, to describe domain-specific functionalities of software products, existing catalogues of features and enterprise functions were analyzed and integrated into the HITO knowledge base. As a proof of concept, HITO was used to describe 25 LIFOSS products. RESULTS HITO provides a defined set of classes and their relationships to describe LIFOSS in medicine and health care. With the help of linked or integrated catalogues for languages, programming languages, licenses, features, and enterprise functions, the functionalities of LIFOSS can be precisely described and compared. We publish HITO and the LIFOSS descriptions as LOD; they can be queried and viewed using different Semantic Web tools, such as Resource Description Framework (RDF) browsers, SPARQL Protocol and RDF Query Language (SPARQL) queries, and faceted searches. The advantages of providing HITO as LOD are demonstrated by practical examples. CONCLUSIONS HITO is a building block to achieving unambiguous communication among health IT professionals and researchers. Providing LIFOSS product information as LOD enables barrier-free and easy access to data that are often hidden in user manuals of software products or are not available at all. Efforts to establish a unique terminology of medical and health informatics should be further supported and continued.
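A SPARQL query against such LOD data might look like the sketch below; the namespace, class, and property names here are illustrative assumptions and may differ from the actual HITO ontology:

```python
# Illustrative prefixes only; HITO's real IRIs may differ.
PREFIXES = (
    "PREFIX hito: <http://hitontology.eu/ontology/>\n"
    "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n"
)

def products_with_feature(feature_label):
    """Build a SPARQL query for software products offering a given feature."""
    return PREFIXES + (
        "SELECT ?product ?label WHERE {\n"
        "  ?product a hito:SoftwareProduct ;\n"
        "           rdfs:label ?label ;\n"
        "           hito:offersFeature ?feature .\n"
        f'  ?feature rdfs:label "{feature_label}"@en .\n'
        "}"
    )

query = products_with_feature("DICOM support")
```

Such a query could then be posted to a SPARQL endpoint or run locally with an RDF toolkit.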
Affiliation(s)
- Franziska Jahn
  - Institute of Medical Informatics, Statistics and Epidemiology, Faculty of Medicine, Leipzig University, Leipzig, Germany
- Elske Ammenwerth
  - Institute of Medical Informatics, University for Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria
- Verena Dornauer
  - Institute of Medical Informatics, University for Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria
- Konrad Höffner
  - Institute of Medical Informatics, Statistics and Epidemiology, Faculty of Medicine, Leipzig University, Leipzig, Germany
- Michelle Bindel
  - Institute of Medical Informatics, University for Health Sciences, Medical Informatics and Technology, Hall in Tirol, Austria
- Alfred Winter
  - Institute of Medical Informatics, Statistics and Epidemiology, Faculty of Medicine, Leipzig University, Leipzig, Germany
10. Kawa J, Pyciński B, Smoliński M, Bożek P, Kwasecki M, Pietrzyk B, Szymański D. Design and Implementation of a Cloud PACS Architecture. Sensors (Basel) 2022;22:8569. [PMID: 36366266] [PMCID: PMC9654824] [DOI: 10.3390/s22218569]
Abstract
The limitations of the classic PACS (picture archiving and communication system), such as the backward-compatible DICOM network architecture and poor security and maintenance, are well known. They are challenged by various existing solutions employing cloud-related patterns and services. However, a full-scale cloud-native PACS has not yet been demonstrated. The paper introduces a vendor-neutral cloud PACS architecture. It is divided into two main components: a cloud platform and an access device. The cloud platform is responsible for the nearline (long-term) image archive, data flow, and backend management. It operates in multi-tenant mode. The access device is responsible for the local DICOM (Digital Imaging and Communications in Medicine) interface and serves as a gateway to cloud services. The cloud PACS was first implemented in an Amazon Web Services environment. It employs a number of general-purpose services designed or adapted for a cloud environment, including Kafka, OpenSearch, and Memcached. Custom services, such as a central PACS node, queue manager, or flow worker, also developed as cloud microservices, bring DICOM support, external integration, and a management layer. The PACS was verified using image traffic from, among others, computed tomography (CT), magnetic resonance (MR), and computed radiography (CR) modalities. During the test, the system was reliably storing and accessing image data. In subsequent tests, differences in scaling behavior between the monolithic dcm4chee server and the proposed solution are shown. The growing number of parallel connections did not influence the monolithic server's overall throughput, whereas the performance of the cloud PACS noticeably increased. In the final test, different retrieval patterns were evaluated to assess performance under different scenarios. The current production environment stores over 450 TB of image data and handles over 4000 DICOM nodes.
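The queue manager and flow worker among the custom services suggest a familiar dispatch pattern; a minimal single-process sketch, with the task shape and type names invented for illustration:

```python
import queue

def flow_worker(tasks, handlers):
    """Drain a task queue and dispatch each message to a handler chosen by
    task type, as a queue-manager/flow-worker pair might. Task fields and
    type names are illustrative, not the paper's actual message schema."""
    results = []
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            break
        results.append(handlers[task["type"]](task))
    return results

tasks = queue.Queue()
tasks.put({"type": "store", "sop_uid": "1.2.3"})
tasks.put({"type": "forward", "sop_uid": "1.2.4"})
handlers = {
    "store": lambda t: ("stored", t["sop_uid"]),
    "forward": lambda t: ("forwarded", t["sop_uid"]),
}
done = flow_worker(tasks, handlers)
```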
Affiliation(s)
- Jacek Kawa
- Radpoint Sp. z o.o., Ceglana 35, 40-514 Katowice, Poland
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Bartłomiej Pyciński
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40, 41-800 Zabrze, Poland
- Paweł Bożek
- Radpoint Sp. z o.o., Ceglana 35, 40-514 Katowice, Poland
- Department of Radiology and Radiodiagnostics in Zabrze, Medical University of Silesia in Katowice, 3 Maja 13/15, 41-800 Zabrze, Poland
- Marek Kwasecki
- Radpoint Sp. z o.o., Ceglana 35, 40-514 Katowice, Poland
11
Dicoogle Open Source: The Establishment of a New Paradigm in Medical Imaging. J Med Syst 2022; 46:77. [PMID: 36201058] [PMCID: PMC9535235] [DOI: 10.1007/s10916-022-01867-3]
Abstract
The rapid and continuous growth of data volume and its heterogeneity has become one of the most noticeable trends in healthcare, namely in medical imaging. This evolution led to the deployment of specialized information systems supported by the DICOM standard that enables the interoperability of distinct components, including imaging modalities, repositories, and visualization workstations. However, the complexity of these ecosystems leads to challenging learning curves and makes it time-consuming to mock and apply new ideas. Dicoogle is an extensible medical imaging archive server that emerges as a tool to overcome those challenges. Its extensible architecture allows the fast development of new advanced features or extends existent ones. It is currently a fundamental enabling technology in collaborative and telehealthcare environments, including research projects, screening programs, and teleradiology services. The framework is supported by a Learning Pack that includes a description of the web programmatic interface, a software development kit, documentation, and implementation samples. This article gives an in-depth view of the Dicoogle ecosystem, state-of-the-art contributions, and community impact. It starts by presenting an overview of its architectural concept, highlights some of the most representative research backed up by Dicoogle, some remarks obtained from its use in teaching, and worldwide usage statistics of the software. Finally, the positioning of Dicoogle in the medical imaging software field is discussed through comparison with other well-known solutions.
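The extensible-archive idea described above (a core server that delegates indexing and querying to plugins) can be illustrated with a minimal plugin registry. This is a generic sketch in Python; the actual Dicoogle SDK is Java and its interfaces differ:

```python
class PluginRegistry:
    """Archive core: dispatches index and query calls to registered plugins."""

    def __init__(self):
        self._index_plugins = []

    def register(self, plugin):
        self._index_plugins.append(plugin)

    def index(self, document: dict):
        for p in self._index_plugins:
            p.index(document)

    def query(self, term: str):
        hits = []
        for p in self._index_plugins:
            hits.extend(p.query(term))
        return hits

class InMemoryIndex:
    """Toy index plugin: exact-match lookup on the Modality field."""

    def __init__(self):
        self._docs = []

    def index(self, document: dict):
        self._docs.append(document)

    def query(self, term: str):
        return [d for d in self._docs if d.get("Modality") == term]
```

A new storage or query backend is added by registering another object with the same `index`/`query` shape, which is the extensibility property the abstract highlights.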
12
Field M, Thwaites DI, Carolan M, Delaney GP, Lehmann J, Sykes J, Vinod S, Holloway L. Infrastructure platform for privacy-preserving distributed machine learning development of computer-assisted theragnostics in cancer. J Biomed Inform 2022; 134:104181. [PMID: 36055639] [DOI: 10.1016/j.jbi.2022.104181]
Abstract
INTRODUCTION Emerging evidence suggests that data-driven support tools have found their way into clinical decision-making in a number of areas, including cancer care. Improving them and widening their scope of availability in differing clinical scenarios, including for prognostic models derived from retrospective data, requires coordinated data sharing between clinical centres, secondary analyses of large multi-institutional clinical trial data, or distributed (federated) learning infrastructures. A systematic approach to utilizing routinely collected data across cancer care clinics remains a significant challenge due to privacy, administrative and political barriers. METHODS An information technology infrastructure and web service software was developed and implemented which uses machine learning to construct clinical decision support systems in a privacy-preserving manner across datasets geographically distributed in different hospitals. The infrastructure was deployed in a network of Australian hospitals. A harmonized set of lung cancer databases, linked to international ontologies, was built from the routine clinical and imaging data at each centre. The infrastructure was demonstrated through the development of logistic regression models to predict major cardiovascular events following radiation therapy. RESULTS The infrastructure implemented forms the basis of the Australian computer-assisted theragnostics (AusCAT) network for radiation oncology data extraction, reporting and distributed learning. Four radiation oncology departments (across seven hospitals) in New South Wales (NSW) participated in this demonstration study. Infrastructure was deployed at each centre and used to develop a model predicting cardiovascular admission within a year of receiving curative radiotherapy for non-small cell lung cancer. A total of 10,417 lung cancer patients were identified, with 802 eligible for the model.
Twenty features were chosen for analysis from the clinical record and linked registries. After selection, 8 features were included and a logistic regression model achieved an area under the receiver operating characteristic (AUROC) curve of 0.70 and C-index of 0.65 on out-of-sample data. CONCLUSION The infrastructure developed was demonstrated to be usable in practice between clinical centres to harmonize routinely collected oncology data and develop models with federated learning. It provides a promising approach to enable further research studies in radiation oncology using real world clinical data.
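The privacy-preserving idea underlying this abstract is that each centre computes model updates on its own data and shares only those updates, never patient records. A toy sketch of federated gradient descent for logistic regression follows; the data, learning rate, and round count are invented, and a real system such as AusCAT adds secure aggregation and governance layers on top:

```python
import math

def local_gradient(w, X, y):
    """Gradient of the logistic loss on one centre's private data."""
    n, d = len(X), len(w)
    g = [0.0] * d
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j in range(d):
            g[j] += (p - yi) * xi[j] / n
    return g

def federated_fit(centres, d, rounds=200, lr=1.0):
    """Coordinator: average per-centre gradients, update shared weights.

    `centres` is a list of (X, y) pairs; only gradients cross the wire.
    """
    w = [0.0] * d
    for _ in range(rounds):
        grads = [local_gradient(w, X, y) for X, y in centres]
        avg = [sum(g[j] for g in grads) / len(grads) for j in range(d)]
        w = [wj - lr * gj for wj, gj in zip(w, avg)]
    return w
```

The coordinator never sees `X` or `y`, only the averaged gradient vectors, which is the core of the federated setting described above.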
Affiliation(s)
- Matthew Field
- South Western Sydney Clinical Campus, School of Clinical Medicine, University of New South Wales, NSW, Australia; South Western Sydney Cancer Services, NSW Health, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- David I Thwaites
- Institute of Medical Physics, School of Physics, University of Sydney, NSW, Australia
- Martin Carolan
- Illawarra Cancer Care Centre, Wollongong, NSW, Australia
- Geoff P Delaney
- South Western Sydney Clinical Campus, School of Clinical Medicine, University of New South Wales, NSW, Australia; South Western Sydney Cancer Services, NSW Health, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Joerg Lehmann
- Institute of Medical Physics, School of Physics, University of Sydney, NSW, Australia; Department of Radiation Oncology, Calvary Mater Newcastle, NSW, Australia
- Jonathan Sykes
- Institute of Medical Physics, School of Physics, University of Sydney, NSW, Australia; Blacktown Haematology and Oncology Cancer Care Centre, Blacktown Hospital, Blacktown, NSW, Australia; Crown Princess Mary Cancer Centre, Westmead Hospital, Westmead, NSW, Australia
- Shalini Vinod
- South Western Sydney Clinical Campus, School of Clinical Medicine, University of New South Wales, NSW, Australia; South Western Sydney Cancer Services, NSW Health, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Lois Holloway
- South Western Sydney Clinical Campus, School of Clinical Medicine, University of New South Wales, NSW, Australia; South Western Sydney Cancer Services, NSW Health, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia; Institute of Medical Physics, School of Physics, University of Sydney, NSW, Australia
13
Porzio M, Anam C. Real-time fully automated dosimetric computation for CT images in the clinical workflow: A feasibility study. Front Oncol 2022; 12:798460. [PMID: 36033538] [PMCID: PMC9403986] [DOI: 10.3389/fonc.2022.798460]
Abstract
Background Currently, the volume computed tomography dose index (CTDIvol), the quantity most used to express the radiation output of a computed tomography (CT) scan, is not related to the real size and attenuation properties of each patient. Size-specific dose estimates (SSDE), based on the water-equivalent diameter (DW), overcome those issues, but the methods proposed in the literature do not allow real-time computation of DW and SSDE. Purpose This study aims to develop software to compute DW and SSDE in a real-time clinical workflow. Method In total, 430 CT studies and scans of a water-filled funnel phantom were used to assess accuracy and to evaluate the times required to compute DW and SSDE. A two one-sided tests (TOST) equivalence test, Bland–Altman analysis, and bootstrap-based confidence interval estimation were used to evaluate the differences between the actual diameter and DW computed automatically, and between DW computed automatically and manually. Results The mean difference between the DW computed automatically and the actual water diameter for each slice is −0.027% with a TOST confidence interval of [−0.087%, 0.033%]. The Bland–Altman bias is −0.009% [−0.016%, −0.001%], with a lower limit of agreement (LoA) of −0.0010 [−0.094%, −0.068%] and an upper LoA of 0.064% [0.051%, 0.077%]. The mean difference between DW computed automatically and manually is −0.014% with a TOST confidence interval of [−0.056%, 0.028%] on the phantom and 0.41% with a TOST confidence interval of [0.358%, 0.462%] on real patients. The mean time to process a single image is 13.99 ms [13.69 ms, 14.30 ms], and the mean time to process an entire study is 11.5 s [10.62 s, 12.63 s]. Conclusion The system shows that it is possible to obtain highly accurate DW and SSDE in near real time without affecting the clinical workflow of CT examinations.
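The quantities in this abstract have compact closed forms: the water-equivalent area sums (HU/1000 + 1) over the patient region, DW is the diameter of the water cylinder with that area, and SSDE rescales CTDIvol by a DW-dependent conversion factor. The sketch below uses the AAPM Report 204 exponential fit for the 32 cm reference phantom; it illustrates the definitions only and is not the authors' software:

```python
import math

def water_equivalent_diameter(hu_values, pixel_area_cm2):
    """D_w in cm from the HU values inside the patient contour.

    A_w = sum(HU/1000 + 1) * pixel_area; D_w = 2 * sqrt(A_w / pi).
    """
    a_w = sum(hu / 1000.0 + 1.0 for hu in hu_values) * pixel_area_cm2
    return 2.0 * math.sqrt(a_w / math.pi)

def ssde(ctdi_vol_mgy, d_w_cm, a=3.704369, b=0.03671937):
    """Size-specific dose estimate in mGy (AAPM 204, 32 cm phantom fit)."""
    return ctdi_vol_mgy * a * math.exp(-b * d_w_cm)
```

A sanity check on the definition: a uniform water region (HU = 0) gives A_w equal to its geometric area, so DW reduces to the actual diameter, which is the equivalence the phantom experiment above verifies.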
Affiliation(s)
- Massimiliano Porzio
- Department of Fisica Sanitaria, Azienda Sanitaria Locale Cuneo1 (ASL CN1), Cuneo, Italy
- Choirul Anam
- Department of Physics, Faculty of Sciences and Mathematics, Diponegoro University, Semarang, Indonesia
14
Kirkove D, Barthelemy N, Coucke P, Mievis C, Ben Mustapha S, Jodogne S, Dardenne N, Donneau AF, Pétré B. [Feasibility study: Medical imaging as a tool for therapeutic education in radiotherapy]. Cancer Radiother 2022; 26:1034-1044. [PMID: 35843782] [DOI: 10.1016/j.canrad.2022.04.004]
Abstract
PURPOSE To assess the feasibility of a randomized controlled trial (RCT) exploring the use of medical imaging as a therapeutic patient education (TPE) intervention in external radiation therapy. MATERIALS AND METHODS A single-center experimental feasibility trial of RCT type, carried out between November 2019 and March 2020, following adult patients treated with thoracic radiotherapy. In addition to the information usually given, the experimental group received an intervention consisting of the visualization of their own medical images using the open-source software "Stone of Orthanc". RESULTS Forty-nine patients were recruited, with a refusal rate of 8.16% (4/49). Twenty patients were withdrawn from the study for health reasons (COVID) and 10 for medical reasons. All 15 remaining participants completed the process. Although not significant, the experimental group showed a median gain in perceived knowledge compared to the control group (+1.9 (1.6 to 2.2) vs +1.4 (1.4 to 1.8)), as well as a decrease in scores related to anxiety (−3.0 (−4.5 to −2.0) vs −1.0 (−5.0 to 0.0)) and emotional distress (−5.0 (−7.5 to −3.5) vs −2.0 (−5.0 to −1.0)). A significant reduction (p = 0.043) was observed for the depression score (−2.0 (−3.0 to −1.5) vs 0.0 (0.0 to 0.0)). CONCLUSION This study demonstrates the feasibility of the project, with promising preliminary results. Adaptations needed to conduct a larger-scale RCT are highlighted.
Affiliation(s)
- D Kirkove
- Département des Sciences de la Santé Publique, Université de Liège, B23/Avenue Hippocrate, n°13, 4000 Liège, Belgique
- N Barthelemy
- Service Equipe mobile de soins palliatifs, Oncologie Radiothérapie, Centre Hospitalier Universitaire de Liège, Domaine Universitaire du Sart Tilman/B35, 4000 Liège, Belgique
- P Coucke
- Département de Physique Médicale, Service médical de radiothérapie, Centre Hospitalier Universitaire de Liège, Domaine Universitaire du Sart Tilman/B35, 4000 Liège, Belgique
- C Mievis
- Service Equipe mobile de soins palliatifs, Oncologie Radiothérapie, Centre Hospitalier Universitaire de Liège, Domaine Universitaire du Sart Tilman/B35, 4000 Liège, Belgique
- S Ben Mustapha
- Service Equipe mobile de soins palliatifs, Oncologie Radiothérapie, Centre Hospitalier Universitaire de Liège, Domaine Universitaire du Sart Tilman/B35, 4000 Liège, Belgique
- S Jodogne
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Louvain School of Engineering, 1348 Louvain-la-Neuve, Belgique
- N Dardenne
- Département des Sciences de la Santé Publique, Université de Liège, B23/Avenue Hippocrate, n°13, 4000 Liège, Belgique; Centre Hospitalo-Universitaire Biostatistique et méthodes de recherche (B-STAT), Faculté de Médecine, Université de Liège, B23/Avenue Hippocrate, n°13, 4000 Liège, Belgique
- A-F Donneau
- Département des Sciences de la Santé Publique, Université de Liège, B23/Avenue Hippocrate, n°13, 4000 Liège, Belgique; Centre Hospitalo-Universitaire Biostatistique et méthodes de recherche (B-STAT), Faculté de Médecine, Université de Liège, B23/Avenue Hippocrate, n°13, 4000 Liège, Belgique
- B Pétré
- Département des Sciences de la Santé Publique, Université de Liège, B23/Avenue Hippocrate, n°13, 4000 Liège, Belgique
15
Garau N, Orro A, Summers P, De Maria L, Bertolotti R, Bassis D, Minotti M, De Fiori E, Baroni G, Paganelli C, Rampinelli C. Integrating Biological and Radiological Data in a Structured Repository: a Data Model Applied to the COSMOS Case Study. J Digit Imaging 2022; 35:970-982. [PMID: 35296941] [PMCID: PMC9485502] [DOI: 10.1007/s10278-022-00615-w]
Abstract
Integrating the information coming from biological samples with digital data, such as medical images, has gained prominence with the advent of precision medicine. Research in this field faces an ever-increasing amount of data to manage and, as a consequence, the need to structure these data in a functional and standardized fashion to promote and facilitate cooperation among institutions. Inspired by the Minimum Information About BIobank data Sharing (MIABIS), we propose an extended data model which aims to standardize data collections where both biological and digital samples are involved. In the proposed model, strong emphasis is given to the cause-effect relationships among factors as these are frequently encountered in clinical workflows. To test the data model in a realistic context, we consider the Continuous Observation of SMOking Subjects (COSMOS) dataset as case study, consisting of 10 consecutive years of lung cancer screening and follow-up on more than 5000 subjects. The structure of the COSMOS database, implemented to facilitate the process of data retrieval, is therefore presented along with a description of data that we hope to share in a public repository for lung cancer screening research.
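The core idea above (biological and digital samples from one subject, linked so cause-effect relationships between them can be traced) can be sketched with a couple of dataclasses. Names and fields are invented for illustration; this is not the MIABIS-derived schema itself:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Sample:
    sample_id: str
    kind: str                          # e.g. "biopsy" or "CT series"
    derived_from: Optional[str] = None  # causal link to a parent sample

@dataclass
class Subject:
    subject_id: str
    samples: List[Sample] = field(default_factory=list)

    def add(self, sample: Sample) -> None:
        self.samples.append(sample)

    def lineage(self, sample_id: str) -> List[str]:
        """Follow derived_from links back to the root sample."""
        by_id = {s.sample_id: s for s in self.samples}
        chain = []
        cur: Optional[Sample] = by_id[sample_id]
        while cur is not None:
            chain.append(cur.sample_id)
            cur = by_id.get(cur.derived_from) if cur.derived_from else None
        return chain
```

Making the causal link an explicit field is what lets a repository answer provenance queries ("which scan did this segmentation come from?") that a flat file listing cannot.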
Affiliation(s)
- Noemi Garau
- Dipartimento Di Elettronica, Informazione E Bioingegneria, Politecnico Di Milano, Milano, Italy; Division of Radiology, IEO, European Institute of Oncology IRCCS, Milan, Italy
- Alessandro Orro
- Institute for Biomedical Technologies, National Research Council (ITB-CNR), Segrate, Italy
- Paul Summers
- Division of Radiology, IEO, European Institute of Oncology IRCCS, Milan, Italy
- Lorenza De Maria
- Division of Radiology, IEO, European Institute of Oncology IRCCS, Milan, Italy
- Raffaella Bertolotti
- Division of Data Management, IEO European Institute of Oncology IRCCS, Milan, Italy
- Danny Bassis
- School of Medicine, University of Milan, Milan, Italy
- Marta Minotti
- Division of Radiology, IEO, European Institute of Oncology IRCCS, Milan, Italy
- Elvio De Fiori
- Division of Radiology, IEO, European Institute of Oncology IRCCS, Milan, Italy
- Guido Baroni
- Dipartimento Di Elettronica, Informazione E Bioingegneria, Politecnico Di Milano, Milano, Italy; Bioengineering Unit, CNAO Foundation, Pavia, Italy
- Chiara Paganelli
- Dipartimento Di Elettronica, Informazione E Bioingegneria, Politecnico Di Milano, Milano, Italy
16
Calisto FM, Santiago C, Nunes N, Nascimento JC. BreastScreening-AI: Evaluating medical intelligent agents for human-AI interactions. Artif Intell Med 2022; 127:102285. [DOI: 10.1016/j.artmed.2022.102285]
17
Doran SJ, Al Sa’d M, Petts JA, Darcy J, Alpert K, Cho W, Sanchez LE, Alle S, El Harouni A, Genereaux B, Ziegler E, Harris GJ, Aboagye EO, Sala E, Koh DM, Marcus D. Integrating the OHIF Viewer into XNAT: Achievements, Challenges and Prospects for Quantitative Imaging Studies. Tomography 2022; 8:497-512. [PMID: 35202205] [PMCID: PMC8875191] [DOI: 10.3390/tomography8010040]
Abstract
Purpose: XNAT is an informatics software platform to support imaging research, particularly in the context of large, multicentre studies of the type that are essential to validate quantitative imaging biomarkers. XNAT provides import, archiving, processing and secure distribution facilities for image and related study data. Until recently, however, modern data visualisation and annotation tools were lacking on the XNAT platform. We describe the background to, and implementation of, an integration of the Open Health Imaging Foundation (OHIF) Viewer into the XNAT environment. We explain the challenges overcome and discuss future prospects for quantitative imaging studies. Materials and methods: The OHIF Viewer adopts an approach based on the DICOM web protocol. To allow operation in an XNAT environment, a data-routing methodology was developed to overcome the mismatch between the DICOM and XNAT information models and a custom viewer panel created to allow navigation within the viewer between different XNAT projects, subjects and imaging sessions. Modifications to the development environment were made to allow developers to test new code more easily against a live XNAT instance. Major new developments focused on the creation and storage of regions-of-interest (ROIs) and included: ROI creation and editing tools for both contour- and mask-based regions; a "smart CT" paintbrush tool; the integration of NVIDIA's Artificial Intelligence Assisted Annotation (AIAA); the ability to view surface meshes, fractional segmentation maps and image overlays; and a rapid image reader tool aimed at radiologists. We have incorporated the OHIF microscopy extension and, in parallel, introduced support for microscopy session types within XNAT for the first time. Results: Integration of the OHIF Viewer within XNAT has been highly successful and numerous additional and enhanced tools have been created in a programme started in 2017 that is still ongoing. 
The software has been downloaded more than 3700 times during the course of the development work reported here, demonstrating the impact of the work. Conclusions: The OHIF open-source, zero-footprint web viewer has been incorporated into the XNAT platform and is now used at many institutions worldwide. Further innovations are envisaged in the near future.
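One common way to store the mask-based ROIs mentioned in this abstract compactly is run-length encoding of the binary mask. The sketch below is a generic illustration of that idea, not the OHIF/XNAT storage format:

```python
def rle_encode(mask):
    """Binary mask (flat list of 0/1) -> list of (value, run_length) pairs."""
    runs = []
    for v in mask:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((v, 1))               # start a new run
    return runs

def rle_decode(runs):
    """Inverse of rle_encode: expand runs back to the flat mask."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

Because segmentation masks are dominated by long uniform runs of background, the encoded form is typically far smaller than the raw mask, which matters when ROIs are shipped between a browser viewer and an archive.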
Affiliation(s)
- Simon J. Doran
- Division of Radiotherapy and Imaging, Institute of Cancer Research, 15 Cotswold Rd, London SM2 5NG, UK
- CRUK National Cancer Imaging Translational Accelerator, UK
- Mohammad Al Sa’d
- CRUK National Cancer Imaging Translational Accelerator, UK
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College, London SW7 2AZ, UK
- James A. Petts
- Ovela Solutions Ltd., 20-22 Wenlock Road, London N1 7GU, UK
- James Darcy
- Division of Radiotherapy and Imaging, Institute of Cancer Research, 15 Cotswold Rd, London SM2 5NG, UK
- CRUK National Cancer Imaging Translational Accelerator, UK
- Kate Alpert
- Flywheel LLC, 1015 Glenwood Ave, Suite 300, Minneapolis, MN 55405, USA
- Woonchan Cho
- Neuroimaging Informatics Analysis Center, Washington University School of Medicine, 660 S Euclid Ave, St. Louis, MO 63110, USA
- Lorena Escudero Sanchez
- CRUK National Cancer Imaging Translational Accelerator, UK
- Department of Radiology, University of Cambridge, Hills Rd, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, University of Cambridge Li Ka Shing Centre, Robinson Way, Cambridge CB2 0RE, UK
- Sachidanand Alle
- NVIDIA, 2788 San Tomas Expressway, Santa Clara, CA 95051, USA
- Ahmed El Harouni
- NVIDIA, 2788 San Tomas Expressway, Santa Clara, CA 95051, USA
- Brad Genereaux
- NVIDIA, 2788 San Tomas Expressway, Santa Clara, CA 95051, USA
- Erik Ziegler
- Open Health Imaging Foundation, Massachusetts General Hospital, 55 Fruit St., Boston, MA 02114, USA
- Radical Imaging LLC, 188 Annie Moore Rd, Bolton, MA 01740-1140, USA
- Gordon J. Harris
- Open Health Imaging Foundation, Massachusetts General Hospital, 55 Fruit St., Boston, MA 02114, USA
- Department of Radiology, Massachusetts General Hospital, 55 Fruit St., Boston, MA 02114, USA
- Harvard Medical School, 25 Shattuck St., Boston, MA 02115, USA
- Eric O. Aboagye
- CRUK National Cancer Imaging Translational Accelerator, UK
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College, London SW7 2AZ, UK
- Evis Sala
- CRUK National Cancer Imaging Translational Accelerator, UK
- Department of Radiology, University of Cambridge, Hills Rd, Cambridge CB2 0QQ, UK
- Cancer Research UK Cambridge Centre, University of Cambridge Li Ka Shing Centre, Robinson Way, Cambridge CB2 0RE, UK
- Dow-Mu Koh
- CRUK National Cancer Imaging Translational Accelerator, UK
- Department of Radiology, Royal Marsden Hospital, Downs Rd, Sutton SM2 5PT, UK
- Dan Marcus
- Flywheel LLC, 1015 Glenwood Ave, Suite 300, Minneapolis, MN 55405, USA
- Neuroimaging Informatics Analysis Center, Washington University School of Medicine, 660 S Euclid Ave, St. Louis, MO 63110, USA
18
Yi T, Pan I, Collins S, Chen F, Cueto R, Hsieh B, Hsieh C, Smith JL, Yang L, Liao WH, Merck LH, Bai H, Merck D. DICOM Image ANalysis and Archive (DIANA): an Open-Source System for Clinical AI Applications. J Digit Imaging 2021; 34:1405-1413. [PMID: 34727303] [PMCID: PMC8669082] [DOI: 10.1007/s10278-021-00488-5]
Abstract
In the era of data-driven medicine, rapid access and accurate interpretation of medical images are becoming increasingly important. The DICOM Image ANalysis and Archive (DIANA) system is an open-source, lightweight, and scalable Python interface that enables users to interact with hospital Picture Archiving and Communications Systems (PACS) to access such data. In this work, DIANA functionality was detailed and evaluated in the context of retrospective PACS data retrieval and two prospective clinical artificial intelligence (AI) pipelines: bone age (BA) estimation and intra-cranial hemorrhage (ICH) detection. DIANA orchestrates activity beginning with post-acquisition study discovery and ending with online notifications of findings. For AI applications, system latency (exam completion to system report time) was quantified and compared to that of clinicians (exam completion to initial report creation time). Mean DIANA latency was 9.04 ± 3.83 and 20.17 ± 10.16 min compared to clinician latency of 51.52 ± 58.9 and 65.62 ± 110.39 min for BA and ICH, respectively, with DIANA latencies being significantly lower (p < 0.001). DIANA's capabilities were also explored and found effective in retrieving and anonymizing protected health information for "big-data" medical imaging research and analysis. Mean per-image retrieval times were 1.12 ± 0.50 and 0.08 ± 0.01 s across x-ray and computed tomography studies, respectively. The data herein demonstrate that DIANA can flexibly integrate into existing hospital infrastructure and improve the process by which researchers/clinicians access imaging repository data. This results in a simplified workflow for large data retrieval and clinical integration of AI models.
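The latency metric in this abstract (exam completion to report availability, summarized as mean plus or minus SD in minutes) is simple to reproduce. The timestamp format and field layout below are illustrative assumptions, not DIANA's internal representation:

```python
from datetime import datetime
from statistics import mean, stdev

def latency_minutes(completed: str, reported: str) -> float:
    """Minutes elapsed between exam completion and report availability."""
    fmt = "%Y-%m-%d %H:%M:%S"
    delta = datetime.strptime(reported, fmt) - datetime.strptime(completed, fmt)
    return delta.total_seconds() / 60.0

def summarize(pairs):
    """(completed, reported) timestamp pairs -> (mean, sample SD) in minutes."""
    vals = [latency_minutes(c, r) for c, r in pairs]
    return mean(vals), stdev(vals)
```

Comparing one `summarize` result for the automated pipeline against another for clinician report times is exactly the kind of comparison behind the significance test quoted above.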
Affiliation(s)
- Thomas Yi
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Ian Pan
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Scott Collins
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Fiona Chen
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Ben Hsieh
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Celina Hsieh
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Jessica L Smith
- Department of Emergency Medicine, Rhode Island Hospital, Providence, RI, USA
- Li Yang
- Department of Neurology, Second Xiangya Hospital, Changsha, China
- Wei-Hua Liao
- Department of Radiology, Xiangya Hospital, Changsha, China
- Lisa H Merck
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Emergency Medicine, Rhode Island Hospital, Providence, RI, USA
- University of Florida, Gainesville, FL, USA
- Harrison Bai
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Derek Merck
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- University of Florida, Gainesville, FL, USA
19
Spicher N, Schweins M, Thielecke L, Kürner T, Deserno TM. Feasibility Analysis of Fifth-generation (5G) Mobile Networks for Transmission of Medical Imaging Data. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:1791-1795. [PMID: 34891634] [DOI: 10.1109/embc46164.2021.9629615]
Abstract
Next to higher data rates and lower latency, the upcoming fifth-generation mobile network standard will introduce a new service ecosystem. Concepts such as multi-access edge computing or network slicing will enable tailoring service level requirements to specific use cases. In medical imaging, researchers and clinicians are currently working towards higher portability of scanners. This includes (i) small scanners that can be wheeled inside the hospital to the bedside and (ii) conventional scanners brought by truck to remote areas. Both use cases introduce the need for mobile networks that adhere to high safety standards and provide high data rates. These requirements could be met by fifth-generation mobile networks. In this work, we analyze the feasibility of transferring medical imaging data using the current state of development of fifth-generation mobile networks (3GPP Release 15). We demonstrate the potential of reaching 100 Mbit/s upload rates using already available consumer-grade hardware. Furthermore, we show an effective average data throughput of 50 Mbit/s when transferring medical images using out-of-the-box open-source software based on the Digital Imaging and Communications in Medicine (DICOM) standard. During transmissions, we sample the radio frequency bands to analyze the characteristics of the mobile radio network. Additionally, we discuss the potential of new features, such as network slicing, that will be introduced in forthcoming releases.
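A quick back-of-envelope check makes the throughput figure above concrete: transfer time is payload size in megabits divided by the link rate. The study size used in the example is an assumed value, not one from the paper:

```python
def transfer_seconds(study_mbytes: float, mbit_per_s: float) -> float:
    """Idealized upload time: size (MB) * 8 bits/byte / link rate (Mbit/s)."""
    return study_mbytes * 8.0 / mbit_per_s

# e.g. an assumed 250 MB CT study at the reported 50 Mbit/s -> 40 s
```

Real transfers add DICOM association and protocol overhead, so the reported "effective" throughput already being below the raw 100 Mbit/s uplink is consistent with this simple model.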
20
Deep learning-based transformation of H&E stained tissues into special stains. Nat Commun 2021; 12:4884. [PMID: 34385460] [PMCID: PMC8361203] [DOI: 10.1038/s41467-021-25221-2]
Abstract
Pathology is practiced by visual inspection of histochemically stained tissue slides. While the hematoxylin and eosin (H&E) stain is most commonly used, special stains can provide additional contrast to different tissue components. Here, we demonstrate the utility of supervised learning-based computational stain transformation from H&E to special stains (Masson's Trichrome, periodic acid-Schiff and Jones silver stain) using kidney needle core biopsy tissue sections. Based on the evaluation by three renal pathologists, followed by adjudication by a fourth pathologist, we show that the generation of virtual special stains from existing H&E images improves the diagnosis of several non-neoplastic kidney diseases, sampled from 58 unique subjects (P = 0.0095). A second study found that the quality of the computationally generated special stains was statistically equivalent to those which were histochemically stained. This stain-to-stain transformation framework can improve preliminary diagnoses when additional special stains are needed, also providing significant savings in time and cost.
21
Cherian Kurian N, Sethi A, Reddy Konduru A, Mahajan A, Rane SU. A 2021 update on cancer image analytics with deep learning. WIREs Data Mining Knowl Discov 2021; 11. [DOI: 10.1002/widm.1410]
Abstract
Deep learning (DL)-based interpretation of medical images has reached a critical juncture: it is expanding beyond research projects into translational ones and is ready to make its way to the clinic. Advances over the last decade in data availability, DL techniques, and computing capabilities have accelerated this journey. Through it, we now better understand the challenges to, and pitfalls of, wider adoption of DL in clinical care, which, in our view, should and will drive advances in this field in the next few years. The most important of these challenges are the lack of an appropriately digitized environment within healthcare institutions, the lack of adequate open and representative datasets on which DL algorithms can be trained and tested, and the lack of robustness of widely used DL training algorithms to certain pervasive pathological characteristics of medical images and repositories. In this review, we provide an overview of the role of imaging in oncology, the techniques that are shaping the way DL algorithms are made ready for clinical use, and the problems DL techniques still need to address before they can find a home in the clinic. Finally, we summarize how DL can potentially drive the adoption of digital pathology, vendor-neutral archives, and picture archiving and communication systems. We caution that researchers may find the coverage of their own fields to be high-level; this is by design, as the format is meant to introduce those looking in from outside deep learning or medical research to the main concerns and limitations of these two fields, rather than to tell them something new about their own. This article is categorized under:
Technologies > Artificial Intelligence
Algorithmic Development > Biological Data Mining
Affiliation(s)
- Nikhil Cherian Kurian: Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Amit Sethi: Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Anil Reddy Konduru: Department of Pathology, Tata Memorial Center-ACTREC, HBNI, Navi Mumbai, India
- Abhishek Mahajan: Department of Radiology, Tata Memorial Hospital, HBNI, Mumbai, India
- Swapnil Ulhas Rane: Department of Pathology, Tata Memorial Center-ACTREC, HBNI, Navi Mumbai, India
22
Witowski J, Choi J, Jeon S, Kim D, Chung J, Conklin J, Longo MGF, Succi MD, Do S. MarkIt: A Collaborative Artificial Intelligence Annotation Platform Leveraging Blockchain for Medical Imaging Research. Blockchain Healthc Today 2021; 4:176. [PMID: 36777485] [PMCID: PMC9907418] [DOI: 10.30953/bhty.v4.176]
Abstract
Current research on medical image processing relies heavily on the amount and quality of input data; supervised machine learning methods in particular require well-annotated datasets. A lack of annotation tools limits the potential for high-volume processing and scaled systems with a proper reward mechanism. We developed MarkIt, a web-based tool for collaborative annotation of medical imaging data with artificial intelligence and blockchain technologies. Our platform handles both Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images, and allows users to annotate them efficiently for classification and object detection tasks. MarkIt can accelerate the annotation process and track user activity to calculate a fair reward. In a proof-of-concept experiment, three fellowship-trained radiologists each annotated 1,000 chest X-ray studies for multi-label classification. We calculated the inter-rater agreement and estimated the value of the dataset to distribute the reward among annotators using a cryptocurrency. We hypothesize that MarkIt makes the typically arduous annotation task more efficient. In addition, MarkIt can serve as a platform to evaluate the value of data and to trade annotation results in a more scalable manner in the future. The platform is publicly available for testing at https://markit.mgh.harvard.edu.
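Inter-rater agreement between annotators can be quantified in several ways; a minimal sketch of Cohen's kappa for two raters is shown below (a common choice for pairwise agreement; the abstract does not specify which agreement statistic MarkIt uses):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items.
    Assumes len(rater_a) == len(rater_b) and chance agreement < 1."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # chance agreement from each rater's marginal label frequencies
    expected = sum(ca[k] * cb[k] for k in ca.keys() | cb.keys()) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when raters agree less than chance would predict.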
Affiliation(s)
- Jan Witowski: Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Jongmun Choi: Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Soomin Jeon: Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Doyun Kim: Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Joowon Chung: Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- John Conklin: Division of Emergency Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Maria Gabriela Figueiro Longo: Division of Emergency Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Marc D. Succi: Medically Engineered Solutions in Healthcare (MESH) Incubator, Massachusetts General Hospital, Boston, MA, USA
- Synho Do: Laboratory of Medical Imaging and Computation, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
23
Gu Q, Prodduturi N, Jiang J, Flotte TJ, Hart SN. Dicom_wsi: A Python Implementation for Converting Whole-Slide Images to Digital Imaging and Communications in Medicine Compliant Files. J Pathol Inform 2021; 12:21. [PMID: 34267986] [PMCID: PMC8274303] [DOI: 10.4103/jpi.jpi_88_20]
Abstract
Background: Adoption of the Digital Imaging and Communications in Medicine (DICOM) standard for whole slide images (WSIs) has been slow, despite significant time and effort by standards curators. One reason is that few existing tools can meet the requirements of WSIs, given an evolving ecosystem of best practices for implementation. Eventually, vendors will conform to the specification to ensure enterprise interoperability, but what about archived slides? Millions of slides have been scanned in various proprietary formats, many with examples of rare histologies. Our hypothesis is that if users and developers had access to easy-to-use tools for migrating proprietary formats to the open DICOM standard, more tools would be developed as DICOM-first implementations. Methods: We present dicom_wsi, a Python-based toolkit for converting any slide readable by the OpenSlide library into DICOM-conformant and validated files. Additional postprocessing steps, such as background removal, digital transformations (e.g., ink removal), and annotation storage, are also described. dicom_wsi is a free and open-source implementation that anyone can use or modify for their specific purposes. Results: We compare the output of dicom_wsi to two other existing WSI-to-DICOM converters and validate the images using DICOM-capable image viewers. Conclusion: dicom_wsi represents a first step in the long process of DICOM adoption for WSI. It is the first open-source implementation released in the developer-friendly Python programming language and can be freely downloaded.
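In DICOM, a WSI pyramid level is stored as a tiled multi-frame image, so a converter must work out how many fixed-size tile frames cover each level. A sketch of that bookkeeping (the tile size and dimensions are illustrative; tools like dicom_wsi make such parameters configurable):

```python
import math

def wsi_frame_count(level_w, level_h, tile_w=256, tile_h=256):
    """Tiles (DICOM frames) needed to cover one pyramid level:
    partial tiles at the right/bottom edges are padded, so round up."""
    cols = math.ceil(level_w / tile_w)
    rows = math.ceil(level_h / tile_h)
    return cols, rows, cols * rows

# a 100,000 x 80,000 px base level with 512 px tiles
# needs 196 x 157 = 30,772 frames
```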
Affiliation(s)
- Qiangqiang Gu: Department of Health Sciences Research, Division of Biomedical Statistics and Informatics, Mayo College of Medicine, Rochester, Minnesota, USA
- Naresh Prodduturi: Department of Health Sciences Research, Division of Biomedical Statistics and Informatics, Mayo College of Medicine, Rochester, Minnesota, USA
- Jun Jiang: Department of Health Sciences Research, Division of Biomedical Statistics and Informatics, Mayo College of Medicine, Rochester, Minnesota, USA
- Thomas J Flotte: Department of Laboratory Medicine and Pathology, Mayo College of Medicine, Rochester, Minnesota, USA
- Steven N Hart: Department of Health Sciences Research, Division of Biomedical Statistics and Informatics, Mayo College of Medicine, Rochester, Minnesota, USA
|
24
|
Mantri M, Taran S, Sunder G. DICOM Integration Libraries for Medical Image Interoperability: A Technical Review. IEEE Rev Biomed Eng 2020; 15:247-259. [PMID: 33275586 DOI: 10.1109/rbme.2020.3042642] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Medical images support diagnostic care and research in medicine. Acquiring and making medical images available in digital form can facilitate quick diagnosis, ease of access, continuity of care, analysis, and modern medical research. Digital Imaging and Communications in Medicine (DICOM) is a universal standard for the standardized representation and exchange of medical images and related information from various radiological and waveform sources. DICOM software development kits, tools, and libraries make it easier to implement the DICOM standard in healthcare applications, and several such API libraries are available from different providers. In this paper, we survey available DICOM API libraries and compare a selected set on four major criteria: DICOM features, technical aspects, robustness of the library, and the level of user support available. The aim is to provide a complete picture of the available options and to help identify the best-fit open-source DICOM integration API library for developing standardized, interoperable healthcare applications.
25
Elbers DC, Fillmore NR, Sung FC, Ganas SS, Prokhorenkov A, Meyer C, Hall RB, Ajjarapu SJ, Chen DC, Meng F, Grossman RL, Brophy MT, Do NV. The Veterans Affairs Precision Oncology Data Repository, a Clinical, Genomic, and Imaging Research Database. Patterns 2020; 1:100083. [PMID: 33205130] [PMCID: PMC7660389] [DOI: 10.1016/j.patter.2020.100083]
Abstract
The Veterans Affairs Precision Oncology Data Repository (VA-PODR) is a large, nationwide repository of de-identified data on patients diagnosed with cancer at the Department of Veterans Affairs (VA). Data include longitudinal clinical data from the VA's nationwide electronic health record system and the VA Central Cancer Registry, targeted tumor sequencing data, and medical imaging data including computed tomography (CT) scans and pathology slides. A subset of the repository is available at the Genomic Data Commons (GDC) and The Cancer Imaging Archive (TCIA), and the full repository is available through the Veterans Precision Oncology Data Commons (VPODC). By releasing this de-identified dataset, we aim to advance Veterans' health care through enabling translational research on the Veteran population by a wide variety of researchers.
Affiliation(s)
- Danne C Elbers: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA; University of Vermont, Complex Systems Center, Burlington, VT 05405, USA
- Nathanael R Fillmore: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA; Harvard Medical School, Boston, MA 02115, USA; Dana-Farber Cancer Institute, Boston, MA 02215, USA
- Feng-Chi Sung: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA
- Spyridon S Ganas: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA
- Andrew Prokhorenkov: University of Chicago, Center for Data Intensive Science, Chicago, IL 60615, USA
- Christopher Meyer: University of Chicago, Center for Data Intensive Science, Chicago, IL 60615, USA
- Robert B Hall: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA
- Samuel J Ajjarapu: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA; Dana-Farber Cancer Institute, Boston, MA 02215, USA
- Daniel C Chen: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA; Boston University School of Medicine, Boston, MA 02118, USA
- Frank Meng: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA; Boston University School of Medicine, Boston, MA 02118, USA
- Robert L Grossman: University of Chicago, Center for Data Intensive Science, Chicago, IL 60615, USA
- Mary T Brophy: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA; Boston University School of Medicine, Boston, MA 02118, USA
- Nhan V Do: VA Cooperative Studies Program, VA Boston Healthcare System (151MAV), 150 S. Huntington Ave, Jamaica Plain, MA 02130, USA; Boston University School of Medicine, Boston, MA 02118, USA
26
The American Association for the Surgery of Trauma renal injury grading scale: Implications of the 2018 revisions for injury reclassification and predicting bleeding interventions. J Trauma Acute Care Surg 2020; 88:357-365. [PMID: 31876692] [DOI: 10.1097/ta.0000000000002572]
Abstract
BACKGROUND In 2018, the American Association for the Surgery of Trauma (AAST) published revisions to the renal injury grading system to reflect the increased reliance on computed tomography scans and non-operative management of high-grade renal trauma (HGRT). We aimed to evaluate how these revisions change the grading of HGRT and whether the revised system outperforms the original 1989 grading in predicting bleeding control interventions. METHODS Data on HGRT were collected from 14 Level I trauma centers from 2014 to 2017. Patients with initial computed tomography scans were included. Two radiologists reviewed the scans and regraded the injuries according to the 1989 and 2018 AAST grading systems. Descriptive statistics were used to assess grade reclassifications. Mixed-effect multivariable logistic regression was used to measure the predictive ability of each grading system, and the areas under the curves were compared. RESULTS Of the 322 injuries included, 27.0% were upgraded, 3.4% were downgraded, and 69.5% remained unchanged. Of the injuries graded as III or lower using the 1989 AAST, 33.5% were upgraded to grade IV using the 2018 AAST. Of the grade V injuries, 58.8% were downgraded using the 2018 AAST. There was no statistically significant difference in the overall areas under the curves between the 2018 and 1989 AAST grading systems for predicting bleeding interventions (0.72 vs. 0.68, p = 0.34). CONCLUSION About one third of the injuries previously classified as grade III will be upgraded to grade IV under the 2018 AAST, which adds to the heterogeneity of grade IV injuries. Although the 2018 AAST grading provides more anatomic detail on injury patterns and includes important radiologic findings, it did not outperform the 1989 AAST grading in predicting bleeding interventions. LEVEL OF EVIDENCE Prognostic and Epidemiological Study, level III.
27
Sohn JH, Chillakuru YR, Lee S, Lee AY, Kelil T, Hess CP, Seo Y, Vu T, Joe BN. An Open-Source, Vender Agnostic Hardware and Software Pipeline for Integration of Artificial Intelligence in Radiology Workflow. J Digit Imaging 2020; 33:1041-1046. [PMID: 32468486] [PMCID: PMC7522128] [DOI: 10.1007/s10278-020-00348-8]
Abstract
Although machine learning (ML) has made significant improvements in radiology, few algorithms have been integrated into clinical radiology workflow. Complex radiology IT environments and Picture Archiving and Communication Systems (PACS) pose unique challenges to creating a practical ML schema, yet clinical integration and testing are critical to ensuring the safety and accuracy of ML algorithms. This study aims to propose, develop, and demonstrate a simple, efficient, and understandable hardware and software system for integrating ML models into the standard radiology workflow and PACS that can serve as a framework for testing ML algorithms. A Digital Imaging and Communications in Medicine/Graphics Processing Unit (DICOM/GPU) server and software pipeline was established on a metropolitan county hospital intranet to demonstrate clinical integration of ML algorithms in radiology. A clinical ML integration schema, agnostic to the hospital IT system and to specific ML models and frameworks, was implemented, tested with a breast density classification algorithm, and prospectively evaluated for time delays using 100 digital 2D mammograms. The open-source schema was successfully implemented and allows simple uploading of custom ML models. With the proposed setup, the ML pipeline took an average of 26.52 s per study to process a batch of 100 studies; the most significant delays were model load and study stability times. The code is available at http://bit.ly/2Z121hX. We demonstrated the feasibility of deploying and utilizing ML models in radiology without disrupting the existing radiology workflow.
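The schema described above follows a simple pattern: studies arrive in a queue, a model runs on each, and results are written back for the workflow to pick up. A toy end-to-end sketch of that pattern, with a stand-in classifier (all names, thresholds, and the JSON layout are invented for illustration; the actual pipeline exchanges DICOM objects with a GPU server):

```python
import json
from pathlib import Path

def classify_density(pixel_mean):
    # stand-in for a real breast-density model: threshold on a
    # single summary feature (hypothetical, for illustration only)
    return "dense" if pixel_mean > 0.5 else "fatty"

def process_batch(in_dir, out_path):
    """Read queued study descriptors, run the model on each,
    and write one results file for downstream consumption."""
    results = {}
    for f in sorted(Path(in_dir).glob("*.json")):
        study = json.loads(f.read_text())
        results[study["id"]] = classify_density(study["pixel_mean"])
    Path(out_path).write_text(json.dumps(results, indent=2))
    return results
```

Keeping the model behind a single function boundary is what makes such a schema model- and framework-agnostic: swapping in a different algorithm touches only that one call.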
Affiliation(s)
- Jae Ho Sohn: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA
- Yeshwant Reddy Chillakuru: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA; School of Medicine and Health Sciences, The George Washington University, Washington, DC 20037, USA
- Stanley Lee: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA
- Amie Y Lee: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA
- Tatiana Kelil: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA
- Christopher Paul Hess: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA
- Youngho Seo: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA
- Thienkhai Vu: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA
- Bonnie N Joe: Radiology and Biomedical Imaging, University of California San Francisco (UCSF), 505 Parnassus Ave, San Francisco, CA 94143, USA
28
Panayides AS, Amini A, Filipovic ND, Sharma A, Tsaftaris SA, Young A, Foran D, Do N, Golemati S, Kurc T, Huang K, Nikita KS, Veasey BP, Zervakis M, Saltz JH, Pattichis CS. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J Biomed Health Inform 2020; 24:1837-1857. [PMID: 32609615] [PMCID: PMC8580417] [DOI: 10.1109/jbhi.2020.2991043]
Abstract
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity of efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.
29
Demirer M, Candemir S, Bigelow MT, Yu SM, Gupta V, Prevedello LM, White RD, Yu JS, Grimmer R, Wels M, Wimmer A, Halabi AH, Ihsani A, O'Donnell TP, Erdal BS. A User Interface for Optimizing Radiologist Engagement in Image Data Curation for Artificial Intelligence. Radiol Artif Intell 2019; 1:e180095. [PMID: 33937804] [DOI: 10.1148/ryai.2019180095]
Abstract
Purpose To delineate image data curation needs and describe a locally designed graphical user interface (GUI) to aid radiologists in image annotation for artificial intelligence (AI) applications in medical imaging. Materials and Methods GUI components support image analysis toolboxes, picture archiving and communication system integration, third-party applications, scripting languages, and integration of deep learning libraries. For clinical AI applications, GUI components included two-dimensional segmentation and classification; three-dimensional segmentation and quantification; and three-dimensional segmentation, quantification, and classification. To assess radiologist engagement and performance efficiency associated with GUI-related capabilities, image annotation rate (studies per day) and speed (minutes per case) were evaluated in two clinical scenarios of varying complexity: hip fracture detection and coronary atherosclerotic plaque demarcation and stenosis grading. Results For hip fracture, 1050 radiographs were annotated over 7 days (150 studies per day; median speed, 10 seconds per study [interquartile range, 3-21 seconds per study]). A total of 294 coronary CT angiographic studies with 1843 arteries and branches were annotated for atherosclerotic plaque over 23 days (15.2 studies [80.1 vessels] per day; median speed, 6.08 minutes per study [interquartile range, 2.8-10.6 minutes per study] and 73 seconds per vessel [interquartile range, 20.9-155 seconds per vessel]). Conclusion GUI-component compatibility with common image analysis tools facilitates radiologist engagement in image data curation, including image annotation, supporting AI application development and evolution for medical imaging. When complemented by other GUI elements, this results in a continuous, integrated workflow that supports the formation of an agile deep neural network life cycle. Supplemental material is available for this article. © RSNA, 2019.
Affiliation(s)
- Mutlu Demirer, Sema Candemir, Matthew T Bigelow, Sarah M Yu, Vikash Gupta, Luciano M Prevedello, Richard D White, Joseph S Yu, Barbaros S Erdal: Department of Radiology, Laboratory for Augmented Intelligence in Imaging, Division of Medical Imaging Informatics, Ohio State University College of Medicine, OSU Wexner Medical Center, 395 W 12th Ave, Suite 452, Columbus, OH 43210, USA
- Rainer Grimmer, Michael Wels, Andreas Wimmer: Siemens Healthineers, Erlangen, Germany
- Abdul H Halabi, Alvin Ihsani: NVIDIA, Santa Clara, CA, USA
- Thomas P O'Donnell: Siemens Healthineers, Malvern, PA, USA
|
30
Hulsen T. The ten commandments of translational research informatics. DATA SCIENCE 2019; 2:341-352. [DOI: 10.3233/ds-190020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Translational research applies findings from basic science to enhance human health and well-being. In translational research projects, academia and industry work together to improve healthcare, often through public-private partnerships. This “translation” is often not easy, because the so-called “valley of death” must be crossed: many interesting findings from fundamental research do not result in new treatments, diagnostics or prevention. To cross the valley of death, fundamental researchers need to collaborate with clinical researchers and with industry so that promising results can be implemented in a product. The success of translational research projects often depends not only on the fundamental and applied science, but also on the informatics needed to connect everything: the translational research informatics. This informatics, which includes data management, data stewardship and data governance, enables researchers to store and analyze their “big data” in a meaningful way and to apply it in the clinic. The author has worked on the information technology infrastructure for several translational research projects in oncology for the past nine years, and presents his lessons learned in this paper in the form of ten commandments. These commandments are useful not only for data managers, but for everyone involved in a translational research project. Some of the commandments deal with topics that are currently in the spotlight, such as machine readability, the FAIR Guiding Principles and the GDPR regulations. Others are mentioned less in the literature, but are just as crucial for the success of a translational research project.
Affiliation(s)
- Tim Hulsen
- Department of Professional Health Solutions & Services, Philips Research, Eindhoven, The Netherlands. E-mail:
31
Erdal BS, Prevedello LM, Qian S, Demirer M, Little K, Ryu J, O'Donnell T, White RD. Radiology and Enterprise Medical Imaging Extensions (REMIX). J Digit Imaging 2019; 31:91-106. [PMID: 28840365 PMCID: PMC5788816 DOI: 10.1007/s10278-017-0010-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
Radiology and Enterprise Medical Imaging Extensions (REMIX) is a platform originally designed to support both the medical imaging-driven clinical and clinical research operational needs of the Department of Radiology of The Ohio State University Wexner Medical Center. REMIX accommodates the storage and handling of “big imaging data,” as needed for large multi-disciplinary cancer-focused programs. The evolving REMIX platform contains an array of integrated tools/software packages for the following: (1) server and storage management; (2) image reconstruction; (3) digital pathology; (4) de-identification; (5) business intelligence; (6) texture analysis; and (7) artificial intelligence. These capabilities, along with documentation and guidance explaining how to interact with commercial systems (e.g., PACS, EHR, commercial databases) that currently exist in clinical environments, are to be made freely available.
Collapse
Affiliation(s)
- Barbaros S Erdal, Luciano M Prevedello, Songyue Qian, Mutlu Demirer, Kevin Little, John Ryu, Richard D White
- Radiology Department, The Ohio State University Wexner Medical Center, 395 W 12th Ave, Columbus, OH, 43210, USA
- Thomas O'Donnell
- Siemens Medical Solutions USA, Inc, 40 Liberty Boulevard, Malvern, PA, 19355, USA
32
33
Herrmann MD, Clunie DA, Fedorov A, Doyle SW, Pieper S, Klepeis V, Le LP, Mutter GL, Milstone DS, Schultz TJ, Kikinis R, Kotecha GK, Hwang DH, Andriole KP, Iafrate AJ, Brink JA, Boland GW, Dreyer KJ, Michalski M, Golden JA, Louis DN, Lennerz JK. Implementing the DICOM Standard for Digital Pathology. J Pathol Inform 2018; 9:37. [PMID: 30533276 PMCID: PMC6236926 DOI: 10.4103/jpi.jpi_42_18] [Citation(s) in RCA: 71] [Impact Index Per Article: 10.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Accepted: 08/06/2018] [Indexed: 11/29/2022] Open
Abstract
Background: Digital Imaging and Communications in Medicine (DICOM®) is the standard for the representation, storage, and communication of medical images and related information. A DICOM file format and communication protocol for pathology have been defined; however, adoption by vendors and in the field is pending. Here, we implemented the essential aspects of the standard and assessed its capabilities and limitations in a multisite, multivendor healthcare network. Methods: We selected relevant DICOM attributes, developed a program that extracts pixel data and pixel-related metadata, integrated patient and specimen-related metadata, populated and encoded DICOM attributes, and stored DICOM files. We generated the files using image data from four vendor-specific image file formats and clinical metadata from two departments with different laboratory information systems. We validated the generated DICOM files using recognized DICOM validation tools and measured encoding, storage, and access efficiency for three image compression methods. Finally, we evaluated storing, querying, and retrieving data over the web using existing DICOM archive software. Results: Whole slide image data can be encoded together with relevant patient and specimen-related metadata as DICOM objects. These objects can be accessed efficiently from files or through RESTful web services using existing software implementations. Performance measurements show that the choice of image compression method has a major impact on data access efficiency. For lossy compression, JPEG achieves the fastest compression/decompression rates. For lossless compression, JPEG-LS significantly outperforms JPEG 2000 with respect to data encoding and decoding speed. Conclusion: Implementation of DICOM allows efficient access to image data as well as associated metadata. By leveraging a wealth of existing infrastructure solutions, the use of DICOM facilitates enterprise integration and data exchange for digital pathology.
Affiliation(s)
- Andriy Fedorov
- Department of Radiology, Surgical Planning Laboratory, Brigham and Women's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Sean W Doyle
- MGH and BWH Center for Clinical Data Science, Boston, MA, USA
- Veronica Klepeis
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Long P Le
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- George L Mutter
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- David S Milstone
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Thomas J Schultz
- Enterprise Medical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Ron Kikinis
- Department of Radiology, Surgical Planning Laboratory, Brigham and Women's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Gopal K Kotecha
- MGH and BWH Center for Clinical Data Science, Boston, MA, USA
- David H Hwang
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Katherine P Andriole
- MGH and BWH Center for Clinical Data Science, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
- A John Iafrate
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- James A Brink
- Harvard Medical School, Boston, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Giles W Boland
- Harvard Medical School, Boston, MA, USA; Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
- Keith J Dreyer
- MGH and BWH Center for Clinical Data Science, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Mark Michalski
- MGH and BWH Center for Clinical Data Science, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Jeffrey A Golden
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- David N Louis
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Jochen K Lennerz
- Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Boston, MA, USA