1. Yap MH, Cassidy B, Byra M, Liao TY, Yi H, Galdran A, Chen YH, Brüngel R, Koitka S, Friedrich CM, Lo YW, Yang CH, Li K, Lao Q, Ballester MAG, Carneiro G, Ju YJ, Huang JD, Pappachan JM, Reeves ND, Chandrabalan V, Dancey D, Kendrick C. Diabetic foot ulcers segmentation challenge report: Benchmark and analysis. Med Image Anal 2024; 94:103153. PMID: 38569380. DOI: 10.1016/j.media.2024.103153.
Abstract
Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists to quantitatively measure the size of wound regions and thereby assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineations, which are time-consuming and laborious to produce. Methods based on deep learning have recently shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge, held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding ground truth segmentation masks. Of 72 approved requests from 47 countries, 26 teams used this data to develop fully automated systems to predict segmentation masks on a test set of 2000 images whose ground truth masks were kept private. Predictions from participating teams were scored and ranked by the average Dice similarity coefficient between the ground truth and predicted masks. The winning team achieved a Dice score of 0.7287 for diabetic foot ulcer segmentation. The challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
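Submissions in the challenge above are ranked by the average Dice similarity coefficient between ground truth and predicted masks; a minimal NumPy sketch of that metric (the function name and toy masks are illustrative, not the challenge's official evaluation code):

```python
import numpy as np

def dice_coefficient(gt: np.ndarray, pred: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    return (2.0 * intersection + eps) / (gt.sum() + pred.sum() + eps)

# Toy 4x4 masks: a small ulcer region vs. an imperfect prediction.
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(round(dice_coefficient(gt, pred), 4))  # 2*3/(4+3) ≈ 0.8571
```

A challenge-style score would average this value over all test images.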
Affiliation(s)
- Moi Hoon Yap
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom; Lancashire Teaching Hospitals NHS Trust, Preston, PR2 9HT, United Kingdom
- Bill Cassidy
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
- Michal Byra
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland; RIKEN Center for Brain Science, Wako, Japan
- Ting-Yu Liao
- Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Huahui Yi
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Adrian Galdran
- BCN Medtech, Universitat Pompeu Fabra, Barcelona, Spain; AIML, University of Adelaide, Australia
- Yung-Han Chen
- Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Raphael Brüngel
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany; Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Zweigertstr. 37, 45130 Essen, Germany; Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstr. 2, 45131 Essen, Germany
- Sven Koitka
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstr. 2, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr. 55, 45147 Essen, Germany
- Christoph M Friedrich
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany; Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Zweigertstr. 37, 45130 Essen, Germany
- Yu-Wen Lo
- Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Ching-Hui Yang
- Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Qicheng Lao
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Yi-Jen Ju
- Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Juinn-Dar Huang
- Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Joseph M Pappachan
- Lancashire Teaching Hospitals NHS Trust, Preston, PR2 9HT, United Kingdom; Department of Life Sciences, Manchester Metropolitan University, Manchester, M1 5GD, United Kingdom
- Neil D Reeves
- Department of Life Sciences, Manchester Metropolitan University, Manchester, M1 5GD, United Kingdom
- Darren Dancey
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
- Connah Kendrick
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
2. Westhölter D, Haubold J, Welsner M, Salhöfer L, Wienker J, Sutharsan S, Straßburg S, Taube C, Umutlu L, Schaarschmidt BM, Koitka S, Zensen S, Forsting M, Nensa F, Hosch R, Opitz M. Elexacaftor/tezacaftor/ivacaftor influences body composition in adults with cystic fibrosis: a fully automated CT-based analysis. Sci Rep 2024; 14:9465. PMID: 38658613. PMCID: PMC11043331. DOI: 10.1038/s41598-024-59622-2.
Abstract
A poor nutritional status is associated with worse pulmonary function and survival in people with cystic fibrosis (pwCF). CF transmembrane conductance regulator modulators can improve pulmonary function and body weight, but more data are needed to evaluate their effects on body composition. In this retrospective study, a pre-trained deep-learning network was used to perform a fully automated body composition analysis on chest CTs from 66 adult pwCF before and after receiving elexacaftor/tezacaftor/ivacaftor (ETI) therapy. Muscle and adipose tissues were quantified and divided by bone volume to obtain body size-adjusted ratios. After ETI therapy, marked increases were observed in all adipose tissue ratios among pwCF, including the total adipose tissue ratio (+46.21%, p < 0.001). In contrast, only small but statistically significant increases in the muscle ratio were measured in the overall study population (+1.63%, p = 0.008). Study participants initially categorized as underweight experienced more pronounced effects on the total adipose tissue ratio (p = 0.002), while gains in muscle ratio were equally distributed across BMI categories (p = 0.832). Our findings suggest that ETI therapy primarily affects adipose tissue, not muscle tissue, in adults with CF, with the strongest effects among pwCF who were initially underweight. These findings may have implications for the future nutritional management of pwCF.
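The body size-adjusted ratios described above are plain volume quotients; a pure-Python sketch (the function names and the pre/post volumes are invented for illustration and are not values from the study):

```python
def tissue_to_bone_ratio(tissue_ml: float, bone_ml: float) -> float:
    """Body size-adjusted ratio: tissue volume divided by bone volume."""
    return tissue_ml / bone_ml

def percent_change(before: float, after: float) -> float:
    """Relative change between two ratios, in percent."""
    return 100.0 * (after - before) / before

# Illustrative volumes (mL) from a hypothetical pre/post-ETI scan pair.
tat_pre = tissue_to_bone_ratio(tissue_ml=9000.0, bone_ml=1200.0)
tat_post = tissue_to_bone_ratio(tissue_ml=13000.0, bone_ml=1200.0)
print(f"{percent_change(tat_pre, tat_post):.2f}%")  # 44.44%
```

Dividing by bone volume normalizes for body size, so ratios remain comparable across patients of different stature.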
Affiliation(s)
- Dirk Westhölter
- Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Johannes Haubold
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Matthias Welsner
- Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Adult Cystic Fibrosis Center, Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Luca Salhöfer
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Johannes Wienker
- Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Sivagurunathan Sutharsan
- Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Adult Cystic Fibrosis Center, Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Svenja Straßburg
- Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Adult Cystic Fibrosis Center, Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Christian Taube
- Department of Pulmonary Medicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany
- Lale Umutlu
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Benedikt M Schaarschmidt
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Sven Koitka
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Sebastian Zensen
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Michael Forsting
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Felix Nensa
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- René Hosch
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Marcel Opitz
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
3. Baldini G, Hosch R, Schmidt CS, Borys K, Kroll L, Koitka S, Haubold P, Pelka O, Nensa F, Haubold J. Addressing the Contrast Media Recognition Challenge: A Fully Automated Machine Learning Approach for Predicting Contrast Phases in CT Imaging. Invest Radiol 2024:00004424-990000000-00203. PMID: 38436405. DOI: 10.1097/rli.0000000000001071.
Abstract
OBJECTIVES Accurately acquiring and assigning different contrast-enhanced phases in computed tomography (CT) is relevant for clinicians and for artificial intelligence orchestration to select the most appropriate series for analysis. However, this information is commonly extracted from the CT metadata, which is often incorrect. This study aimed to develop an automatic pipeline for classifying intravenous (IV) contrast phases and for identifying contrast media in the gastrointestinal tract (GIT). MATERIALS AND METHODS This retrospective study used 1200 CT scans collected at the investigating institution between January 4, 2016 and September 12, 2022, and 240 CT scans from multiple centers from The Cancer Imaging Archive for external validation. The open-source segmentation algorithm TotalSegmentator was used to identify regions of interest (pulmonary artery, aorta, stomach, portal/splenic vein, liver, portal vein/hepatic veins, inferior vena cava, duodenum, small bowel, colon, left/right kidney, urinary bladder), and machine learning classifiers were trained with 5-fold cross-validation to classify IV contrast phases (noncontrast, pulmonary arterial, arterial, venous, and urographic) and GIT contrast enhancement. The performance of the ensembles was evaluated using the receiver operating characteristic area under the curve (AUC) and 95% confidence intervals (CIs). RESULTS For the IV phase classification task, the following AUC scores were obtained on the internal test set: 99.59% [95% CI, 99.58-99.63] for the noncontrast phase, 99.50% [95% CI, 99.49-99.52] for the pulmonary-arterial phase, 99.13% [95% CI, 99.10-99.15] for the arterial phase, 99.80% [95% CI, 99.79-99.81] for the venous phase, and 99.70% [95% CI, 99.68-99.70] for the urographic phase. On the external dataset, mean AUCs of 97.33% [95% CI, 97.27-97.35] and 97.38% [95% CI, 97.34-97.41] across all contrast phases were achieved for the first and second annotators, respectively. Contrast media in the GIT were identified with an AUC of 99.90% [95% CI, 99.89-99.90] on the internal dataset, whereas on the external dataset, AUCs of 99.73% [95% CI, 99.71-99.73] and 99.31% [95% CI, 99.27-99.33] were achieved with the first and second annotator, respectively. CONCLUSIONS The integration of open-source segmentation networks and classifiers effectively classified contrast phases and identified GIT contrast enhancement using anatomical landmarks.
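The phase classifiers above are evaluated with ROC AUC; a compact NumPy sketch of that metric via the Mann-Whitney U statistic (the labels and scores below are invented, not study data):

```python
import numpy as np

def roc_auc(y_true: np.ndarray, scores: np.ndarray) -> float:
    """ROC AUC as the probability that a random positive outranks
    a random negative; ties count as half."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy phase-classifier scores: 1 = venous phase, 0 = any other phase.
y = np.array([1, 1, 1, 0, 0, 0])
s = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.1])
print(roc_auc(y, s))  # 8/9 ≈ 0.889
```

Confidence intervals like those reported above are typically obtained by bootstrapping this statistic over resampled test cases.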
Affiliation(s)
- Giulia Baldini
- From the Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany (G.B., R.H., K.B., L.K., S.K., F.N., J.H.); Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany (G.B., R.H., C.S.S., K.B., L.K., S.K., O.P., F.N., J.H.); Institute for Transfusion Medicine, University Hospital Essen, Essen, Germany (C.S.S.); Department of Diagnostic and Interventional Radiology, Kliniken Essen-Mitte, Essen, Germany (P.H.); and Data Integration Center, Central IT Department, University Hospital Essen, Essen, Germany (O.P., F.N.)
4. Keyl J, Bucher A, Jungmann F, Hosch R, Ziller A, Armbruster R, Malkomes P, Reissig TM, Koitka S, Tzianopoulos I, Keyl P, Kostbade K, Albers D, Markus P, Treckmann J, Nassenstein K, Haubold J, Makowski M, Forsting M, Baba HA, Kasper S, Siveke JT, Nensa F, Schuler M, Kaissis G, Kleesiek J, Braren R. Prognostic value of deep learning-derived body composition in advanced pancreatic cancer-a retrospective multicenter study. ESMO Open 2024; 9:102219. PMID: 38194881. PMCID: PMC10837775. DOI: 10.1016/j.esmoop.2023.102219.
Abstract
BACKGROUND Despite the prognostic relevance of cachexia in pancreatic cancer, individual body composition has not been routinely integrated into treatment planning. In this multicenter study, we investigated the prognostic value of sarcopenia and myosteatosis automatically extracted from routine computed tomography (CT) scans of patients with advanced pancreatic ductal adenocarcinoma (PDAC). PATIENTS AND METHODS We retrospectively analyzed clinical imaging data of 601 patients from three German cancer centers. We applied a deep learning approach to assess sarcopenia by the abdominal muscle-to-bone ratio (MBR) and myosteatosis by the ratio of abdominal inter- and intramuscular fat to muscle volume. In the pooled cohort, univariable and multivariable analyses were carried out to analyze the association between body composition markers and overall survival (OS). In a subgroup, we analyzed the relationship between body composition markers and laboratory values during the first year of therapy using linear regression analysis adjusted for age, sex, and American Joint Committee on Cancer (AJCC) stage. RESULTS Deep learning-derived MBR [hazard ratio (HR) 0.60, 95% confidence interval (CI) 0.47-0.77, P < 0.005] and myosteatosis (HR 3.73, 95% CI 1.66-8.39, P < 0.005) were significantly associated with OS in univariable analysis. In multivariable analysis, MBR (P = 0.019) and myosteatosis (P = 0.02) were associated with OS independent of age, sex, and AJCC stage. In a subgroup, MBR and myosteatosis were associated with albumin and C-reactive protein levels after initiation of therapy; MBR was additionally associated with hemoglobin and total protein levels. CONCLUSIONS Our work demonstrates that deep learning can be applied across cancer centers to automatically assess sarcopenia and myosteatosis from routine CT scans. We highlight the prognostic role of our proposed markers and show a strong relationship with protein levels, inflammation, and anemia. In clinical practice, automated body composition analysis holds the potential to further personalize cancer treatment.
Affiliation(s)
- J Keyl
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany; Institute of Pathology, University Hospital Essen (AöR), Essen, Germany
- A Bucher
- Institute for Diagnostic and Interventional Radiology, Goethe University Frankfurt, Frankfurt am Main, Germany; German Cancer Consortium (DKTK), Frankfurt partner site, Heidelberg, Germany
- F Jungmann
- Institute of Diagnostic and Interventional Radiology, Technical University of Munich, School of Medicine, Munich, Germany; Artificial Intelligence in Healthcare and Medicine, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- R Hosch
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany
- A Ziller
- Institute of Diagnostic and Interventional Radiology, Technical University of Munich, School of Medicine, Munich, Germany; Artificial Intelligence in Healthcare and Medicine, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- R Armbruster
- Institute for Diagnostic and Interventional Radiology, Goethe University Frankfurt, Frankfurt am Main, Germany
- P Malkomes
- Department of General, Visceral and Transplant Surgery, Goethe University Hospital Frankfurt, Frankfurt am Main, Germany
- T M Reissig
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany; West German Cancer Center, University Hospital Essen (AöR), Essen, Germany; German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany; Bridge Institute of Experimental Tumor Therapy, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany; Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK Partner Site Essen) and German Cancer Research Center, DKFZ, Heidelberg, Germany
- S Koitka
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany; Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- I Tzianopoulos
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany; West German Cancer Center, University Hospital Essen (AöR), Essen, Germany; Bridge Institute of Experimental Tumor Therapy, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany; Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK Partner Site Essen) and German Cancer Research Center, DKFZ, Heidelberg, Germany
- P Keyl
- Institute of Pathology, Ludwig-Maximilians-University Munich, Munich, Germany
- K Kostbade
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany; West German Cancer Center, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- D Albers
- Department of Gastroenterology, Elisabeth Hospital Essen, Essen, Germany
- P Markus
- Department of General Surgery and Traumatology, Elisabeth Hospital Essen, Essen, Germany
- J Treckmann
- West German Cancer Center, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany; Department of General, Visceral and Transplant Surgery, University Hospital Essen, Essen, Germany
- K Nassenstein
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- J Haubold
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- M Makowski
- Institute of Diagnostic and Interventional Radiology, Technical University of Munich, School of Medicine, Munich, Germany
- M Forsting
- German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany; Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- H A Baba
- Institute of Pathology, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- S Kasper
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany; West German Cancer Center, University Hospital Essen (AöR), Essen, Germany; German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- J T Siveke
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany; West German Cancer Center, University Hospital Essen (AöR), Essen, Germany; German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany; Bridge Institute of Experimental Tumor Therapy, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany; Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK Partner Site Essen) and German Cancer Research Center, DKFZ, Heidelberg, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- F Nensa
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany; German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany; Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- M Schuler
- Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany; West German Cancer Center, University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany; National Center for Tumor Diseases (NCT), NCT West, Essen, Germany
- G Kaissis
- Institute of Diagnostic and Interventional Radiology, Technical University of Munich, School of Medicine, Munich, Germany; Artificial Intelligence in Healthcare and Medicine, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- J Kleesiek
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Essen, Germany; West German Cancer Center, University Hospital Essen (AöR), Essen, Germany; German Cancer Consortium (DKTK), Partner site University Hospital Essen (AöR), Essen, Germany; Medical Faculty, University of Duisburg-Essen, Essen, Germany
- R Braren
- Institute of Diagnostic and Interventional Radiology, Technical University of Munich, School of Medicine, Munich, Germany; German Cancer Consortium (DKTK), Munich partner site, Heidelberg, Germany
5. Engelke M, Schmidt CS, Baldini G, Parmar V, Hosch R, Borys K, Koitka S, Turki AT, Haubold J, Horn PA, Nensa F. Optimizing platelet transfusion through a personalized deep learning risk assessment system for demand management. Blood 2023; 142:2315-2326. PMID: 37890142. DOI: 10.1182/blood.2023021172.
Abstract
Platelet demand management (PDM) is a resource-consuming task for physicians and transfusion managers of large hospitals. Inpatient numbers and institutional standards play significant roles in PDM, but reliance on these factors alone commonly results in platelet shortages. Using data from multiple sources, we developed, validated, tested, and implemented a patient-specific approach to support PDM that uses a deep learning-based risk score to forecast platelet transfusions for each hospitalized patient in the next 24 hours. The models were developed using retrospective electronic health record data of 34 809 patients treated between 2017 and 2022. Static and time-dependent features included demographics, diagnoses, procedures, blood counts, past transfusions, hematotoxic medications, and hospitalization duration. Using an expanding window approach, we created a training and live-prediction pipeline with a 30-day input and a 24-hour forecast. Hyperparameter tuning determined the best validation area under the precision-recall curve (AUC-PR) for long short-term memory deep learning models, which were then tested on independent datasets from the same hospital. The model tailored for hematology and oncology patients exhibited the best performance (AUC-PR, 0.84; area under the receiver operating characteristic curve [ROC-AUC], 0.98), followed by a multispecialty model covering all other patients (AUC-PR, 0.73). The model specific to cardiothoracic surgery had the lowest performance (AUC-PR, 0.42), likely because of unexpected intraoperative bleeding. To our knowledge, this is the first deep learning-based platelet transfusion predictor enabling individualized 24-hour risk assessments at high AUC-PR. Implemented as a decision-support system, deep-learning forecasts might improve patient care by detecting platelet demand earlier and preventing critical transfusion shortages.
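Model selection above is driven by the area under the precision-recall curve; a NumPy sketch of that metric computed as average precision (the transfusion labels and risk scores below are invented for illustration):

```python
import numpy as np

def average_precision(y_true: np.ndarray, scores: np.ndarray) -> float:
    """Area under the precision-recall curve, computed as average
    precision: the mean of precision at each true positive's rank."""
    order = np.argsort(-scores)          # sort by descending risk score
    y = y_true[order]
    hits = np.cumsum(y)                  # true positives seen so far
    ranks = np.arange(1, len(y) + 1)
    precision_at_hit = hits[y == 1] / ranks[y == 1]
    return float(precision_at_hit.mean())

# Toy 24-hour transfusion labels (1 = transfused) and model risk scores.
y = np.array([1, 0, 1, 0, 0, 1])
s = np.array([0.95, 0.80, 0.70, 0.60, 0.40, 0.30])
print(round(average_precision(y, s), 4))  # (1 + 2/3 + 1/2) / 3 ≈ 0.7222
```

Unlike ROC-AUC, AUC-PR ignores true negatives, which makes it the more informative metric when, as with platelet transfusions, positives are rare.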
Affiliation(s)
- Merlin Engelke
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany
- Cynthia Sabrina Schmidt
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute for Transfusion Medicine, University Medicine Essen, Essen, Germany
- Giulia Baldini
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany
- Vicky Parmar
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany
- René Hosch
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany
- Katarzyna Borys
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany
- Sven Koitka
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany
- Amin T Turki
- Computational Hematology Laboratory, Department of Hematology and Stem Cell Transplantation, West-German Cancer Center, University Medicine Essen, Essen, Germany
- Department of Hematology and Oncology, Marienhospital University Hospital, Ruhr University Bochum, Bochum, Germany
- Johannes Haubold
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany
- Peter A Horn
- Institute for Transfusion Medicine, University Medicine Essen, Essen, Germany
- Felix Nensa
- Institute for Artificial Intelligence in Medicine, University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany
6. Haubold J, Baldini G, Parmar V, Schaarschmidt BM, Koitka S, Kroll L, van Landeghem N, Umutlu L, Forsting M, Nensa F, Hosch R. BOA: A CT-Based Body and Organ Analysis for Radiologists at the Point of Care. Invest Radiol 2023:00004424-990000000-00176. PMID: 37994150. DOI: 10.1097/rli.0000000000001040.
Abstract
PURPOSE The study aimed to develop the open-source body and organ analysis (BOA), a comprehensive computed tomography (CT) image segmentation algorithm with a focus on workflow integration. METHODS The BOA combines 2 segmentation algorithms: body composition analysis (BCA) and TotalSegmentator. The BCA was trained with the nnU-Net framework using a dataset including 300 CT examinations. The CTs were manually annotated with 11 semantic body regions: subcutaneous tissue, muscle, bone, abdominal cavity, thoracic cavity, glands, mediastinum, pericardium, breast implant, brain, and spinal cord. The models were trained using 5-fold cross-validation, and at inference time, an ensemble was used. Afterward, the segmentation efficiency was evaluated on a separate test set comprising 60 CT scans. In a postprocessing step, a tissue segmentation (muscle, subcutaneous adipose tissue, visceral adipose tissue, intermuscular adipose tissue, epicardial adipose tissue, and paracardial adipose tissue) is created by subclassifying the body regions. The BOA combines this algorithm and the open-source segmentation software TotalSegmentator to have an all-in-one comprehensive selection of segmentations. In addition, it integrates into clinical workflows as a DICOM node-triggered service using the open-source Orthanc research PACS (Picture Archiving and Communication System) server to make the automated segmentation algorithms available to clinicians. The BCA model's performance was evaluated using the Sørensen-Dice score. Finally, the segmentations from the 3 different tools (BCA, TotalSegmentator, and BOA) were compared by assessing the overall percentage of the segmented human body on a separate cohort of 150 whole-body CT scans. 
RESULTS The results showed that the BCA outperformed the previously published model, achieving a higher Sørensen-Dice score for the previously existing classes, including subcutaneous tissue (0.971 vs 0.962), muscle (0.959 vs 0.933), abdominal cavity (0.983 vs 0.973), thoracic cavity (0.982 vs 0.965), and bone (0.961 vs 0.942), and an overall good segmentation efficiency for the newly introduced classes: brain (0.985), breast implant (0.943), glands (0.766), mediastinum (0.880), pericardium (0.964), and spinal cord (0.896). Overall, it achieved an average Sørensen-Dice score of 0.935, which is comparable to that of the TotalSegmentator (0.94). The TotalSegmentator had a mean voxel body coverage of 31% ± 6%, whereas the BCA had a coverage of 75% ± 6% and the BOA achieved 93% ± 2%. CONCLUSIONS The open-source BOA merges different segmentation algorithms with a focus on workflow integration through DICOM node integration, offering a comprehensive body segmentation in CT images with high coverage of the body volume.
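The Sørensen-Dice scores reported throughout these studies compare a predicted binary mask against a reference mask; as an illustration only (not the BOA implementation), a minimal Python sketch:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks are defined here as a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D masks: 2 of 3 foreground pixels overlap in each mask.
pred = np.array([[1, 1, 0], [1, 0, 0]])
truth = np.array([[1, 1, 0], [0, 1, 0]])
print(round(dice_score(pred, truth), 3))  # 2*2/(3+3) = 0.667
```

The same formula extends unchanged to 3D CT volumes, since the masks are flattened by the sums.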
Affiliation(s)
- Johannes Haubold
- From the Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany (J.H., G.B., V.P., B.M.S., S.K., L.K., N.v.L., L.U., M.F., F.N., R.H.); and Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany (J.H., G.B., V.P., S.K., L.U., M.F., F.N., R.H.)
7
Tamulevicius M, Oezcelik A, Koitka S, Theysohn JM, Hoyer DP, Farzaliyev F, Haubold J, Nensa F, Treckmann J, Malamutmann E. Preoperative Computed Tomography Volumetry and Graft Weight Estimation of Left Lateral Segment in Pediatric Living Donor Liver Transplant. EXP CLIN TRANSPLANT 2023; 21:831-836. [PMID: 37965959 DOI: 10.6002/ect.2023.0176]
Abstract
OBJECTIVES Liver volumetry based on a computed tomography scan is widely used to estimate liver volume before any liver resection, especially before living donor liver donation. The 1-to-1 conversion rule for liver volume to liver weight has been widely adopted; however, debate continues regarding this approach. Therefore, we analyzed the relationship between the left lateral lobe liver graft volume and actual graft weight. MATERIALS AND METHODS This study retrospectively included consecutive donors who underwent left lateral hepatectomy for pediatric living donor liver transplant from December 2008 to September 2020. All donors were healthy adults who met the evaluation criteria for pediatric living donor liver transplant and underwent a preoperative contrast-enhanced computed tomography scan. Manual segmentation of the left lateral liver lobe for graft volume estimation and intraoperative measurement of the actual graft weight were performed. The relationship between estimated graft volume and actual graft weight was analyzed. RESULTS Ninety-four living liver donors were included in the study. The mean actual graft weight was 283.4 ± 68.5 g, and the mean graft volume was 244.9 ± 63.86 mL. A strong correlation was shown between graft volume and actual graft weight (r = 0.804; P < .001). Bland-Altman analysis revealed an interobserver agreement of 38.0 ± 97.25, and the intraclass correlation coefficient showed almost perfect agreement (r = 0.840; P < .001). The conversion formula for calculating graft weight based on computed tomography volumetry was determined by regression analysis: 0.88 × graft volume + 41.63. CONCLUSIONS The estimation of left liver graft weight using only the 1-to-1 rule is subject to measurable variability in calculated graft weights and tends to underestimate the true graft weight.
Instead, a different, improved conversion formula should be used to calculate graft weight to more accurately determine the donor graft weight-to-recipient body weight ratio and reduce the risk of underestimation of liver graft weight in the donor selection process before pediatric living donor liver transplant.
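The reported regression formula (graft weight = 0.88 × graft volume + 41.63) can be applied directly; a minimal sketch contrasting it with the 1-to-1 rule, using the cohort's mean graft volume as an example input (illustrative only):

```python
def graft_weight_from_volume(volume_ml: float) -> float:
    """Estimate left lateral graft weight (g) from CT graft volume (mL)
    using the regression formula reported in the study."""
    return 0.88 * volume_ml + 41.63

# Mean graft volume of the study cohort: 244.9 mL.
vol = 244.9
print(round(graft_weight_from_volume(vol), 1))  # regression estimate: 257.1 g
print(round(vol * 1.0, 1))                      # 1-to-1 rule estimate: 244.9 g
```

For this input, the 1-to-1 rule yields a lower estimate than the regression formula, consistent with the authors' finding that the 1-to-1 rule tends to underestimate true graft weight.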
Affiliation(s)
- Martynas Tamulevicius
- From the University Hospital Essen, Department of General, Visceral and Transplantation Surgery, Essen, Germany
8
Engelke M, Brieske CM, Parmar V, Flaschel N, Kureishi A, Hosch R, Koitka S, Schmidt CS, Horn PA, Nensa F. Predicting Individual Patient Platelet Demand in a Large Tertiary Care Hospital Using Machine Learning. Transfus Med Hemother 2023; 50:277-285. [PMID: 37767277 PMCID: PMC10521242 DOI: 10.1159/000528428]
Abstract
Introduction An increasing shortage of donor blood is expected, considering the demographic change in Germany. Due to the short shelf life and varying daily fluctuations in consumption, the storage of platelet concentrates (PCs) becomes challenging. This emphasizes the need for reliable prediction of needed PCs for the blood bank inventories. Therefore, the objective of this study was to evaluate multimodal data from multiple source systems within a hospital to predict the number of platelet transfusions in 3 days on a per-patient level. Methods Data were collected from 25,190 (42% female and 58% male) patients between 2017 and 2021. For each patient, the number of received PCs, platelet count blood tests, drugs causing thrombocytopenia, acute platelet diseases, procedures, age, gender, and the period of a patient's hospital stay were collected. Two models were trained on samples using a sliding window of 7 days as input and a day 3 target. The model predicts whether a patient will be transfused 3 days in the future. The model was trained with an extensive hyperparameter search using patient-level repeated 5-fold cross-validation to optimize the average macro F2-score. Results The trained models were tested on 5,022 unique patients. The best-performing model has a specificity of 0.99, a sensitivity of 0.37, an area under the precision-recall curve score of 0.45, an MCC score of 0.43, and an F1-score of 0.43. However, the model does not generalize well to cases in which a platelet transfusion is actually needed. Conclusion A patient-level AI-based platelet forecast could improve logistics management and reduce blood product waste. In this study, we built the first model to predict patient-individual platelet demand. To the best of our knowledge, we are the first to introduce this approach. Our model predicts the need for platelet units 3 days in the future. While sensitivity underperforms, specificity performs reliably.
The model may be of clinical use as a pretest for potential patients needing a platelet transfusion within the next 3 days. As sensitivity needs to be improved, further studies should introduce deep learning and wider patient characterization to this multimodal, multisource data approach. Furthermore, hospital-wide consumption of PCs could be derived from the individual predictions.
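The macro F2-score the authors optimized is the F-beta metric with beta = 2, which weights recall (sensitivity) more heavily than precision, suiting a setting where a missed transfusion need is costlier than a false alarm. An illustrative sketch (the precision value used below is hypothetical, not from the study):

```python
def f_beta(precision: float, recall: float, beta: float = 2.0) -> float:
    """F-beta score; beta > 1 weights recall more heavily than precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With the reported sensitivity (recall) of 0.37 and a hypothetical precision of 0.5:
print(round(f_beta(0.5, 0.37), 3))  # F2 = (5 * 0.5 * 0.37) / (4 * 0.5 + 0.37)
```

With beta = 1 the same formula reduces to the familiar F1-score, the harmonic mean of precision and recall.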
Affiliation(s)
- Merlin Engelke
- University Medicine Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
- University Medicine Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
- Vicky Parmar
- University Medicine Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
- University Medicine Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
- Nils Flaschel
- University Medicine Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
- University Medicine Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
- Anisa Kureishi
- University Medicine Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
- Rene Hosch
- University Medicine Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
- University Medicine Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
- Sven Koitka
- University Medicine Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
- University Medicine Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
- Peter A. Horn
- University Medicine Essen, Institute for Transfusion Medicine, Essen, Germany
- Felix Nensa
- University Medicine Essen, Institute for Artificial Intelligence in Medicine, Essen, Germany
- University Medicine Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
9
Hosch R, Baldini G, Parmar V, Borys K, Koitka S, Engelke M, Arzideh K, Ulrich M, Nensa F. FHIR-PYrate: a data science friendly Python package to query FHIR servers. BMC Health Serv Res 2023; 23:734. [PMID: 37415138 DOI: 10.1186/s12913-023-09498-1]
Abstract
BACKGROUND We present FHIR-PYrate, a Python package to handle the full clinical data collection and extraction process. The software is designed to be plugged into a modern hospital domain, where electronic patient records are used to handle the patient's entire history. Most research institutes follow the same procedures to build study cohorts, but mainly in a non-standardized and repetitive way. As a result, researchers spend time writing boilerplate code, time that could be spent on more challenging tasks. METHODS The package can improve and simplify existing processes in the clinical research environment. It collects all needed functionalities into a straightforward interface that can be used to query a FHIR server, download imaging studies, and filter clinical documents. The full capacity of the search mechanism of the FHIR REST API is available to the user, leading to a uniform querying process for all resources and thus simplifying the customization of each use case. Additionally, valuable features like parallelization and filtering are included to make it more performant. RESULTS As an exemplary practical application, the package can be used to analyze the prognostic significance of routine CT imaging and clinical data in breast cancer with tumor metastases in the lungs. In this example, the initial patient cohort is first collected using ICD-10 codes. For these patients, the survival information is also gathered. Some additional clinical data are retrieved, and CT scans of the thorax are downloaded. Finally, the survival analysis can be computed using a deep learning model with the CT scans, the TNM staging, and positivity of relevant markers as input. This process may vary depending on the FHIR server and available clinical data, and can be customized to cover even more use cases. CONCLUSIONS FHIR-PYrate opens up the possibility to quickly and easily retrieve FHIR data, download image data, and search medical documents for keywords within a Python package.
With the demonstrated functionality, FHIR-PYrate offers an easy way to assemble research cohorts automatically.
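FHIR-PYrate's own interface is not reproduced here; as a sketch of the underlying FHIR REST search that such a package wraps, a plain-Python example that builds a search URL and extracts patient references from a Bundle (the server URL and Bundle below are hypothetical, shaped after the FHIR R4 specification):

```python
from urllib.parse import urlencode

def fhir_search_url(base: str, resource: str, **params) -> str:
    """Compose a FHIR REST search URL, e.g. a Condition query by ICD-10 code."""
    return f"{base.rstrip('/')}/{resource}?{urlencode(params)}"

def patient_refs(bundle: dict) -> list:
    """Extract patient references from a searchset Bundle."""
    return [entry["resource"]["subject"]["reference"]
            for entry in bundle.get("entry", [])]

# Hypothetical server and query; a real request would GET this URL.
url = fhir_search_url("https://fhir.example.org/R4", "Condition",
                      code="http://hl7.org/fhir/sid/icd-10|C50.9", _count=100)
print(url)

# Minimal hand-built searchset Bundle in the shape the server would return:
bundle = {"resourceType": "Bundle", "type": "searchset",
          "entry": [{"resource": {"resourceType": "Condition",
                                  "subject": {"reference": "Patient/123"}}}]}
print(patient_refs(bundle))  # ['Patient/123']
```

A uniform URL-plus-Bundle pattern like this is what makes one querying interface work across all FHIR resource types.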
Affiliation(s)
- René Hosch
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Giulia Baldini
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Vicky Parmar
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Katarzyna Borys
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Sven Koitka
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Merlin Engelke
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Kamyar Arzideh
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Central IT Department, Data Integration Center, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Moritz Ulrich
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
- Central IT Department, Data Integration Center, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Felix Nensa
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, Essen, 45147, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Girardetstraße 2, Essen, 45131, Germany
10
Alatzides GL, Haubold J, Steinberg HL, Koitka S, Parmar V, Grueneisen J, Zeller AC, Schmidt H, Theysohn JM, Li Y, Nensa F, Schaarschmidt BM. Adipopenia in body composition analysis: a promising imaging biomarker and potential predictive factor for patients undergoing transjugular intrahepatic portosystemic shunt placement. Br J Radiol 2023; 96:20220863. [PMID: 37086078 DOI: 10.1259/bjr.20220863]
Abstract
OBJECTIVE Body tissue composition plays a crucial role in the multisystemic processes of advanced liver disease and has been shown to be influenced by transjugular intrahepatic portosystemic shunt (TIPS) placement. A differentiated analysis of the various tissue compartments has not been performed until now. The purpose of this study was to evaluate the value of imaging biomarkers derived from automated body composition analysis (BCA) to predict clinical and functional outcome. METHODS A retrospective analysis of 56 patients undergoing the TIPS procedure between 2013 and 2021 was performed. BCA on the basis of the pre-interventional CT examination was used to determine quantitative data as well as ratios of bone, muscle, and fat masses. Furthermore, a BCA-derived sarcopenia marker was investigated. An exploratory analysis was conducted of potential correlations between BCA imaging biomarkers and the occurrence of hepatic encephalopathy (HE) as well as 1-year survival. RESULTS No BCA imaging biomarker was associated with the occurrence of HE after TIPS placement. However, there were significant differences between alive and deceased patients regarding the BCA-derived sarcopenia marker (alive: 1.60, deceased: 1.83, p = 0.046), the ratio of intra- and intermuscular fat/skeletal volume (alive: 0.53, deceased: 0.31, p = 0.015), and the ratio of intra- and intermuscular fat/muscle volume (alive: 0.21, deceased: 0.14, p = 0.031). CONCLUSION A lower amount of intra- and intermuscular adipose tissue might have protective effects regarding liver-derived complications and survival. ADVANCES IN KNOWLEDGE Precise characterization of body tissue components with automated BCA might provide prognostic information in patients with advanced liver disease undergoing the TIPS procedure.
Affiliation(s)
- Georgios Luca Alatzides
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
- Johannes Haubold
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Hufelandstr, Essen, Germany
- Hannah Luisa Steinberg
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
- Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Hufelandstr, Essen, Germany
- Vicky Parmar
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Hufelandstr, Essen, Germany
- Johannes Grueneisen
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
- Amos Cornelius Zeller
- Department of Gastroenterology and Hepatology, University Hospital Essen, Hufelandstr, Essen, Germany
- Hartmut Schmidt
- Department of Gastroenterology and Hepatology, University Hospital Essen, Hufelandstr, Essen, Germany
- Jens Matthias Theysohn
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
- Yan Li
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
- Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Hufelandstr, Essen, Germany
- Benedikt Michael Schaarschmidt
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr, Essen, Germany
11
Koitka S, Gudlin P, Theysohn JM, Oezcelik A, Hoyer DP, Dayangac M, Hosch R, Haubold J, Flaschel N, Nensa F, Malamutmann E. Fully automated preoperative liver volumetry incorporating the anatomical location of the central hepatic vein. Sci Rep 2022; 12:16479. [PMID: 36183002 PMCID: PMC9526715 DOI: 10.1038/s41598-022-20778-4]
Abstract
The precise preoperative calculation of functional liver volumes is essential prior to major liver resections, as well as for the evaluation of a suitable donor for living donor liver transplantation. The aim of this study was to develop a fully automated, reproducible, and quantitative 3D volumetry of the liver from standard CT examinations of the abdomen as part of routine clinical imaging. An in-house dataset of 100 venous-phase CT examinations for training, and 30 ex-house venous-phase CT examinations with a slice thickness of 5 mm for testing and validation, were fully annotated with right and left liver lobes. Multi-Resolution U-Net 3D neural networks were employed for segmenting these liver regions. Sørensen-Dice coefficients of 0.9726 ± 0.0058, 0.9639 ± 0.0088, and 0.9223 ± 0.0187, and mean volume differences of 32.12 ± 19.40 ml, 22.68 ± 21.67 ml, and 9.44 ± 27.08 ml, were achieved for the liver, right lobe, and left lobe, respectively, compared with the standard of reference (SoR) annotations. Our results show that fully automated 3D volumetry of the liver on routine CT imaging can provide reproducible, quantitative, fast, and accurate results without needing any examiner in the preoperative work-up for hepatobiliary surgery and especially for living donor liver transplantation.
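Converting a segmentation mask into a volume in millilitres is the generic step downstream of the network output; a minimal sketch (illustrative, not the authors' pipeline), assuming the voxel spacing is known from the CT header:

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres.
    spacing_mm: voxel spacing (z, y, x) in mm; 1 mL = 1000 mm^3."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.astype(bool).sum()) * voxel_mm3 / 1000.0

# Toy example: 2000 foreground voxels at 5.0 x 0.8 x 0.8 mm spacing.
mask = np.zeros((10, 20, 20), dtype=np.uint8)
mask[:5, :, :] = 1  # 5 * 20 * 20 = 2000 voxels
print(round(mask_volume_ml(mask, (5.0, 0.8, 0.8)), 2))  # 2000 * 3.2 mm^3 = 6.4 mL
```

With a right-lobe and a left-lobe mask, the same function yields the per-lobe volumes compared against the reference annotations above.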
Affiliation(s)
- Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Phillip Gudlin
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
- Jens M Theysohn
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Arzu Oezcelik
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
- Dieter P Hoyer
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
- Murat Dayangac
- Department of Surgery, Medipol University Hospital, Istanbul, Turkey
- René Hosch
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Johannes Haubold
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Nils Flaschel
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Eugen Malamutmann
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
12
Kroll L, Mathew A, Baldini G, Hosch R, Koitka S, Kleesiek J, Rischpler C, Haubold J, Fuhrer D, Nensa F, Lahner H. CT-derived body composition analysis could possibly replace DXA and BIA to monitor NET-patients. Sci Rep 2022; 12:13419. [PMID: 35927564 PMCID: PMC9352897 DOI: 10.1038/s41598-022-17611-3]
Abstract
Patients with neuroendocrine tumors of gastro-entero-pancreatic origin (GEP-NET) experience changes in fat and muscle composition. Dual-energy X-ray absorptiometry (DXA) and bioelectrical impedance analysis (BIA) are currently used to analyze body composition. Changes thereof could indicate cancer progression or response to treatment. This study examines the correlation between CT-based (computed tomography) body composition analysis (BCA) and DXA or BIA measurement. 74 GEP-NET patients received whole-body [68Ga]-DOTATOC-PET/CT, BIA, and DXA scans. BCA was performed based on the non-contrast-enhanced, 5 mm, whole-body CT images. BCA from CT shows a strong correlation of body fat ratio with DXA (r = 0.95, ρC = 0.83) and BIA (r = 0.92, ρC = 0.76) and of skeletal muscle ratio with BIA (r = 0.81, ρC = 0.49). The deep-learning network achieves highly accurate results (mean Sørensen-Dice score 0.93). Using BCA on routine PET/CT scans to monitor patients' body composition in the diagnostic workflow can reduce additional exams whilst substantially amplifying measurement in slower-progressing cancers such as GEP-NET.
Affiliation(s)
- Lennard Kroll
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Annie Mathew
- Department of Endocrinology, Diabetes and Metabolism and Division of Laboratory Research, University Hospital Essen, Essen, Germany
- Giulia Baldini
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- René Hosch
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Sven Koitka
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Johannes Haubold
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Dagmar Fuhrer
- Department of Endocrinology, Diabetes and Metabolism and Division of Laboratory Research, University Hospital Essen, Essen, Germany
- Felix Nensa
- Institute for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Harald Lahner
- Department of Endocrinology, Diabetes and Metabolism and Division of Laboratory Research, University Hospital Essen, Essen, Germany
13
Hosch R, Weber M, Sraieb M, Flaschel N, Haubold J, Kim MS, Umutlu L, Kleesiek J, Herrmann K, Nensa F, Rischpler C, Koitka S, Seifert R, Kersting D. Artificial intelligence guided enhancement of digital PET: scans as fast as CT? Eur J Nucl Med Mol Imaging 2022; 49:4503-4515. [PMID: 35904589 PMCID: PMC9606065 DOI: 10.1007/s00259-022-05901-x]
Abstract
Purpose Both digital positron emission tomography (PET) detector technologies and artificial intelligence based image post-reconstruction methods make it possible to reduce the PET acquisition time while maintaining diagnostic quality. The aim of this study was to acquire ultra-low-count fluorodeoxyglucose (FDG) ExtremePET images on a digital PET/computed tomography (CT) scanner at an acquisition time comparable to a CT scan and to generate synthetic full-dose PET images using an artificial neural network. Methods This is a prospective, single-arm, single-center phase I/II imaging study. A total of 587 patients were included. For each patient, a standard and an ultra-low-count FDG PET/CT scan (whole-body acquisition time about 30 s) were acquired. A modified pix2pixHD deep-learning network was trained employing 387 data sets as the training and 200 as the test cohort. Three models (PET-only and PET/CT with or without group convolution) were compared. Detectability and quantification were evaluated. Results The PET/CT input model with group convolution performed best regarding lesion signal recovery and was selected for detailed evaluation. Synthetic PET images were of high visual image quality; the mean absolute lesion SUVmax (maximum standardized uptake value) difference was 1.5. Patient-based sensitivity and specificity for lesion detection were 79% and 100%, respectively. Lesions that were not detected had lower tracer uptake and lesion volume. In a matched-pair comparison, the patient-based (lesion-based) detection rate was 89% (78%) for PERCIST (PET response criteria in solid tumors)-measurable and 36% (22%) for non-PERCIST-measurable lesions. Conclusion Lesion detectability and lesion quantification were promising in the context of extremely fast acquisition times. Possible application scenarios might include re-staging of late-stage cancer patients, in whom assessment of total tumor burden can be of higher relevance than detailed evaluation of small and low-uptake lesions.
Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05901-x.
Affiliation(s)
- René Hosch
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Manuel Weber
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
- Miriam Sraieb
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
- Nils Flaschel
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Johannes Haubold
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Moon-Sung Kim
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Lale Umutlu
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Ken Herrmann
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
- Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Christoph Rischpler
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
- Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Robert Seifert
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
- Department of Nuclear Medicine, University Hospital Münster, University of Münster, Albert-Schweitzer-Campus 1, 48149, Münster, Germany
- David Kersting
- Department of Nuclear Medicine and German Cancer Consortium (DKTK), University Hospital Essen, University of Duisburg-Essen, Hufelandstraße 55, 45147, Essen, Germany
Collapse
14
Haubold J, Hosch R, Umutlu L, Wetter A, Haubold P, Radbruch A, Forsting M, Nensa F, Koitka S. Contrast agent dose reduction in computed tomography with deep learning using a conditional generative adversarial network. Eur Radiol 2021; 31:6087-6095. [PMID: 33630160 PMCID: PMC8270814 DOI: 10.1007/s00330-021-07714-2]
Abstract
OBJECTIVES To reduce the dose of intravenous iodine-based contrast media (ICM) in CT through virtual contrast-enhanced images using generative adversarial networks. METHODS Dual-energy CTs in the arterial phase of 85 patients were randomly split into an 80/20 train/test collective. Four different generative adversarial networks (GANs), based on image pairs comprising one image with virtually reduced ICM and the original full-ICM CT slice, were trained, testing two input formats (2D and 2.5D) and two reduced ICM dose levels (-50% and -80%). The amount of intravenous ICM was reduced by creating virtual non-contrast series using dual-energy CT and adding the corresponding percentage of the iodine map. The evaluation was based on several scores (L1 loss, SSIM, PSNR, FID) that measure image quality and similarity. Additionally, a visual Turing test (VTT) with three radiologists was used to assess similarity and pathological consistency. RESULTS The -80% models reach an SSIM of > 98%, a PSNR of > 48, an L1 loss between 7.5 and 8, and an FID between 1.6 and 1.7. In comparison, the -50% models reach an SSIM of > 99%, a PSNR of > 51, an L1 loss between 6.0 and 6.1, and an FID between 0.8 and 0.95. For the crucial question of pathological consistency, only the -50% ICM reduction networks achieved the 100% consistency required for clinical use. CONCLUSIONS The required amount of ICM for CT can be reduced by 50% while maintaining image quality and diagnostic accuracy using GANs. Further phantom studies and animal experiments are required to confirm these initial results. KEY POINTS • The amount of contrast media required for CT can be reduced by 50% using generative adversarial networks. • Not only the image quality but especially the pathological consistency must be evaluated to assess safety. • An overly pronounced contrast media reduction (-80% in our collective) could compromise pathological consistency.
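Two of the similarity scores named in the abstract, L1 loss and PSNR, are simple pixel-wise measures and can be sketched in a few lines of NumPy (SSIM and FID require considerably more machinery). The 8-bit dynamic range of 255 is an assumption for illustration, not a detail taken from the study:

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute pixel error between two images."""
    return float(np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64))))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB for the given dynamic range."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))
```

Lower L1 and higher PSNR both indicate that the virtual-contrast image is closer to the reference, which is why the -50% models (L1 ≈ 6, PSNR > 51) score better than the -80% models on both.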
Affiliation(s)
- Johannes Haubold
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.
- René Hosch
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Lale Umutlu
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Axel Wetter
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Patrizia Haubold
- Department of Diagnostic and Interventional Radiology, Kliniken Essen-Mitte, Essen, Germany
- Michael Forsting
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Felix Nensa
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Sven Koitka
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany; Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
15
Koitka S, Kroll L, Malamutmann E, Oezcelik A, Nensa F. Correction to: Fully automated body composition analysis in routine CT imaging using 3D semantic segmentation convolutional neural networks. Eur Radiol 2020; 31:4402-4403. [PMID: 33245498 PMCID: PMC8128717 DOI: 10.1007/s00330-020-07443-y]
Affiliation(s)
- Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany.
- Lennard Kroll
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Eugen Malamutmann
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
- Arzu Oezcelik
- Department of General, Visceral and Transplantation Surgery, University Hospital Essen, Essen, Germany
- Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
16
Hosch R, Kroll L, Nensa F, Koitka S. Differentiation Between Anteroposterior and Posteroanterior Chest X-Ray View Position With Convolutional Neural Networks. ROFO-FORTSCHR RONTG 2020; 193:168-176. [PMID: 32615636 DOI: 10.1055/a-1183-5227]
Abstract
PURPOSE Detection and validation of the chest X-ray view position with convolutional neural networks to improve meta-information for data cleaning within a hospital data infrastructure. MATERIAL AND METHODS In this paper we developed a convolutional neural network that automatically detects the anteroposterior (AP) and posteroanterior (PA) view position of a chest radiograph. We trained two different network architectures (a VGG variant and ResNet-34) with data published by the RSNA (26 684 radiographs, class distribution 46 % AP, 54 % PA) and validated them on a self-compiled dataset with data from the University Hospital Essen (4507 radiographs, class distribution 55 % PA, 45 % AP) labeled by a human reader. For visualization and better understanding of the network predictions, a Grad-CAM was generated for each network decision. The network results were evaluated based on accuracy, the area under the curve (AUC), and the F1-score against the human reader labels. Finally, a performance comparison between model predictions and DICOM labels was performed. RESULTS The ensemble models reached accuracy and F1-scores greater than 95 %. The AUC reaches more than 0.99 for the ensemble models. The Grad-CAMs provide insight into which anatomical structures contributed to a network decision, and these structures are comparable with the ones a radiologist would use. Furthermore, the trained models were able to generalize over mislabeled examples, which was found by comparing the human reader labels to the predicted labels as well as the DICOM labels. CONCLUSION The results show that certain incorrectly entered meta-information of radiological images can be effectively corrected by deep learning in order to increase data quality in clinical application as well as in research. KEY POINTS · The predictions for both view positions are accurate with respect to external validation data. · The networks based their decisions on anatomical structures and key points that were in line with prior knowledge and human understanding. · Final models were able to detect labeling errors within the test dataset. CITATION FORMAT · Hosch R, Kroll L, Nensa F et al. Differentiation Between Anteroposterior and Posteroanterior Chest X-Ray View Position With Convolutional Neural Networks. Fortschr Röntgenstr 2021; 193: 168-176.
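The per-class F1-score against which the view-position classifier was evaluated can be sketched from paired label lists; the `"AP"`/`"PA"` label strings here are illustrative, not the study's actual encoding:

```python
def f1_score(y_true, y_pred, positive="PA"):
    """F1 for one class (e.g. the PA view) from paired label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0  # no true positives: precision/recall both zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

With a near-balanced class distribution such as the one reported (55 % PA vs. 45 % AP), accuracy and F1 move together, which is why the paper can report both exceeding 95 %.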
Affiliation(s)
- René Hosch
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Germany
- Lennard Kroll
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Germany
- Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Germany
- Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Germany
17
Koitka S, Kim MS, Qu M, Fischer A, Friedrich CM, Nensa F. Mimicking the radiologists' workflow: Estimating pediatric hand bone age with stacked deep neural networks. Med Image Anal 2020; 64:101743. [PMID: 32540698 DOI: 10.1016/j.media.2020.101743]
Abstract
Pediatric endocrinologists regularly order radiographs of the left hand to estimate the degree of bone maturation in order to assess their patients for advanced or delayed growth and physical development, and to monitor consecutive therapeutic measures. Reading such images is a labor-intensive task that requires considerable experience and is normally performed by highly trained experts such as pediatric radiologists. In this paper we build an automated system for pediatric bone age estimation that mimics and accelerates the workflow of the radiologist without breaking it. The complete system is based on two types of neural network models: a detector network, which identifies the ossification areas, and gender- and region-specific regression networks, which estimate the bone age from the detected areas. With a small annotated dataset, an ossification area detection network can be trained that is stable enough to work as part of a multi-stage approach. Furthermore, our system achieves competitive results on the RSNA Pediatric Bone Age Challenge test set with an average error of 4.56 months. In contrast to other approaches, especially purely encoder-based architectures, our two-stage approach provides self-explanatory results. By detecting and evaluating the individual ossification areas, thus simulating the workflow of the Tanner-Whitehouse procedure, the results are interpretable for a radiologist.
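The stacked two-stage design the abstract describes can be sketched schematically. The `detect_regions` and `regressors` callables and the mean aggregation below are illustrative stand-ins, not the paper's actual models or fusion strategy:

```python
from statistics import mean

def estimate_bone_age(image, detect_regions, regressors):
    """Two-stage sketch: a detector yields named ossification-area crops,
    a region-specific regressor scores each crop, and the region-wise
    estimates are aggregated (a simple mean is used here)."""
    estimates = [regressors[name](crop) for name, crop in detect_regions(image)]
    return mean(estimates)
```

Because each region contributes its own estimate, a radiologist can inspect the per-region outputs individually, which is the interpretability argument the abstract makes against purely encoder-based models.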
Affiliation(s)
- Sven Koitka
- University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Hufelandstr. 55, Essen 45147, Germany.
- Moon S Kim
- University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Hufelandstr. 55, Essen 45147, Germany
- Ming Qu
- University of Bonn, Department of Computer Science, Endenicher Allee 19A, Bonn 53115, Germany
- Asja Fischer
- Ruhr University Bochum, Department of Mathematics, Universitätsstr. 150, Bochum 44801, Germany
- Christoph M Friedrich
- University of Applied Sciences and Arts Dortmund, Department of Computer Science, Emil-Figge-Str. 42, Dortmund 44227, Germany; University Hospital Essen, Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), Hufelandstr. 55, Essen 45147, Germany
- Felix Nensa
- University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Hufelandstr. 55, Essen 45147, Germany
|
18
|
Koitka S, Demircioglu A, Kim MS, Friedrich CM, Nensa F. Ossification area localization in pediatric hand radiographs using deep neural networks for object detection. PLoS One 2018; 13:e0207496. [PMID: 30444906 PMCID: PMC6239319 DOI: 10.1371/journal.pone.0207496]
Abstract
BACKGROUND Detection of ossification areas of hand bones in X-ray images is an important task, e.g. as a preprocessing step in automated bone age estimation. Deep neural networks have recently emerged as de facto standard detection methods, but their drawback is the need for large annotated datasets. Finetuning pre-trained networks is a viable alternative, but it is not clear a priori whether training with small annotated datasets will be successful, as this depends on the problem at hand. In this paper, we show that pre-trained networks can be utilized to produce an effective detector of ossification areas in pediatric X-ray images of hands. METHODS AND FINDINGS A publicly available Faster R-CNN network, pre-trained on the COCO dataset, was utilized and finetuned with 240 manually annotated radiographs from the RSNA Pediatric Bone Age Challenge, which comprises over 14,000 pediatric radiographs. The validation is done on another 89 radiographs from the dataset and the performance is measured by Intersection-over-Union (IoU). To understand the effect of the data size on the pre-trained network, subsampling was applied to the training data and the training was repeated. Additionally, the network was trained from scratch without any pre-trained weights. Finally, to understand whether the trained model could be useful, we compared the inference of the network to an annotation of an expert radiologist. The finetuned network was able to achieve an average precision (mAP@0.5IoU) of 92.92 ± 1.93. Apart from the wrist region, all ossification areas were able to benefit from more data. In contrast, the network trained from scratch was not able to produce any correct results. When compared to the annotations of the expert radiologist, the network localized the regions quite well, with an average F1-score of 91.85 ± 1.06. CONCLUSIONS By finetuning a pre-trained deep neural network with 240 annotated radiographs, we were able to successfully detect ossification areas in pediatric hand radiographs.
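The Intersection-over-Union measure underlying the mAP@0.5IoU score is computed per pair of boxes; a minimal sketch for axis-aligned boxes in `(x1, y1, x2, y2)` form (the coordinate convention is an assumption for illustration):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 when boxes are disjoint.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```

A detection counts as correct at the 0.5 threshold used in the paper when `iou(prediction, ground_truth) >= 0.5`.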
Affiliation(s)
- Sven Koitka
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Department of Computer Science, University of Applied Sciences and Arts Dortmund, Dortmund, Germany
- Department of Computer Science, TU Dortmund University, Dortmund, Germany
- Aydin Demircioglu
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Moon S. Kim
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Christoph M. Friedrich
- Department of Computer Science, University of Applied Sciences and Arts Dortmund, Dortmund, Germany
- Institute for Medical Informatics, Biometry, and Epidemiology (IMIBE), University Hospital Essen, Essen, Germany
- Felix Nensa
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
|
19
|
Koitka S, Friedrich C. nmfgpu4R: GPU-Accelerated Computation of the Non-Negative Matrix Factorization (NMF) Using CUDA Capable Hardware. The R Journal 2016. [DOI: 10.32614/rj-2016-053]
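nmfgpu4R offloads the factorization to CUDA-capable hardware; the underlying computation can be sketched on the CPU with the classic Lee-Seung multiplicative updates. This NumPy illustration of the general algorithm is not the package's actual kernels:

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    """Approximate V >= 0 as W @ H with non-negative factors of rank k,
    using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-4  # positive init keeps updates well-defined
    H = rng.random((k, n)) + 1e-4
    for _ in range(iters):
        # Multiplicative updates preserve non-negativity by construction.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)
    return W, H
```

The updates are dominated by dense matrix products, which is exactly why the factorization maps well onto GPU hardware.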