1
Hörst F, Rempe M, Heine L, Seibold C, Keyl J, Baldini G, Ugurel S, Siveke J, Grünwald B, Egger J, Kleesiek J. CellViT: Vision Transformers for precise cell segmentation and classification. Med Image Anal 2024; 94:103143. [PMID: 38507894 DOI: 10.1016/j.media.2024.103143] [Received: 06/30/2023] [Revised: 02/14/2024] [Accepted: 03/12/2024] [Indexed: 03/22/2024]
Abstract
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, the task is challenging due to variance in nuclei staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been used extensively for this task, we explore the potential of Transformer-based networks combined with large-scale pre-training in this domain. We therefore introduce CellViT, a new deep learning architecture based on Vision Transformers for automated instance segmentation of cell nuclei in digitized tissue samples. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
Affiliation(s)
- Fabian Hörst
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Moritz Rempe
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Lukas Heine
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Constantin Seibold
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Clinic for Nuclear Medicine, University Hospital Essen (AöR), 45147 Essen, Germany
- Julius Keyl
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Pathology, University Hospital Essen (AöR), 45147 Essen, Germany
- Giulia Baldini
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen (AöR), 45147 Essen, Germany
- Selma Ugurel
- Department of Dermatology, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany
- Jens Siveke
- West German Cancer Center, partner site Essen, a partnership between German Cancer Research Center (DKFZ) and University Hospital Essen, University Hospital Essen (AöR), 45147 Essen, Germany; Bridge Institute of Experimental Tumor Therapy (BIT) and Division of Solid Tumor Translational Oncology (DKTK), West German Cancer Center Essen, University Hospital Essen (AöR), University of Duisburg-Essen, 45147 Essen, Germany
- Barbara Grünwald
- Department of Urology, West German Cancer Center, University Hospital Essen (AöR), 45147 Essen, Germany; Princess Margaret Cancer Centre, Toronto, Ontario M5G 2M9, Canada
- Jan Egger
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany; Department of Physics, TU Dortmund University, 44227 Dortmund, Germany
2
Ferreira A, Li J, Pomykala KL, Kleesiek J, Alves V, Egger J. Corrigendum to: GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy [Medical Image Analysis 93 (2024)]. Med Image Anal 2024:103174. [PMID: 38609775 DOI: 10.1016/j.media.2024.103174] [Indexed: 04/14/2024]
Affiliation(s)
- André Ferreira
- Institute for AI in Medicine (IKIM), University Hospital Essen, University Duisburg-Essen, Girardetstraße 2, Essen, 45131, Germany; Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal; Computer Algorithms for Medicine Laboratory, Graz, Austria; Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, 52074 Aachen, Germany
- Jianning Li
- Institute for AI in Medicine (IKIM), University Hospital Essen, University Duisburg-Essen, Girardetstraße 2, Essen, 45131, Germany; Computer Algorithms for Medicine Laboratory, Graz, Austria; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Hospital Essen, University Duisburg-Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, Essen, 45147, Germany; TU Dortmund University, Department of Physics, Otto-Hahn-Straße 4, 44227 Dortmund, Germany
- Victor Alves
- Institute for AI in Medicine (IKIM), University Hospital Essen, University Duisburg-Essen, Girardetstraße 2, Essen, 45131, Germany
- Jan Egger
- Institute for AI in Medicine (IKIM), University Hospital Essen, University Duisburg-Essen, Girardetstraße 2, Essen, 45131, Germany; Computer Algorithms for Medicine Laboratory, Graz, Austria; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 8010, Austria
3
Ferreira A, Li J, Pomykala KL, Kleesiek J, Alves V, Egger J. GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy. Med Image Anal 2024; 93:103100. [PMID: 38340545 DOI: 10.1016/j.media.2024.103100] [Received: 12/05/2022] [Revised: 11/20/2023] [Accepted: 01/30/2024] [Indexed: 02/12/2024]
Abstract
With the massive proliferation of data-driven algorithms, such as deep learning-based approaches, the availability of high-quality data is of great interest. Volumetric data are very important in medicine, with applications ranging from disease diagnosis to therapy monitoring. When a dataset is sufficiently large, models can be trained to help doctors with these tasks. Unfortunately, there are scenarios where large amounts of data are unavailable; for example, rare diseases and privacy issues can restrict data availability. In non-medical fields, the high cost of obtaining enough high-quality data can also be a concern. A solution to these problems is the generation of realistic synthetic data using Generative Adversarial Networks (GANs). Such mechanisms are a valuable asset, especially in healthcare, where data must be of good quality, realistic, and free of privacy issues. Accordingly, most publications on volumetric GANs are within the medical domain. In this review, we summarize works that generate realistic volumetric synthetic data using GANs, outlining common architectures, loss functions, and evaluation metrics of GAN-based methods in these areas, including their advantages and disadvantages. We present a novel taxonomy, evaluations, challenges, and research opportunities to provide a holistic overview of the current state of volumetric GANs.
Affiliation(s)
- André Ferreira
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal; Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, 52074 Aachen, Germany
- Jianning Li
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, Essen, 45147, Germany; TU Dortmund University, Department of Physics, Otto-Hahn-Straße 4, 44227 Dortmund, Germany
- Victor Alves
- Center Algoritmi/LASI, University of Minho, Braga, 4710-057, Portugal
- Jan Egger
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, Essen, 45131, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, Essen, 45147, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz, 8010, Austria
4
Hoffmann H, Funke I, Peters P, Venkatesh DK, Egger J, Rivoir D, Röhrig R, Hölzle F, Bodenstedt S, Willemer MC, Speidel S, Puladi B. AIxSuture: vision-based assessment of open suturing skills. Int J Comput Assist Radiol Surg 2024:10.1007/s11548-024-03093-3. [PMID: 38526613 DOI: 10.1007/s11548-024-03093-3] [Received: 01/19/2024] [Accepted: 02/28/2024] [Indexed: 03/27/2024]
Abstract
PURPOSE Efficient and precise surgical skills are essential in ensuring positive patient outcomes. By continuously providing real-time, data-driven, and objective evaluation of surgical performance, automated skill assessment has the potential to greatly improve surgical skill training. Whereas machine learning-based surgical skill assessment is gaining traction for minimally invasive techniques, the same cannot be said for open surgery skills. Open surgery generally has more degrees of freedom than minimally invasive surgery, making it more difficult to interpret. In this paper, we present novel approaches to skill assessment for open surgery skills. METHODS We analyzed a novel video dataset for open suturing training. We provide a detailed analysis of the dataset and define evaluation guidelines, using state-of-the-art deep learning models. Furthermore, we present novel benchmarking results for surgical skill assessment in open suturing. The models are trained to classify a video into three skill levels based on the global rating score. To obtain initial results for video-based surgical skill classification, we benchmarked a temporal segment network with both an I3D and a Video Swin backbone on this dataset. RESULTS The dataset comprises 314 videos of approximately five minutes each. Model benchmarking yielded an accuracy of up to 75% and an F1 score of up to 72%. This is similar to the performance achieved by the individual raters, regarding inter-rater agreement and rater variability. We present the first end-to-end trained approach for skill assessment in open surgery training. CONCLUSION We provide a thorough analysis of a new dataset as well as novel benchmarking results for surgical skill assessment. This opens the door to new advances in skill assessment by enabling video-based skill assessment for classic surgical techniques, with the potential to improve the surgical outcomes of patients.
Affiliation(s)
- Hanna Hoffmann
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Faculty of Medicine, University Hospital Carl Gustav Carus, Dresden, Germany
- BMBF Research Hub 6G-Life, TUD Dresden University of Technology, Dresden, Germany
- Isabel Funke
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Philipp Peters
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Danush Kumar Venkatesh
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- School of Embedded Composite Artificial Intelligence (SECAI), TUD Dresden University of Technology, Dresden, Germany
- Faculty of Medicine, University Hospital Carl Gustav Carus, Dresden, Germany
- Jan Egger
- Institute for AI in Medicine, University Hospital Essen (AöR), Essen, Germany
- Dominik Rivoir
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Rainer Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Sebastian Bodenstedt
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Marie-Christin Willemer
- MITZ, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Faculty of Medicine, University Hospital Carl Gustav Carus, Dresden, Germany
- Stefanie Speidel
- Department of Translational Surgical Oncology, NCT/UCC Dresden, Dresden, Germany
- The Centre for Tactile Internet (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Faculty of Medicine, University Hospital Carl Gustav Carus, Dresden, Germany
- BMBF Research Hub 6G-Life, TUD Dresden University of Technology, Dresden, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
5
Melchior C, Isfort P, Braunschweig T, Witjes M, Van den Bosch V, Rashad A, Egger J, de la Fuente M, Röhrig R, Hölzle F, Puladi B. Development and validation of a cadaveric porcine pseudotumor model for oral cancer biopsy and resection training. BMC Med Educ 2024; 24:250. [PMID: 38500112 PMCID: PMC10949621 DOI: 10.1186/s12909-024-05224-5] [Received: 07/18/2023] [Accepted: 02/23/2024] [Indexed: 03/20/2024]
Abstract
OBJECTIVE The gold standard of oral cancer (OC) treatment is diagnostic confirmation by biopsy followed by surgical treatment. However, studies have shown that dentists have difficulty performing biopsies, dental students lack knowledge about OC, and surgeons do not always maintain a safe margin during tumor resection. To address this, biopsies and resections could be trained under realistic conditions outside the patient. The aim of this study was to develop and validate a porcine pseudotumor model of the tongue. METHODS An interdisciplinary team reflecting the various specialties involved in the oncological treatment of head and neck cancer developed a porcine pseudotumor model of the tongue on which biopsies and resections can be practiced. The refined model was validated in a final trial of 10 participants who each resected four pseudotumors on a tongue, resulting in a total of 40 resected pseudotumors. The participants (7 residents and 3 specialists) had experience in OC treatment ranging from 0.5 to 27 years. Resection margins (minimum and maximum) were assessed macroscopically and compared, along with self-assessed margins and resection time, between residents and specialists. Furthermore, the model was evaluated using Likert-type questions on haptic and radiological fidelity, its usefulness as a training model, and its imageability using CT and ultrasound. RESULTS The model haptically resembles OC (3.0 ± 0.5 on a 4-point Likert scale), can be visualized with medical imaging, and can be evaluated macroscopically immediately after resection, providing feedback. Although participants (3.2 ± 0.4) tended to agree that they had resected the pseudotumor with an ideal safety margin (10 mm), the mean minimum resection margin was insufficient at 4.2 ± 1.2 mm (mean ± SD), comparable to margins reported in the literature. Simultaneously, a maximum resection margin of 18.4 ± 6.1 mm was measured, indicating partial over-resection. Although specialists were faster at resection (p < 0.001), this had no effect on margins (p = 0.114). Overall, the model was well received by the participants, who could see it being implemented in training (3.7 ± 0.5). CONCLUSION The model, which is cost-effective, cryopreservable, and provides a risk-free training environment, is ideal for training in OC biopsy and resection and could be incorporated into dental, medical, or oncologic surgery curricula. Future studies should evaluate the long-term training effects of this model and its potential impact on improving patient outcomes.
Affiliation(s)
- Claire Melchior
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Peter Isfort
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, 52074, Aachen, Germany
- Till Braunschweig
- Institute of Pathology, RWTH Aachen University, 52074, Aachen, Germany
- Institute of Pathology, Faculty of Medicine, Ludwig Maximilians University (LMU), 80337, Munich, Germany
- Max Witjes
- Department of Oral and Maxillofacial Surgery, UMCG Groningen, 9713 GZ Groningen, The Netherlands
- Vincent Van den Bosch
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, 52074, Aachen, Germany
- Ashkan Rashad
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Jan Egger
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), 45147, Essen, Germany
- Institute of Artificial Intelligence in Medicine, Essen University Hospital, 45131, Essen, Germany
- Matías de la Fuente
- Chair of Medical Engineering, RWTH Aachen University, 52074, Aachen, Germany
- Rainer Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Germany
6
Li J, Dada A, Puladi B, Kleesiek J, Egger J. ChatGPT in healthcare: A taxonomy and systematic review. Comput Methods Programs Biomed 2024; 245:108013. [PMID: 38262126 DOI: 10.1016/j.cmpb.2024.108013] [Received: 06/07/2023] [Revised: 12/29/2023] [Accepted: 01/08/2024] [Indexed: 01/25/2024]
Abstract
The recent release of ChatGPT, a chatbot research project/product of natural language processing (NLP) by OpenAI, has stirred up a sensation among both the general public and medical professionals, amassing a phenomenally large user base in a short time. This is a typical example of the 'productization' of cutting-edge technologies, which allows the general public without a technical background to gain firsthand experience in artificial intelligence (AI), similar to the AI hype created by AlphaGo (DeepMind Technologies, UK) and self-driving cars (Google, Tesla, etc.). However, it is crucial, especially for healthcare researchers, to remain prudent amidst the hype. This work provides a systematic review of existing publications on the use of ChatGPT in healthcare, elucidating the 'status quo' of ChatGPT in medical applications for general readers, healthcare professionals, and NLP scientists. The large biomedical literature database PubMed is used to retrieve published works on this topic using the keyword 'ChatGPT'. An inclusion criterion and a taxonomy are further proposed to filter the search results and categorize the selected publications, respectively. The review finds that the current release of ChatGPT achieves only moderate or 'passing' performance in a variety of tests and is unreliable for actual clinical deployment, since it is not intended for clinical applications by design. We conclude that specialized NLP models trained on (bio)medical datasets still represent the right direction to pursue for critical clinical applications.
Affiliation(s)
- Jianning Li
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Amin Dada
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Behrus Puladi
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; TU Dortmund University, Department of Physics, Otto-Hahn-Straße 4, 44227 Dortmund, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine, University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
7
Moors JJE, Xu Z, Xie K, Rashad A, Egger J, Röhrig R, Hölzle F, Puladi B. Full-thickness skin graft versus split-thickness skin graft for radial forearm free flap donor site closure: protocol for a systematic review and meta-analysis. Syst Rev 2024; 13:74. [PMID: 38409059 PMCID: PMC10895847 DOI: 10.1186/s13643-024-02471-x] [Received: 03/23/2023] [Accepted: 01/27/2024] [Indexed: 02/28/2024]
Abstract
BACKGROUND The radial forearm free flap (RFFF) serves as a workhorse for a variety of reconstructions. Although there are various surgical techniques for donor site closure after RFFF raising, the most common are closure with a split-thickness skin graft (STSG) or a full-thickness skin graft (FTSG). The closure can result in wound complications and functional and aesthetic compromise of the forearm and hand. The aim of the planned systematic review and meta-analysis is to compare the wound-related, function-related, and aesthetics-related outcomes associated with full-thickness skin grafts (FTSG) and split-thickness skin grafts (STSG) in radial forearm free flap (RFFF) donor site closure. METHODS A systematic review and meta-analysis will be conducted, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Electronic databases and platforms (PubMed, Embase, Scopus, Web of Science, Cochrane Central Register of Controlled Trials (CENTRAL), China National Knowledge Infrastructure (CNKI)) and clinical trial registries (ClinicalTrials.gov, the German Clinical Trials Register, the ISRCTN registry, the International Clinical Trials Registry Platform) will be searched using predefined search terms until 15 January 2024. A rerun of the search will be carried out within 12 months before publication of the review. Eligible studies should report on the occurrence of donor site complications after raising an RFFF and closure of the defect. Included closure techniques are those using full-thickness or split-thickness skin grafts; primary wound closure without a skin graft is excluded. Outcomes are considered wound-, function-, and aesthetics-related. Included study designs are randomized controlled trials (RCTs) and prospective and retrospective comparative cohort studies. Case-control studies, studies without a control group, animal studies, and cadaveric studies will be excluded. Screening will be performed in a blinded fashion by two reviewers per study, with a third reviewer resolving discrepancies. The risk of bias in the original studies will be assessed using the ROBINS-I and RoB 2 tools. Data synthesis will be done using Review Manager (RevMan) 5.4.1, and a meta-analysis will be conducted if appropriate. Between-study variability will be assessed using the I2 index. If necessary, R will be used. The quality of evidence for outcomes will be assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. DISCUSSION This study's findings may help us understand the complication rates of both closure techniques and may have important implications for developing future guidelines for RFFF donor site management. If the available data are limited and several questions remain unanswered, additional comparative studies will be needed. SYSTEMATIC REVIEW REGISTRATION The protocol was developed in line with the PRISMA-P extension for protocols and was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 17 September 2023 (registration number CRD42023351903).
Affiliation(s)
- Jasper J E Moors
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, Aachen, 52074, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, 52074, Aachen, Germany
- Zhibin Xu
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, Aachen, 52074, Germany
- Kunpeng Xie
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, Aachen, 52074, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, 52074, Aachen, Germany
- Ashkan Rashad
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, Aachen, 52074, Germany
- Jan Egger
- Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen (WTZ), 45122, Essen, Germany
- Institute of Artificial Intelligence in Medicine, Essen University Hospital, 45131, Essen, Germany
- Rainer Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, 52074, Aachen, Germany
- Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, Aachen, 52074, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, Aachen, 52074, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, 52074, Aachen, Germany
8
Puladi B, Gsaxner C, Kleesiek J, Hölzle F, Röhrig R, Egger J. Response to the comment on "The impact and opportunities of large language models like ChatGPT in oral and maxillofacial surgery: a narrative review". Int J Oral Maxillofac Surg 2024:S0901-5027(24)00012-2. [PMID: 38310049 DOI: 10.1016/j.ijom.2023.12.010] [Received: 11/07/2023] [Revised: 11/29/2023] [Accepted: 12/06/2023] [Indexed: 02/05/2024]
Affiliation(s)
- B Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- C Gsaxner
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- J Kleesiek
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- F Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- R Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- J Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Center for Virtual and Extended Reality in Medicine (ZvRM), Essen University Hospital (AöR), Essen, Germany
9
Melzig C, Hartmann S, Steuwe A, Egger J, Do TD, Geisbüsch P, Kauczor HU, Rengier F, Fink MA. BMI-Adapted Double Low-Dose Dual-Source Aortic CT for Endoleak Detection after Endovascular Repair: A Prospective Intra-Individual Diagnostic Accuracy Study. Diagnostics (Basel) 2024; 14:280. [PMID: 38337796 PMCID: PMC10855180 DOI: 10.3390/diagnostics14030280] [Received: 12/05/2023] [Revised: 01/19/2024] [Accepted: 01/25/2024] [Indexed: 02/12/2024]
Abstract
PURPOSE To assess the diagnostic accuracy of BMI-adapted, low-radiation and low-iodine dose, dual-source aortic CT for endoleak detection in non-obese and obese patients following endovascular aortic repair. METHODS In this prospective single-center study, patients referred for follow-up CT after endovascular repair with a history of at least one standard triphasic (native, arterial and delayed phase) routine CT protocol were enrolled. Patients were divided into two groups and allocated to a BMI-adapted (group A, BMI < 30 kg/m2; group B, BMI ≥ 30 kg/m2) double low-dose CT (DLCT) protocol comprising single-energy arterial and dual-energy delayed phase series with virtual non-contrast (VNC) reconstructions. An in-patient comparison of the DLCT and routine CT protocol as reference standard was performed regarding differences in diagnostic accuracy, radiation dose, and image quality. RESULTS Seventy-five patients were included in the study (mean age 73 ± 8 years, 63 (84%) male). Endoleaks were diagnosed in 20 (26.7%) patients, 11 of 53 (20.8%) in group A and 9 of 22 (40.9%) in group B. Two radiologists achieved an overall diagnostic accuracy of 98.7% and 97.3% for endoleak detection, with 100% in group A and 95.5% and 90.9% in group B. All examinations were diagnostic. The DLCT protocol reduced the effective dose from 10.0 ± 3.6 mSv to 6.1 ± 1.5 mSv (p < 0.001) and the total iodine dose from 31.5 g to 14.5 g in group A and to 17.4 g in group B. CONCLUSION Optimized double low-dose dual-source aortic CT with VNC, arterial and delayed phase images demonstrated high diagnostic accuracy for endoleak detection and significant radiation and iodine dose reductions in both obese and non-obese patients compared to the reference standard of triple phase, standard radiation and iodine dose aortic CT.
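The dose savings reported above can be restated as relative reductions; a minimal sketch using only the values given in the abstract (the helper function is illustrative, not from the study):

```python
# Relative dose reductions implied by the reported DLCT results
# (input values taken from the abstract; simple percentage arithmetic).

def percent_reduction(reference: float, reduced: float) -> float:
    """Return the relative reduction from `reference` to `reduced`, in percent."""
    return 100.0 * (reference - reduced) / reference

# Effective radiation dose: 10.0 mSv (routine triphasic CT) vs 6.1 mSv (DLCT)
radiation_saving = percent_reduction(10.0, 6.1)   # 39.0 %

# Total iodine dose: 31.5 g (routine) vs 14.5 g (group A) / 17.4 g (group B)
iodine_saving_a = percent_reduction(31.5, 14.5)   # ~54.0 %
iodine_saving_b = percent_reduction(31.5, 17.4)   # ~44.8 %

print(f"radiation -{radiation_saving:.1f}%, "
      f"iodine A -{iodine_saving_a:.1f}%, iodine B -{iodine_saving_b:.1f}%")
```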
Affiliation(s)
- Claudius Melzig: Clinic for Diagnostic and Interventional Radiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Sibylle Hartmann: Clinic for Diagnostic and Interventional Radiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Andrea Steuwe: Clinic for Diagnostic and Interventional Radiology, Heidelberg University Hospital, 69120 Heidelberg, Germany; Department of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital, Heinrich Heine University Düsseldorf, 40225 Düsseldorf, Germany
- Jan Egger: Institute for AI in Medicine, University Medicine Essen, 45147 Essen, Germany
- Thuy D. Do: Clinic for Diagnostic and Interventional Radiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Philipp Geisbüsch: Department of Vascular and Endovascular Surgery, Heidelberg University Hospital, 69120 Heidelberg, Germany; Department of Vascular and Endovascular Surgery, Klinikum Stuttgart, Katharinenhospital, 70199 Stuttgart, Germany
- Hans-Ulrich Kauczor: Clinic for Diagnostic and Interventional Radiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Fabian Rengier: Clinic for Diagnostic and Interventional Radiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Matthias A. Fink: Clinic for Diagnostic and Interventional Radiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
10
Dada A, Ufer TL, Kim M, Hasin M, Spieker N, Forsting M, Nensa F, Egger J, Kleesiek J. Information extraction from weakly structured radiological reports with natural language queries. Eur Radiol 2024; 34:330-337. PMID: 37505252. DOI: 10.1007/s00330-023-09977-3.
Abstract
OBJECTIVES To provide physicians and researchers with an efficient way to extract information from weakly structured radiology reports with natural language processing (NLP) machine learning models. METHODS We evaluate seven different German bidirectional encoder representations from transformers (BERT) models on a dataset of 857,783 unlabeled radiology reports and an annotated reading comprehension dataset in the format of SQuAD 2.0 based on 1223 additional reports. RESULTS Continued pre-training of a BERT model on the radiology dataset and a medical online encyclopedia resulted in the most accurate model, with an F1-score of 83.97% and an exact-match score of 71.63% for answerable questions and 96.01% accuracy in detecting unanswerable questions. Fine-tuning a non-medical model without further pre-training led to the lowest-performing model. The final model proved stable against variation in the formulation of questions and in dealing with questions on topics excluded from the training set. CONCLUSIONS General-domain BERT models further pre-trained on radiological data achieve high accuracy in answering questions on radiology reports. We propose to integrate our approach into the workflow of medical practitioners and researchers to extract information from radiology reports. CLINICAL RELEVANCE STATEMENT By reducing the need for manual searches of radiology reports, radiologists' resources are freed up, which indirectly benefits patients. KEY POINTS • BERT models pre-trained on general-domain datasets and radiology reports achieve high accuracy (83.97% F1-score) on question answering for radiology reports. • The best performing model achieves an F1-score of 83.97% for answerable questions and 96.01% accuracy for questions without an answer. • Additional radiology-specific pre-training of all investigated BERT models improves their performance.
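The F1 and exact-match figures above are the standard SQuAD-style extractive-QA metrics. A minimal sketch of how they are computed at token level (the official SQuAD evaluation additionally strips punctuation and articles and scores unanswerable questions separately; the example strings are hypothetical):

```python
# Token-level F1 and exact match (EM), the SQuAD-style metrics used to score
# extractive question answering. Simplified: only lowercasing, no punctuation
# or article stripping as in the official evaluation script.
from collections import Counter

def exact_match(prediction: str, reference: str) -> bool:
    """EM: prediction equals the reference answer after normalization."""
    return prediction.strip().lower() == reference.strip().lower()

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall against the reference span."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)   # multiset overlap
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical report QA pair: partial span overlap yields partial credit.
print(token_f1("right lower lobe opacity", "opacity in the right lower lobe"))  # 0.8
```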
Affiliation(s)
- Amin Dada: Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Tim Leon Ufer: Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Moon Kim: Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Max Hasin: Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Michael Forsting: Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Felix Nensa: Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Jan Egger: Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Essen, Germany
- Jens Kleesiek: Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Dr. Krüger MVZ GmbH, Bocholt, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
11
Rempe M, Mentzel F, Pomykala KL, Haubold J, Nensa F, Kroeninger K, Egger J, Kleesiek J. k-strip: A novel segmentation algorithm in k-space for the application of skull stripping. Comput Methods Programs Biomed 2024; 243:107912. PMID: 37981454. DOI: 10.1016/j.cmpb.2023.107912.
Abstract
BACKGROUND AND OBJECTIVE We present a novel deep learning-based skull stripping algorithm for magnetic resonance imaging (MRI) that works directly in the information-rich, complex-valued k-space. METHODS Using four datasets from different institutions with a total of around 200,000 MRI slices, we show that our network can perform skull stripping on raw MRI data while preserving the phase information, which no other skull-stripping algorithm retains. For two of the datasets, skull stripping performed by HD-BET (Brain Extraction Tool) in the image domain served as the ground truth, whereas the third and fourth datasets come with manually annotated brain segmentations. RESULTS On all four datasets, the results closely matched the ground truth (Dice scores of 92%-99% and Hausdorff distances under 5.5 pixels). Slices above the eye region reach Dice scores of up to 99%, whereas accuracy drops in the regions around and below the eyes, with partially blurred output. The output of k-Strip often has smoothed edges at the demarcation to the skull; binary masks are created with an appropriate threshold. CONCLUSION With this proof-of-concept study, we demonstrate the feasibility of working in the k-space frequency domain while preserving phase information, with consistent results. Besides preserving valuable information for further diagnostics, this approach enables immediate anonymization of patient data, even before transformation into the image domain. Future research should be dedicated to discovering additional ways the k-space can be used for innovative image analysis and further workflows.
Affiliation(s)
- Moritz Rempe: Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Department of Physics, Technical University Dortmund, Otto-Hahn-Straße 4a, 44227 Dortmund, Germany
- Florian Mentzel: Department of Physics, Technical University Dortmund, Otto-Hahn-Straße 4a, 44227 Dortmund, Germany
- Kelsey L Pomykala: Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Johannes Haubold: Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Felix Nensa: Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Kevin Kroeninger: Department of Physics, Technical University Dortmund, Otto-Hahn-Straße 4a, 44227 Dortmund, Germany
- Jan Egger: Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Computer Algorithms for Medicine Laboratory, Graz, Austria; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- Jens Kleesiek: Institute for AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
12
Puladi B, Gsaxner C, Kleesiek J, Hölzle F, Röhrig R, Egger J. The impact and opportunities of large language models like ChatGPT in oral and maxillofacial surgery: a narrative review. Int J Oral Maxillofac Surg 2024; 53:78-88. PMID: 37798200. DOI: 10.1016/j.ijom.2023.09.005.
Abstract
Since its release at the end of 2022, the social response to ChatGPT, a large language model (LLM), has been huge, as it has revolutionized the way we communicate with computers. This review was performed to describe the technical background of LLMs and to provide a review of the current literature on LLMs in the field of oral and maxillofacial surgery (OMS). The PubMed, Scopus, and Web of Science databases were searched for LLMs and OMS. Adjacent surgical disciplines were included to cover the entire literature, and records from Google Scholar and medRxiv were added. Out of the 57 records identified, 37 were included; 31 (84%) were related to GPT-3.5, four (11%) to GPT-4, and two (5%) to both. Current research on LLMs is mainly limited to research and scientific writing, patient information/communication, and medical education. Classic OMS diseases are underrepresented. The current literature related to LLMs in OMS has a limited evidence level. There is a need to investigate the use of LLMs scientifically and systematically in the core areas of OMS. Although LLMs are likely to add value outside the operating room, the use of LLMs raises ethical and medical regulatory issues that must first be addressed.
Affiliation(s)
- B Puladi: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- C Gsaxner: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- J Kleesiek: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- F Hölzle: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- R Röhrig: Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- J Egger: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
13
Li J, Gsaxner C, Pepe A, Schmalstieg D, Kleesiek J, Egger J. Sparse convolutional neural network for high-resolution skull shape completion and shape super-resolution. Sci Rep 2023; 13:20229. PMID: 37981641. PMCID: PMC10658170. DOI: 10.1038/s41598-023-47437-6.
Abstract
Traditional convolutional neural network (CNN) methods rely on dense tensors, which makes them suboptimal for spatially sparse data. In this paper, we propose a CNN model based on sparse tensors for efficient processing of high-resolution shapes represented as binary voxel occupancy grids. In contrast to a dense CNN that takes the entire voxel grid as input, a sparse CNN processes only the non-empty voxels, thus avoiding the memory and computation overhead of densely representing sparse input data. We evaluate our method on two clinically relevant skull reconstruction tasks: (1) given a defective skull, reconstruct the complete skull (i.e., skull shape completion), and (2) given a coarse skull, reconstruct a high-resolution skull with fine geometric details (shape super-resolution). Our method outperforms its dense CNN-based counterparts on the skull reconstruction tasks quantitatively and qualitatively, while requiring substantially less memory for training and inference. We observed that, on the 3D skull data, the overall memory consumption of the sparse CNN grows approximately linearly during inference with respect to the image resolution. During training, memory usage grows clearly more slowly than the image resolution: an [Formula: see text] increase in voxel number leads to less than an [Formula: see text] increase in memory requirements. Our study demonstrates the effectiveness of using a sparse CNN for skull reconstruction tasks, and our findings can be applied to other spatially sparse problems. We support this with additional experimental results on other sparse medical datasets, such as the aorta and the heart. Project page at https://github.com/Jianningli/SparseCNN .
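The core idea, storing only the occupied voxels instead of the full dense grid, can be sketched with a coordinate-list (COO) representation. The grid size and occupied region below are hypothetical stand-ins, not the paper's data:

```python
# Dense vs sparse (COO) storage of a binary voxel occupancy grid: dense memory
# grows with res**3, COO memory grows only with the number of occupied voxels.
import numpy as np

res = 128
dense = np.zeros((res, res, res), dtype=np.uint8)
dense[60:68, 60:68, 60:68] = 1          # small occupied region (a "skull" stand-in)

coords = np.argwhere(dense)             # (N, 3) indices of non-empty voxels
occupancy = coords.shape[0] / dense.size

dense_bytes = dense.nbytes              # res**3 bytes for uint8
sparse_bytes = coords.nbytes            # N * 3 integer indices

print(f"occupancy {occupancy:.4%}: dense {dense_bytes} B vs sparse {sparse_bytes} B")
```

At this occupancy the coordinate list is orders of magnitude smaller than the dense grid, which is the saving a sparse CNN exploits by convolving only over `coords`.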
Affiliation(s)
- Jianning Li: Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Christina Gsaxner: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Antonio Pepe: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Dieter Schmalstieg: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Jens Kleesiek: Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Jan Egger: Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
Collapse
|
14
|
Luijten G, Gsaxner C, Li J, Pepe A, Ambigapathy N, Kim M, Chen X, Kleesiek J, Hölzle F, Puladi B, Egger J. 3D surgical instrument collection for computer vision and extended reality. Sci Data 2023; 10:796. PMID: 37951957. PMCID: PMC10640540. DOI: 10.1038/s41597-023-02684-0.
Abstract
The availability of computational hardware and developments in (medical) machine learning (MML) are increasing the clinical usability of medical mixed reality (MMR). Medical instruments have played a vital role in surgery for ages. To further accelerate the implementation of MML and MMR, three-dimensional (3D) datasets of instruments should be publicly available. The proposed data collection consists of 103 3D-scanned medical instruments from clinical routine, captured with structured-light scanners. The collection includes, for example, retractors, forceps, and clamps. It can be augmented by generating similar models with 3D software, yielding an inflated dataset for analysis. The collection can be used for general instrument detection and tracking in operating-room settings, for freeform marker-less instrument registration for tool tracking in augmented reality, and for medical simulation or training scenarios in virtual reality as well as medical diminished reality in mixed reality. We hope to ease research in the fields of MMR and MML, but also to motivate the release of a wider variety of needed surgical instrument datasets.
Affiliation(s)
- Gijs Luijten: Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Inffeldgasse 16/II, 8010 Graz, Austria; Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital (AöR), Girardetstraße 2, 45131 Essen, Germany
- Christina Gsaxner: Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Inffeldgasse 16/II, 8010 Graz, Austria
- Jianning Li: Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital (AöR), Girardetstraße 2, 45131 Essen, Germany
- Antonio Pepe: Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Inffeldgasse 16/II, 8010 Graz, Austria
- Narmada Ambigapathy: Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital (AöR), Girardetstraße 2, 45131 Essen, Germany
- Moon Kim: Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital (AöR), Girardetstraße 2, 45131 Essen, Germany
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, 200240 Shanghai, People's Republic of China
- Jens Kleesiek: Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital (AöR), Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen (WTZ), 45122 Essen, Germany; Technische Universität Dortmund, Fakultät Physik, Otto-Hahn-Straße 4, 44227 Dortmund, Germany
- Frank Hölzle: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Behrus Puladi: Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Jan Egger: Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Inffeldgasse 16/II, 8010 Graz, Austria; Institute for Artificial Intelligence in Medicine (IKIM), Essen University Hospital (AöR), Girardetstraße 2, 45131 Essen, Germany; Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, Hufelandstraße 55, 45147 Essen, Germany
15
Fijačko N, Metličar Š, Kleesiek J, Egger J, Chang TP. Virtual Reality, Augmented Reality, Augmented Virtuality, or Mixed Reality in cardiopulmonary resuscitation: Which Extended Reality am I using for teaching adult basic life support? Resuscitation 2023; 192:109973. PMID: 37730097. DOI: 10.1016/j.resuscitation.2023.109973.
Affiliation(s)
- Nino Fijačko: University of Maribor, Faculty of Health Sciences, Maribor, Slovenia; ERC Research Net, Niel, Belgium; Maribor University Medical Centre, Maribor, Slovenia
- Špela Metličar: University of Maribor, Faculty of Health Sciences, Maribor, Slovenia; Medical Dispatch Centre Maribor, University Clinical Centre Ljubljana, Ljubljana, Slovenia
- Jens Kleesiek: Institute for Artificial Intelligence in Medicine, Essen University Hospital, Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, Essen, Germany; Department of Physics, TU Dortmund University, Dortmund, Germany; German Cancer Consortium, Essen, Germany
- Jan Egger: Institute for Artificial Intelligence in Medicine, Essen University Hospital, Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, Essen, Germany; Center for Virtual and Extended Reality in Medicine, Essen University Hospital, Essen, Germany
- Todd P Chang: Children's Hospital Los Angeles, Las Madrinas Simulation Center, Los Angeles, CA, USA
16
Strack C, Pomykala KL, Schlemmer HP, Egger J, Kleesiek J. "A net for everyone": fully personalized and unsupervised neural networks trained with longitudinal data from a single patient. BMC Med Imaging 2023; 23:174. PMID: 37907876. PMCID: PMC10619304. DOI: 10.1186/s12880-023-01128-w.
Abstract
BACKGROUND With the rise in importance of personalized medicine and deep learning, we combine the two to create personalized neural networks. The aim of the study is to provide a proof of concept that data from just one patient can be used to train deep neural networks to detect tumor progression in longitudinal datasets. METHODS Two datasets with 64 scans from 32 patients with glioblastoma multiforme (GBM) were evaluated in this study. The contrast-enhanced T1w sequences of brain magnetic resonance imaging (MRI) were used. We trained a neural network for each patient using just two scans from different timepoints to map the difference between the images; the change in tumor volume can be calculated from this map. The neural networks were a form of Wasserstein GAN (generative adversarial network), an unsupervised learning architecture. The combination of data augmentation and the network architecture allowed us to skip the co-registration of the images. Furthermore, no additional training data, pre-training of the networks, or any (manual) annotations are necessary. RESULTS The model achieved an AUC score of 0.87 for tumor change. We also introduce modified RANO criteria, for which an accuracy of 66% is achieved. CONCLUSIONS We show a novel approach to deep learning, using data from just one patient to train deep neural networks to monitor tumor change. Evaluating the results on two different datasets shows the potential of the method to generalize.
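The downstream quantity the per-patient networks feed into is a change in tumor volume between two timepoints. A toy sketch of that step with binary masks; the voxel spacing, the masks, and the 25% progression threshold are illustrative assumptions, not the paper's modified RANO definition:

```python
# Volume change between two timepoints from binary tumor masks, with a simple
# threshold-based change label. All numbers here are hypothetical.
import numpy as np

VOXEL_VOLUME_ML = 0.001  # assumed 1 mm^3 isotropic voxels

def volume_ml(mask: np.ndarray) -> float:
    """Tumor volume in ml: occupied voxels times per-voxel volume."""
    return float(mask.sum()) * VOXEL_VOLUME_ML

def classify_change(mask_t0: np.ndarray, mask_t1: np.ndarray,
                    threshold: float = 0.25) -> str:
    """Label relative volume change: grow/shrink beyond threshold, else stable."""
    v0, v1 = volume_ml(mask_t0), volume_ml(mask_t1)
    rel = (v1 - v0) / v0 if v0 > 0 else float("inf")
    if rel >= threshold:
        return "progression"
    if rel <= -threshold:
        return "response"
    return "stable"

t0 = np.zeros((32, 32, 32), bool); t0[10:20, 10:20, 10:20] = True   # 1000 voxels
t1 = np.zeros((32, 32, 32), bool); t1[10:21, 10:21, 10:21] = True   # 1331 voxels
print(classify_change(t0, t1))   # 33.1% growth -> "progression"
```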
Affiliation(s)
- Christian Strack: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; Division of Radiology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany; Medical Faculty Heidelberg, Heidelberg University, 69120 Heidelberg, Germany
- Kelsey L Pomykala: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Heinz-Peter Schlemmer: Division of Radiology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
- Jan Egger: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- Jens Kleesiek: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany; Department of Physics, TU Dortmund University, Otto-Hahn-Straße 4, 44227 Dortmund, Germany
17
Pepe A, Egger J, Codari M, Willemink MJ, Gsaxner C, Li J, Roth PM, Schmalstieg D, Mistelbauer G, Fleischmann D. Automated cross-sectional view selection in CT angiography of aortic dissections with uncertainty awareness and retrospective clinical annotations. Comput Biol Med 2023; 165:107365. PMID: 37647783. DOI: 10.1016/j.compbiomed.2023.107365.
Abstract
Surveillance imaging of patients with chronic aortic diseases, such as aneurysms and dissections, relies on obtaining and comparing cross-sectional diameter measurements along the aorta at predefined aortic landmarks, over time. The orientation of the cross-sectional measuring planes at each landmark is currently defined manually by highly trained operators. Centerline-based approaches are unreliable in patients with chronic aortic dissection, because of the asymmetric flow channels, differences in contrast opacification, and presence of mural thrombus, making centerline computations or measurements difficult to generate and reproduce. In this work, we present three alternative approaches - INS, MCDS, MCDbS - based on convolutional neural networks and uncertainty quantification methods to predict the orientation (ϕ,θ) of such cross-sectional planes. For the monitoring of chronic aortic dissections, we show how a dataset of 162 CTA volumes with overall 3273 imperfect manual annotations routinely collected in a clinic can be efficiently used to accomplish this task, despite the presence of non-negligible interoperator variabilities in terms of mean absolute error (MAE) and 95% limits of agreement (LOA). We show how, despite the large limits of agreement in the training data, the trained model provides faster and more reproducible results than either an expert user or a centerline method. The remaining disagreement lies within the variability produced by three independent expert annotators and matches the current state of the art, providing a similar error, but in a fraction of the time.
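The networks predict a cross-sectional plane orientation as two angles (ϕ, θ); together with a landmark point, those angles fix a measuring plane via its unit normal. A minimal sketch, assuming the common physics convention (θ = polar angle from the z-axis, ϕ = azimuth), which is our assumption and not necessarily the paper's parameterization:

```python
# Spherical angles (phi, theta) -> unit normal of a cross-sectional plane.
# Convention assumed: theta measured from z, phi around z (ISO physics style).
import numpy as np

def plane_normal(phi: float, theta: float) -> np.ndarray:
    """Unit normal vector for spherical angles given in radians."""
    return np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])

n = plane_normal(phi=0.0, theta=0.0)
assert np.allclose(n, [0.0, 0.0, 1.0])   # theta = 0 -> axial plane (normal along z)
assert np.isclose(np.linalg.norm(plane_normal(1.2, 0.7)), 1.0)  # always unit length
```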
Affiliation(s)
- Antonio Pepe: Graz University of Technology, Institute of Computer Graphics and Vision, Inffeldgasse 16/II, 8010 Graz, Austria; Stanford University, School of Medicine, 3D and Quantitative Imaging Lab, 300 Pasteur Drive, Stanford, CA 94305, USA; Computer Algorithms for Medicine (Café) Laboratory, Graz, Austria
- Jan Egger: Computer Algorithms for Medicine (Café) Laboratory, Graz, Austria; University Medicine Essen, Institute for AI in Medicine (IKIM), Girardetstraße 2, 45131 Essen, Germany
- Marina Codari: Stanford University, School of Medicine, 3D and Quantitative Imaging Lab, 300 Pasteur Drive, Stanford, CA 94305, USA
- Martin J Willemink: Stanford University, School of Medicine, 3D and Quantitative Imaging Lab, 300 Pasteur Drive, Stanford, CA 94305, USA
- Christina Gsaxner: Graz University of Technology, Institute of Computer Graphics and Vision, Inffeldgasse 16/II, 8010 Graz, Austria; Computer Algorithms for Medicine (Café) Laboratory, Graz, Austria
- Jianning Li: Computer Algorithms for Medicine (Café) Laboratory, Graz, Austria; University Medicine Essen, Institute for AI in Medicine (IKIM), Girardetstraße 2, 45131 Essen, Germany
- Peter M Roth: Graz University of Technology, Institute of Computer Graphics and Vision, Inffeldgasse 16/II, 8010 Graz, Austria
- Dieter Schmalstieg: Graz University of Technology, Institute of Computer Graphics and Vision, Inffeldgasse 16/II, 8010 Graz, Austria
- Gabriel Mistelbauer: Stanford University, School of Medicine, 3D and Quantitative Imaging Lab, 300 Pasteur Drive, Stanford, CA 94305, USA
- Dominik Fleischmann: Stanford University, School of Medicine, 3D and Quantitative Imaging Lab, 300 Pasteur Drive, Stanford, CA 94305, USA
18
Hörst F, Ting S, Liffers ST, Pomykala KL, Steiger K, Albertsmeier M, Angele MK, Lorenzen S, Quante M, Weichert W, Egger J, Siveke JT, Kleesiek J. Histology-Based Prediction of Therapy Response to Neoadjuvant Chemotherapy for Esophageal and Esophagogastric Junction Adenocarcinomas Using Deep Learning. JCO Clin Cancer Inform 2023; 7:e2300038. PMID: 37527475. DOI: 10.1200/cci.23.00038.
Abstract
PURPOSE Quantifying treatment response to gastroesophageal junction (GEJ) adenocarcinomas is crucial to provide an optimal therapeutic strategy. Routinely taken tissue samples provide an opportunity to enhance existing positron emission tomography-computed tomography (PET/CT)-based therapy response evaluation. Our objective was to investigate if deep learning (DL) algorithms are capable of predicting the therapy response of patients with GEJ adenocarcinoma to neoadjuvant chemotherapy on the basis of histologic tissue samples. METHODS This diagnostic study recruited 67 patients with I-III GEJ adenocarcinoma from the multicentric nonrandomized MEMORI trial including three German university hospitals TUM (University Hospital Rechts der Isar, Munich), LMU (Hospital of the Ludwig-Maximilians-University, Munich), and UME (University Hospital Essen, Essen). All patients underwent baseline PET/CT scans and esophageal biopsy before and 14-21 days after treatment initiation. Treatment response was defined as a ≥35% decrease in SUVmax from baseline. Several DL algorithms were developed to predict PET/CT-based responders and nonresponders to neoadjuvant chemotherapy using digitized histopathologic whole slide images (WSIs). RESULTS The resulting models were trained on TUM (n = 25 pretherapy, n = 47 on-therapy) patients and evaluated on our internal validation cohort from LMU and UME (n = 17 pretherapy, n = 15 on-therapy). Compared with multiple architectures, the best pretherapy network achieves an area under the receiver operating characteristic curve (AUROC) of 0.81 (95% CI, 0.61 to 1.00), an area under the precision-recall curve (AUPRC) of 0.82 (95% CI, 0.61 to 1.00), a balanced accuracy of 0.78 (95% CI, 0.60 to 0.94), and a Matthews correlation coefficient (MCC) of 0.55 (95% CI, 0.18 to 0.88). 
The best on-therapy network achieves an AUROC of 0.84 (95% CI, 0.64 to 1.00), an AUPRC of 0.82 (95% CI, 0.56 to 1.00), a balanced accuracy of 0.80 (95% CI, 0.65 to 1.00), and an MCC of 0.71 (95% CI, 0.38 to 1.00). CONCLUSION Our results show that DL algorithms can predict treatment response to neoadjuvant chemotherapy using WSIs with high accuracy even before therapy initiation, suggesting the presence of predictive morphologic tissue biomarkers.
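The PET/CT response criterion used in this study is a fixed relative change in SUVmax, which reduces to a one-line rule. A minimal sketch for illustration (the function name and example values are ours, not from the trial data):

```python
def is_responder(suvmax_baseline: float, suvmax_on_therapy: float,
                 threshold: float = 0.35) -> bool:
    """Treatment response per the study's definition: a >= 35% decrease
    in SUVmax from the baseline PET/CT to the on-therapy scan."""
    if suvmax_baseline <= 0:
        raise ValueError("baseline SUVmax must be positive")
    decrease = (suvmax_baseline - suvmax_on_therapy) / suvmax_baseline
    return decrease >= threshold

# Illustrative values only:
print(is_responder(12.0, 7.0))   # ~41.7% decrease -> True
print(is_responder(12.0, 9.0))   # 25% decrease -> False
```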
Affiliation(s)
- Fabian Hörst
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Saskia Ting
- Institute of Pathology, University Hospital Essen (AöR), University of Duisburg-Essen, Essen, Germany
- Current address: Institute of Pathology Nordhessen, Kassel, Germany
- Sven-Thorsten Liffers
- Bridge Institute of Experimental Tumor Therapy, West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK, Partner site Essen) and German Cancer Research Center (DKFZ), Heidelberg, Germany
- Kelsey L Pomykala
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Katja Steiger
- Institute of Pathology, Technical University of Munich (TUM), Munich, Germany
- Markus Albertsmeier
- Department of General, Visceral and Transplantation Surgery, LMU University Hospital, Ludwig-Maximilians-Universität (LMU) Munich, Munich, Germany
- Martin K Angele
- Department of General, Visceral and Transplantation Surgery, LMU University Hospital, Ludwig-Maximilians-Universität (LMU) Munich, Munich, Germany
- Sylvie Lorenzen
- Clinic for Internal Medicine III, University Hospital rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Michael Quante
- Clinic for Internal Medicine II, Gastrointestinal Oncology, University Medical Center of Freiburg, Freiburg, Germany
- Department of Internal Medicine II, University Hospital rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Wilko Weichert
- Institute of Pathology, Technical University of Munich (TUM), Munich, Germany
- German Cancer Consortium (DKTK), Heidelberg, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Jens T Siveke
- Bridge Institute of Experimental Tumor Therapy, West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK, Partner site Essen) and German Cancer Research Center (DKFZ), Heidelberg, Germany
- West German Cancer Center, Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University Duisburg-Essen, Essen, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK, Partner site Essen), Heidelberg, Germany
19
Li J, Ellis DG, Kodym O, Rauschenbach L, Rieß C, Sure U, Wrede KH, Alvarez CM, Wodzinski M, Daniol M, Hemmerling D, Mahdi H, Clement A, Kim E, Fishman Z, Whyne CM, Mainprize JG, Hardisty MR, Pathak S, Sindhura C, Gorthi RKSS, Kiran DV, Gorthi S, Yang B, Fang K, Li X, Kroviakov A, Yu L, Jin Y, Pepe A, Gsaxner C, Herout A, Alves V, Španěl M, Aizenberg MR, Kleesiek J, Egger J. Towards clinical applicability and computational efficiency in automatic cranial implant design: An overview of the AutoImplant 2021 cranial implant design challenge. Med Image Anal 2023; 88:102865. PMID: 37331241. DOI: 10.1016/j.media.2023.102865.
Abstract
Cranial implants are commonly used for surgical repair of craniectomy-induced skull defects. These implants are usually generated offline and may require days to weeks to be available. An automated implant design process combined with onsite manufacturing facilities can guarantee immediate implant availability and avoid secondary intervention. To address this need, the AutoImplant II challenge was organized in conjunction with MICCAI 2021, catering for the unmet clinical and computational requirements of automatic cranial implant design. The first edition of AutoImplant (AutoImplant I, 2020) demonstrated the general capabilities and effectiveness of data-driven approaches, including deep learning, for a skull shape completion task on synthetic defects. The second AutoImplant challenge (i.e., AutoImplant II, 2021) built upon the first by adding real clinical craniectomy cases as well as additional synthetic imaging data. The AutoImplant II challenge consisted of three tracks. Tracks 1 and 3 used skull images with synthetic defects to evaluate the ability of submitted approaches to generate implants that recreate the original skull shape. Track 3 consisted of the data from the first challenge (i.e., 100 cases for training, and 110 for evaluation), and Track 1 provided 570 training and 100 validation cases aimed at evaluating skull shape completion algorithms at diverse defect patterns. Track 2 also made progress over the first challenge by providing 11 clinically defective skulls and evaluating the submitted implant designs on these clinical cases. The submitted designs were evaluated quantitatively against imaging data from post-craniectomy as well as by an experienced neurosurgeon. Submissions to these challenge tasks made substantial progress in addressing issues such as generalizability, computational efficiency, data augmentation, and implant refinement. 
This paper serves as a comprehensive summary and comparison of the submissions to the AutoImplant II challenge. Codes and models are available at https://github.com/Jianningli/Autoimplant_II.
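In the skull shape completion formulation used by Tracks 1 and 3, a network predicts the complete skull, and the implant can then be derived as the voxel-wise difference between the completed and the defective volume. A toy sketch of that post-processing step (the arrays and names are illustrative, not taken from any challenge submission):

```python
import numpy as np

def implant_from_completion(defective: np.ndarray, completed: np.ndarray) -> np.ndarray:
    """The implant is the bone present in the completed skull but absent
    in the defective input: a voxel-wise logical difference."""
    return np.logical_and(completed.astype(bool), ~defective.astype(bool))

# Toy 1D "volume": a hole of three voxels in an otherwise intact segment.
defective = np.array([1, 1, 0, 0, 0, 1, 1])
completed = np.array([1, 1, 1, 1, 1, 1, 1])
implant = implant_from_completion(defective, completed)
print(implant.astype(int))  # [0 0 1 1 1 0 0]
```

In 3D the same operation runs on binary CT segmentations; real pipelines additionally smooth the result and trim spurious voxels before manufacturing.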
Affiliation(s)
- Jianning Li
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria.
- David G Ellis
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Oldřich Kodym
- Graph@FIT, Brno University of Technology, Brno, Czech Republic
- Laurèl Rauschenbach
- Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Christoph Rieß
- Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Ulrich Sure
- Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Karsten H Wrede
- Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Carlos M Alvarez
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Marek Wodzinski
- AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland; University of Applied Sciences Western Switzerland (HES-SO Valais), Information Systems Institute, Sierre, Switzerland
- Mateusz Daniol
- AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland
- Daria Hemmerling
- AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland
- Hamza Mahdi
- Sunnybrook Research Institute, Toronto, ON, Canada
- Evan Kim
- Sunnybrook Research Institute, Toronto, ON, Canada
- Cari M Whyne
- Sunnybrook Research Institute, Toronto, ON, Canada; Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, M5T 1P5, Canada
- James G Mainprize
- Sunnybrook Research Institute, Toronto, ON, Canada; Calavera Surgical Design Inc., Toronto, ON, Canada
- Michael R Hardisty
- Sunnybrook Research Institute, Toronto, ON, Canada; Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, M5T 1P5, Canada
- Shashwat Pathak
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Chitimireddy Sindhura
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Degala Venkata Kiran
- Department of Mechanical Engineering, Indian Institute of Technology, Tirupati, India
- Subrahmanyam Gorthi
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Bokai Yang
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Ke Fang
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Xingyu Li
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Artem Kroviakov
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Lei Yu
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Adam Herout
- Graph@FIT, Brno University of Technology, Brno, Czech Republic
- Victor Alves
- ALGORITMI Research Centre/LASI, University of Minho, Braga, Portugal
- Michele R Aizenberg
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jan Egger
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria.
20
Kim M, Seifert R, Fragemann J, Kersting D, Murray J, Jonske F, Pomykala KL, Egger J, Fendler WP, Herrmann K, Kleesiek J. Evaluation of thresholding methods for the quantification of [68Ga]Ga-PSMA-11 PET molecular tumor volume and their effect on survival prediction in patients with advanced prostate cancer undergoing [177Lu]Lu-PSMA-617 radioligand therapy. Eur J Nucl Med Mol Imaging 2023; 50:2196-2209. PMID: 36859618. PMCID: PMC10199857. DOI: 10.1007/s00259-023-06163-x.
Abstract
PURPOSE The aim of this study was to systematically evaluate the effect of thresholding algorithms used in computer vision for the quantification of prostate-specific membrane antigen positron emission tomography (PET)-derived tumor volume (PSMA-TV) in patients with advanced prostate cancer. The results were validated with respect to the prognostication of overall survival in patients with advanced-stage prostate cancer. MATERIALS AND METHODS A total of 78 patients who underwent [177Lu]Lu-PSMA-617 radionuclide therapy from January 2018 to December 2020 were retrospectively included in this study. [68Ga]Ga-PSMA-11 PET images, acquired prior to radionuclide therapy, were used for the analysis of thresholding algorithms. All PET images were first analyzed semi-automatically using a pre-evaluated, proprietary software solution as the baseline method. Subsequently, five histogram-based thresholding methods and two local adaptive thresholding methods that are well established in computer vision were applied to quantify molecular tumor volume. The resulting whole-body molecular tumor volumes were validated with respect to the prognostication of overall patient survival as well as their statistical correlation with the baseline methods and their performance on standardized phantom scans. RESULTS The whole-body PSMA-TVs, quantified using different thresholding methods, demonstrate a high positive correlation with the baseline methods. We observed the highest correlation with generalized histogram thresholding (GHT) (Pearson r (r), p value (p): r = 0.977, p < 0.001) and Sauvola thresholding (r = 0.974, p < 0.001) and the lowest correlation with Multiotsu (r = 0.877, p < 0.001) and Yen thresholding methods (r = 0.878, p < 0.001). The median survival time of all patients was 9.87 months (95% CI [9.3 to 10.13]).
Stratification by median whole-body PSMA-TV resulted in a median survival time from 11.8 to 13.5 months for the patient group with lower tumor burden and 6.5 to 6.6 months for the patient group with higher tumor burden. The patient group with lower tumor burden had significantly higher probability of survival (p < 0.00625) in eight out of nine thresholding methods (Fig. 2); those methods were SUVmax50 (p = 0.0038), SUV ≥3 (p = 0.0034), Multiotsu (p = 0.0015), Yen (p = 0.0015), Niblack (p = 0.001), Sauvola (p = 0.0001), Otsu (p = 0.0053), and Li thresholding (p = 0.0053). CONCLUSION Thresholding methods commonly used in computer vision are promising tools for the semiautomatic quantification of whole-body PSMA-TV in [68Ga]Ga-PSMA-11-PET. The proposed algorithm-driven thresholding strategy is less arbitrary and less prone to biases than thresholding with predefined values, potentially improving the application of whole-body PSMA-TV as an imaging biomarker.
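As an illustration of the histogram-based family evaluated in this study, here is a self-contained Otsu implementation applied to a synthetic SUV map. The data, bin count, and voxel volume are invented for the example; the paper's actual pipeline, and the GHT, Sauvola, and other variants it compares, differ in detail:

```python
import numpy as np

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Histogram-based Otsu threshold: pick the cut that maximises the
    between-class variance of the two resulting intensity classes."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (w[:i] * centers[:i]).sum() / w0  # mean of lower class
        m1 = (w[i:] * centers[i:]).sum() / w1  # mean of upper class
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return float(best_t)

def molecular_tumor_volume(suv: np.ndarray, voxel_volume_ml: float) -> float:
    """PSMA-TV as the volume of voxels above an algorithm-derived
    threshold, instead of a predefined cut-off such as SUV >= 3."""
    t = otsu_threshold(suv)
    return float((suv >= t).sum()) * voxel_volume_ml

# Synthetic SUV map: low-uptake background plus a small high-uptake lesion.
rng = np.random.default_rng(0)
suv = np.concatenate([rng.normal(1.0, 0.2, 1000), rng.normal(8.0, 1.0, 50)])
mtv = molecular_tumor_volume(suv, voxel_volume_ml=0.1)
print(round(mtv, 1))  # close to 5.0: the ~50 lesion voxels at 0.1 ml each
```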
Affiliation(s)
- Moon Kim
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany.
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany.
- Robert Seifert
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany
- German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- Jana Fragemann
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
- David Kersting
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany
- Jacob Murray
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
- Frederic Jonske
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), Essen, Germany
- Kelsey L Pomykala
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
- Wolfgang P Fendler
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany
- Ken Herrmann
- Department of Nuclear Medicine, University Hospital Essen, Essen, Germany
- German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
- German Cancer Consortium (DKTK), University Hospital Essen, Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), Essen, Germany
21
Ester O, Hörst F, Seibold C, Keyl J, Ting S, Vasileiadis N, Schmitz J, Ivanyi P, Grünwald V, Bräsen JH, Egger J, Kleesiek J. Valuing vicinity: Memory attention framework for context-based semantic segmentation in histopathology. Comput Med Imaging Graph 2023; 107:102238. PMID: 37207396. DOI: 10.1016/j.compmedimag.2023.102238.
Abstract
The segmentation of histopathological whole slide images into tumourous and non-tumourous types of tissue is a challenging task that requires the consideration of both local and global spatial contexts to classify tumourous regions precisely. The identification of subtypes of tumour tissue complicates the issue as the sharpness of separation decreases and the pathologist's reasoning is even more guided by spatial context. However, the identification of detailed tissue types is crucial for providing personalized cancer therapies. Due to the high resolution of whole slide images, existing semantic segmentation methods, restricted to isolated image sections, are incapable of processing context information beyond these sections. To take a step towards better context comprehension, we propose a patch neighbour attention mechanism to query the neighbouring tissue context from a patch embedding memory bank and infuse context embeddings into bottleneck hidden feature maps. Our memory attention framework (MAF) mimics a pathologist's annotation procedure - zooming out and considering surrounding tissue context. The framework can be integrated into any encoder-decoder segmentation method. We evaluate the MAF on two public breast cancer and liver cancer data sets and an internal kidney cancer data set using well-established segmentation models (U-Net, DeepLabV3) and demonstrate its superiority over other context-integrating algorithms - achieving a substantial improvement of up to 17% in the Dice score. The code is publicly available at https://github.com/tio-ikim/valuing-vicinity.
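A plausible numpy sketch of the core idea: the bottleneck features of the current patch cross-attend over neighbour embeddings held in a memory bank, and the attended context is added back to the bottleneck. All dimensions and names are illustrative; the actual MAF implementation lives in the linked repository:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def neighbour_attention(bottleneck: np.ndarray, memory: np.ndarray) -> np.ndarray:
    """Cross-attention sketch: bottleneck vectors (queries) attend over
    neighbouring-patch embeddings from a memory bank (keys/values); the
    resulting context is infused residually into the bottleneck."""
    d = bottleneck.shape[-1]
    scores = bottleneck @ memory.T / np.sqrt(d)   # (n_query, n_neighbours)
    context = softmax(scores, axis=-1) @ memory   # (n_query, d)
    return bottleneck + context                   # residual infusion

rng = np.random.default_rng(1)
bottleneck = rng.normal(size=(4, 16))   # 4 spatial positions, 16-dim features
memory = rng.normal(size=(8, 16))       # embeddings of 8 neighbouring patches
out = neighbour_attention(bottleneck, memory)
print(out.shape)  # (4, 16)
```

A trained model would use learned query/key/value projections; they are omitted here to keep the mechanism itself visible.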
Affiliation(s)
- Oliver Ester
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Fabian Hörst
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany.
- Constantin Seibold
- Institute of Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Julius Keyl
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Institute of Pathology, University Hospital Essen (AöR), University of Duisburg-Essen, Essen, Germany
- Saskia Ting
- Institute of Pathology, University Hospital Essen (AöR), University of Duisburg-Essen, Essen, Germany; Institute of Pathology Nordhessen, Kassel, Germany
- Nikolaos Vasileiadis
- Nephropathology Unit, Institute for Pathology, Hannover Medical School, Hannover, Germany
- Jessica Schmitz
- Nephropathology Unit, Institute for Pathology, Hannover Medical School, Hannover, Germany
- Philipp Ivanyi
- Department of Hematology, Hemostasis, Oncology and Stem Cell Transplantation, Hannover Medical School, Hannover, Germany
- Viktor Grünwald
- Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany; Clinic for Medical Oncology, Clinic for Urology, West German Cancer Center, University Hospital Essen (AöR), Essen, Germany
- Jan Hinrich Bräsen
- Nephropathology Unit, Institute for Pathology, Hannover Medical School, Hannover, Germany
- Jan Egger
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Germany
22
Kleesiek J, Wu Y, Stiglic G, Egger J, Bian J. An Opinion on ChatGPT in Health Care-Written by Humans Only. J Nucl Med 2023; 64:701-703. PMID: 37055219. DOI: 10.2967/jnumed.123.265687.
Affiliation(s)
- Jens Kleesiek
- Institute for AI in Medicine, University Medicine Essen, Essen, Germany
- Yonghui Wu
- Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida
- Gregor Stiglic
- Faculty of Health Sciences, University of Maribor, Maribor, Slovenia
- Jan Egger
- Institute for AI in Medicine, University Medicine Essen, Essen, Germany
- Jiang Bian
- Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida
23
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. PMID: 36706637. DOI: 10.1016/j.media.2023.102757.
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016, until the year of 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, nevertheless only few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria.
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
24
Gonçalves M, Gsaxner C, Ferreira A, Li J, Puladi B, Kleesiek J, Egger J, Alves V. Radiomics in Head and Neck Cancer Outcome Predictions. Diagnostics (Basel) 2022; 12:2733. PMID: 36359576. PMCID: PMC9689406. DOI: 10.3390/diagnostics12112733.
Abstract
Head and neck cancer has great regional anatomical complexity, as it can develop in different structures, exhibiting diverse tumour manifestations and high intratumoural heterogeneity, which is highly related to resistance to treatment, progression, the appearance of metastases, and tumour recurrences. Radiomics has the potential to address these obstacles by extracting quantitative, measurable, and extractable features from the region of interest in medical images. Medical imaging is a common source of information in clinical practice, presenting a potential alternative to biopsy, as it allows the extraction of a large number of features that, although not visible to the naked eye, may be relevant for tumour characterisation. Taking advantage of machine learning techniques, the set of features extracted when associated with biological parameters can be used for diagnosis, prognosis, and predictive accuracy valuable for clinical decision-making. Therefore, the main goal of this contribution was to determine to what extent the features extracted from Computed Tomography (CT) are related to cancer prognosis, namely Locoregional Recurrences (LRs), the development of Distant Metastases (DMs), and Overall Survival (OS). Through the set of tumour characteristics, predictive models were developed using machine learning techniques. The tumour was described by radiomic features, extracted from images, and by the clinical data of the patient. The performance of the models demonstrated that the most successful algorithm was XGBoost, and the inclusion of the patients' clinical data was an asset for cancer prognosis. Under these conditions, models were created that can reliably predict the LR, DM, and OS status, with the area under the ROC curve (AUC) values equal to 0.74, 0.84, and 0.91, respectively. In summary, the promising results obtained show the potential of radiomics, once the considered cancer prognosis can, in fact, be expressed through CT scans.
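The radiomic features referred to above are hand-crafted statistics of the ROI intensities. A toy sketch of a few first-order examples follows; the feature set and bin count are illustrative, and the study's full pipeline uses a much richer feature family with XGBoost on top:

```python
import numpy as np

def first_order_features(roi: np.ndarray, bins: int = 32) -> dict:
    """A few first-order radiomic features of a CT region of interest:
    intensity statistics plus the Shannon entropy of the intensity
    histogram."""
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking logarithms
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "min": float(roi.min()),
        "max": float(roi.max()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(42)
roi = rng.normal(40.0, 10.0, size=(16, 16, 8))  # toy soft-tissue HU values
features = first_order_features(roi)
print(sorted(features))  # ['entropy', 'max', 'mean', 'min', 'std']
```

Feature vectors of this kind, concatenated with clinical variables, form the tabular input to the machine learning models described above.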
Affiliation(s)
- Maria Gonçalves
- Center Algoritmi, LASI, University of Minho, 4710-057 Braga, Portugal
- Computer Algorithms for Medicine Laboratory, 8010 Graz, Austria
- Christina Gsaxner
- Computer Algorithms for Medicine Laboratory, 8010 Graz, Austria
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- André Ferreira
- Center Algoritmi, LASI, University of Minho, 4710-057 Braga, Portugal
- Computer Algorithms for Medicine Laboratory, 8010 Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Jianning Li
- Computer Algorithms for Medicine Laboratory, 8010 Graz, Austria
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Hufelandstraße 55, 45147 Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
- Jan Egger
- Computer Algorithms for Medicine Laboratory, 8010 Graz, Austria
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen (AöR), Girardetstraße 2, 45131 Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Hufelandstraße 55, 45147 Essen, Germany
- Victor Alves
- Center Algoritmi, LASI, University of Minho, 4710-057 Braga, Portugal
25
Xu J, Zeng B, Egger J, Wang C, Smedby Ö, Jiang X, Chen X. A review on AI-based medical image computing in head and neck surgery. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac840f.
Abstract
Head and neck surgery is a delicate surgical procedure involving a complex anatomical space, difficult operative access and high risk. Medical image computing (MIC) that enables accurate and reliable preoperative planning is often needed to reduce the operational difficulty of surgery and to improve patient survival. At present, artificial intelligence, especially deep learning, has become an intense focus of research in MIC. In this study, the application of deep learning-based MIC in head and neck surgery is reviewed. Relevant literature was retrieved from the Web of Science database from January 2015 to May 2022, and papers were selected for review from mainstream journals and conferences, such as IEEE Transactions on Medical Imaging, Medical Image Analysis, Physics in Medicine and Biology, Medical Physics, and MICCAI. Among them, 65 references are on automatic segmentation, 15 on automatic landmark detection, and eight on automatic registration. First, an overview of deep learning in MIC is presented. Then, the application of deep learning methods is systematically summarized according to clinical needs and grouped into segmentation, landmark detection and registration of head and neck medical images. In segmentation, the focus is on the automatic segmentation of high-risk organs, head and neck tumors, skull structures and teeth, including an analysis of their advantages, differences and shortcomings. In landmark detection, the focus is on landmark detection in cephalometric and craniomaxillofacial images, with an analysis of the advantages and disadvantages of the methods. In registration, deep learning networks for multimodal image registration of the head and neck are presented. Finally, shortcomings and future development directions are systematically discussed. The study aims to serve as a reference and guide for researchers, engineers and doctors engaged in medical image analysis for head and neck surgery.
26
Jonske F, Dederichs M, Kim MS, Keyl J, Egger J, Umutlu L, Forsting M, Nensa F, Kleesiek J. Deep Learning-driven classification of external DICOM studies for PACS archiving. Eur Radiol 2022; 32:8769-8776. [PMID: 35788757] [DOI: 10.1007/s00330-022-08926-w]
Abstract
OBJECTIVES Over the course of their treatment, patients often switch hospitals, requiring staff at the new hospital to import external imaging studies into their local database. In this study, the authors present MOdality Mapping and Orchestration (MOMO), a Deep Learning-based approach to automate this mapping process by combining metadata analysis and a neural network ensemble. METHODS A set of 11,934 imaging series with existing anatomical labels was retrieved from the PACS database of the local hospital to train an ensemble of neural networks (DenseNet-161 and ResNet-152), which process radiological images and predict the type of study they belong to. We developed an algorithm that automatically extracts relevant metadata from imaging studies, regardless of their structure, and combines it with the neural network ensemble, forming a powerful classifier. A set of 843 anonymized external studies from 321 hospitals was hand-labeled to assess performance. We tested several variations of this algorithm. RESULTS MOMO achieves 92.71% accuracy and 2.63% minor errors (at 99.29% predictive power) on the external study classification task, outperforming both a commercial product (82.86% accuracy, 1.36% minor errors, 96.20% predictive power) and a pure neural network ensemble (72.69% accuracy, 10.3% minor errors, 99.05% predictive power) performing the same task. We find that the highest performance is achieved by an algorithm that combines all information into one vote-based classifier. CONCLUSION Deep Learning combined with metadata matching is a promising and flexible approach for the automated classification of external DICOM studies for PACS archiving. KEY POINTS
• The algorithm can successfully identify 76 medical study types across seven modalities (CT, X-ray angiography, radiographs, MRI, PET (+CT/MRI), ultrasound, and mammograms).
• The algorithm outperforms a commercial product performing the same task by a significant margin (>9% accuracy gain).
• The performance of the algorithm increases through the application of Deep Learning techniques.
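The vote-based combination described in the abstract can be sketched as follows (a hypothetical, simplified illustration; the weights, vote sources and label names are assumptions, not MOMO's published configuration):

```python
from collections import Counter

def vote_classify(metadata_votes, network_votes, weights=(2, 1)):
    """Combine study-type votes from metadata matching and an image-based
    neural network ensemble into a single prediction. Purely illustrative:
    the weighting scheme here is an assumption, not MOMO's actual one."""
    tally = Counter()
    for label in metadata_votes:   # votes derived from DICOM metadata fields
        tally[label] += weights[0]
    for label in network_votes:    # votes from the neural network ensemble
        tally[label] += weights[1]
    return tally.most_common(1)[0][0] if tally else None
```

Pooling every signal into one weighted tally means agreement between metadata and images dominates any single noisy vote, which matches the abstract's finding that the combined classifier performs best.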
Affiliation(s)
- Frederic Jonske
- Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Essen, Germany
- Maximilian Dederichs
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Moon-Sung Kim
- Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Julius Keyl
- Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Department of Tumor Research, University Hospital Essen, Essen, Germany
- Jan Egger
- Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Essen, Germany
- Lale Umutlu
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
- Michael Forsting
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
- Felix Nensa
- Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- Jens Kleesiek
- Institute of AI in Medicine (IKIM), University Hospital Essen, Girardetstraße 2, 45131 Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
- University Duisburg-Essen, Essen, Germany
27
Sulakhe H, Li J, Egger J, Goyal P. CranGAN: Adversarial Point Cloud Reconstruction for patient-specific Cranial Implant Design. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:603-608. [PMID: 36085744] [DOI: 10.1109/embc48229.2022.9871069]
Abstract
Automating cranial implant design has become an increasingly important avenue in biomedical research. Benefits in terms of financial resources, time and patient safety necessitate the formulation of an efficient and accurate procedure for this task. This paper attempts to provide a new research direction for this problem through an adversarial deep learning solution. Specifically, we present CranGAN - a 3D Conditional Generative Adversarial Network designed to reconstruct a 3D representation of a complete skull given its defective counterpart. A novel solution of employing point cloud representations instead of conventional 3D meshes and voxel grids is proposed. We provide both qualitative and quantitative analyses of our experiments with three separate GAN objectives, and compare the utility of two 3D reconstruction loss functions, viz. Hausdorff Distance and Chamfer Distance. We hope that our work inspires further research in this direction. Clinical relevance: This paper establishes a new research direction to assist automated implant design for cranioplasty.
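The two reconstruction losses compared above can be sketched for small point clouds as follows (a minimal numpy illustration of the common symmetric definitions; CranGAN's exact formulation may differ):

```python
import numpy as np

def _pairwise(a, b):
    # (|A|, |B|) matrix of Euclidean distances between point clouds a and b
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def chamfer_distance(a, b):
    # symmetric Chamfer distance: mean nearest-neighbour distance, both directions
    d = _pairwise(a, b)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def hausdorff_distance(a, b):
    # symmetric Hausdorff distance: worst-case nearest-neighbour distance
    d = _pairwise(a, b)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The practical difference is that Chamfer averages over all points (smooth gradients, forgiving of a few outliers), while Hausdorff is dominated by the single worst point, which matters when an implant edge must nowhere deviate far from the true skull surface.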
28
Egger J, Gsaxner C, Pepe A, Pomykala KL, Jonske F, Kurz M, Li J, Kleesiek J. Medical deep learning-A systematic meta-review. Comput Methods Programs Biomed 2022; 221:106874. [PMID: 35588660] [DOI: 10.1016/j.cmpb.2022.106874]
Abstract
Deep learning has remarkably impacted several different scientific disciplines over the last few years. For example, in image processing and analysis, deep learning algorithms were able to outperform other cutting-edge methods. Additionally, deep learning has delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even instances where deep learning outperformed humans, for example in object recognition and gaming. Deep learning is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data is not only collected in clinical centers, like hospitals and private practices, but also by mobile healthcare apps and online websites. The abundance of collected patient data and the recent growth of the deep learning field have resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed already returned over 11,000 results for the search term 'deep learning', and around 90% of these publications are from the last three years. However, even though PubMed represents the largest search engine in the medical field, it does not cover all medical-related publications. Hence, a complete overview of the field of 'medical deep learning' is almost impossible to obtain, and acquiring a full overview of medical sub-fields is becoming increasingly difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Frederic Jonske
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Manuel Kurz
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
29
Radl L, Jin Y, Pepe A, Li J, Gsaxner C, Zhao FH, Egger J. AVT: Multicenter aortic vessel tree CTA dataset collection with ground truth segmentation masks. Data Brief 2022; 40:107801. [PMID: 35059483] [PMCID: PMC8760499] [DOI: 10.1016/j.dib.2022.107801]
Abstract
In this article, we present a multicenter aortic vessel tree database collection containing 56 aortas and their branches. The datasets were acquired with computed tomography angiography (CTA) scans, and each scan covers the ascending aorta, the aortic arch and its branches into the head/neck area, the thoracic aorta, the abdominal aorta and the lower abdominal aorta with the iliac arteries branching into the legs. For each scan, the collection provides a semi-automatically generated segmentation mask of the aortic vessel tree (ground truth). The scans come from three different collections and various hospitals and have various resolutions, which enables studying the geometry/shape variabilities of human aortas and their branches from different geographic locations. Furthermore, the collection enables creating a robust statistical model of the shape of human aortic vessel trees, which can be used for various tasks such as the development of fully-automatic segmentation algorithms for new, unseen aortic vessel tree cases, e.g. by training deep learning-based approaches. Hence, the collection can serve as an evaluation set for automatic aortic vessel tree segmentation algorithms.
Affiliation(s)
- Lukas Radl
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Yuan Jin
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, Hangzhou, Zhejiang, 311121 China
- Antonio Pepe
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Jianning Li
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Institute for AI in Medicine (IKIM), University Hospital Essen (UKE), Ruhrgebiet, Essen, Germany
- Christina Gsaxner
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Fen-hua Zhao
- Department of Radiology, Affiliated Dongyang Hospital of Wenzhou Medical University, Dongyang, Zhejiang, 322100 China
- Jan Egger
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Institute for AI in Medicine (IKIM), University Hospital Essen (UKE), Ruhrgebiet, Essen, Germany
30
Egger J, Wild D, Weber M, Bedoya CAR, Karner F, Prutsch A, Schmied M, Dionysio C, Krobath D, Jin Y, Gsaxner C, Li J, Pepe A. Studierfenster: an Open Science Cloud-Based Medical Imaging Analysis Platform. J Digit Imaging 2022; 35:340-355. [PMID: 35064372] [PMCID: PMC8782222] [DOI: 10.1007/s10278-021-00574-8]
Abstract
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of all anatomical structure and pathology types. In this context, we introduce Studierfenster (www.studierfenster.at): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), the inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing them with two ground truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis, establishing a client-server-based architecture that is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, in applying the already existing functionalities or future implementations of further image processing applications. An example could be the processing of medical acquisitions like CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448–456, 68], which are becoming more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images/volumes need to be processed are also conceivable, such as in optical measuring techniques, astronomy, or archaeology.
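The Dice score listed among the platform's metrics can be computed for binary masks as follows (a minimal numpy sketch, not Studierfenster's actual implementation):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary segmentation masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    # by convention, two empty masks count as a perfect match
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0
```

A score of 1.0 means perfect overlap with the ground truth outline, 0.0 means no overlap at all, which is why it pairs naturally with the Hausdorff distance (overlap vs. worst-case boundary error).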
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Daniel Wild
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Maximilian Weber
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christopher A Ramirez Bedoya
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Florian Karner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Alexander Prutsch
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Michael Schmied
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Dionysio
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Dominik Krobath
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, 311121, Hangzhou, Zhejiang, China
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
31
Li J, Krall M, Trummer F, Memon AR, Pepe A, Gsaxner C, Jin Y, Chen X, Deutschmann H, Zefferer U, Schäfer U, Campe GV, Egger J. MUG500+: Database of 500 high-resolution healthy human skulls and 29 craniotomy skulls and implants. Data Brief 2021; 39:107524. [PMID: 34815988] [PMCID: PMC8591340] [DOI: 10.1016/j.dib.2021.107524]
Abstract
In this article, we present a skull database containing 500 healthy skulls segmented from high-resolution head computed tomography (CT) scans and 29 defective skulls segmented from craniotomy head CTs. Each healthy skull contains the complete anatomical structures of a human skull, including the cranial bones, facial bones and other subtle structures. For each craniotomy skull, a part of the cranial bone is missing, leaving a defect on the skull. The defects have various sizes, shapes and positions, depending on the specific pathological condition of each patient. Along with each craniotomy skull, a cranial implant that was designed manually by an expert and fits the defect is provided. Considering the large volume of the healthy skull collection, the dataset can be used to study the geometry/shape variabilities of human skulls and to create a robust statistical model of the shape of human skulls, which can be used for various tasks such as cranial implant design. The craniotomy collection can serve as an evaluation set for automatic cranial implant design algorithms.
Affiliation(s)
- Jianning Li
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Corresponding authors.
- Marcell Krall
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Florian Trummer
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Afaque Rafique Memon
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Antonio Pepe
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Christina Gsaxner
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Yuan Jin
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, Hangzhou, Zhejiang, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ulrike Zefferer
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Ute Schäfer
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Gord von Campe
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Jan Egger
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Corresponding authors.
32
Egger J, Pepe A, Gsaxner C, Jin Y, Li J, Kern R. Deep learning-a first meta-survey of selected reviews across scientific disciplines, their commonalities, challenges and research impact. PeerJ Comput Sci 2021; 7:e773. [PMID: 34901429] [PMCID: PMC8627237] [DOI: 10.7717/peerj-cs.773]
Abstract
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of the human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field has seen exponential growth during the last years, resulting in well over 10,000 publications per year. For example, the search engine PubMed alone, which covers only a subset of all publications in the medical field, already provided over 11,000 results in Q3 2020 for the search term 'deep learning', and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of even a subfield. However, there are several review articles about deep learning, which are focused on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and to outline the research impact that they have already had during a short period of time. The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, Zhejiang, China
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University of Graz, Graz, Austria
- Roman Kern
- Knowledge Discovery, Know-Center, Graz, Austria
- Institute of Interactive Systems and Data Science, Graz University of Technology, Graz, Austria
33
Troppmair T, Egger J, Krösbacher A, Zanvettor A, Schinnerl A, Neumayr A, Baubin M. [Evaluation of cancelled emergency physician missions and patient handovers in the area of Innsbruck: Retrospective assessment of physician-staffed emergency medical service cancellations and handovers from the emergency physician to the emergency medical service in 2017 and 2018]. Anaesthesist 2021; 71:272-280. [PMID: 34643756] [PMCID: PMC8986753] [DOI: 10.1007/s00101-021-01046-y]
Abstract
BACKGROUND Efficient management of personnel and vehicle resources is a hallmark of a good emergency medical system (EMS). Frequently, an emergency medical technician (EMT) is the first responder at the scene and can establish that an emergency physician (EP) is not needed; it can be equally sensible to hand over a stable patient to the EMT for transport to the hospital. The Austrian EMS is staffed by EMTs; in cases of potentially life-threatening emergencies, the dispatch center sends an additional team with an on-board EP. During 2017-2018, nearly every fifth EP mission in Innsbruck (including the surrounding areas) ended in a cancellation; the rate of patient handovers from the EP to the EMT was slightly higher, affecting roughly every fourth patient. Given these high numbers, this study re-evaluated the procedures to determine whether the cancellations and handovers were justified or the result of overly hasty decision making. All cases from the Innsbruck and Telfs EP bases between 1 January 2017 and 13 December 2018 were considered. METHODS Of a total of 96,908 emergency dispatches, there were 2470 cancellation/handover events, comprising 1190 cancellations and 1280 patient handovers from the EP to the EMT. Patients who were transferred to the University Hospital Innsbruck were included in these figures. The protocols of the emergency dispatches were extracted from the so-called CarPC and subsequently grouped into cancellation and handover categories. The clinical diagnoses of the patients with inpatient treatment were evaluated from the hospital information system (KIS) of the University Hospital Innsbruck, with the help of the emergency physician indications catalogue of the German Medical Council.
The diagnosis was documented in the hospital information system. The emergency protocols of the EMTs were also evaluated retrospectively. Patients of the Innsbruck EP base are hospitalized in the Innsbruck hospital because of its geographical position. Patients of the EPs based in Telfs are transferred to a local hospital when no specific intervention is needed; when a specific intervention is necessary, care must be provided by the University Hospital Innsbruck. In accordance with the ethics vote of the Medical University of Innsbruck, only the data of patients transferred to the Innsbruck clinic could be evaluated. The EPs based in Innsbruck were exclusively anesthesiologists of the University Hospital Innsbruck, whereas the physicians of the Telfs EP base came from mixed medical specialties; all of them, however, hold an emergency medical physician diploma in addition to the ius practicandi. Lastly, no EPs in Innsbruck or Telfs have any special additional obligations during their duty. RESULTS In 210 cases (8.5%), an EP indication according to the emergency physician indications catalogue of the German Medical Council was present; accordingly, 8.7% of all cancellations and 8.4% of all patient handovers were not justified. Patients with emergency indications had a longer hospitalization. The EP base EMS Innsbruck had more cancellations than the EP base EMS Telfs, and more cancellations than patient handovers; conversely, EMS Telfs had more patient handovers than cancellations. On weekends between 6:00 pm and 6:00 am there were fewer cancellations and handovers at both EP bases.
The documentation in the EMT protocols was incomplete in 284 cancellations (23.9% of the cancellations) and 339 handovers (26.5% of the handovers). After cancellations, 35 patients (2.9%) needed intensive care treatment, as did 35 patients (2.7%) after handovers; among these, 20 patients after cancellations (1.7% of all cancellations) and 24 patients after handovers (1.9% of all handovers) had a critical diagnosis. In 40 cases of patient handovers, the EP was alerted to another emergency follow-up within 10 min. CONCLUSION In Austria, the introduction of a standardized emergency indication checklist might help dispatch centers and all EMS team members to dispatch more accurately. Furthermore, better traceability of EP cancellations and of patient handovers from the EP to the EMT could be achieved. Documentation by all EMT staff should be more precise, not only for legal reasons but also to improve overall quality management. Intensive education and training, as well as diagnostic feedback, could help to reduce the number of risky cancellations and patient handovers.
Affiliation(s)
- Teresa Troppmair
- Universitätsklinik für Anästhesie und Intensivmedizin, Anichstraße 35, 6020 Innsbruck, Austria
34
Quintanal-Villalonga A, Taniguchi H, Zhan Y, Hasan M, Chavan S, Uddin F, Allaj V, Manoj P, Shah N, Chan J, Chow A, Offin M, Bhanot U, Egger J, Qiu J, De Stanchina E, Chang J, Rekhtman N, Houck-Loomis B, Koche R, Yu H, Sen T, Rudin C. MA11.06 Multi-Omic Characterization of Lung Tumors Implicates AKT and MYC Signaling in Adenocarcinoma to Squamous Cell Transdifferentiation. J Thorac Oncol 2021. [DOI: 10.1016/j.jtho.2021.08.167]
35
Quintanal-Villalonga A, Taniguchi H, Hao Y, Chow A, Zhan Y, Chavan S, Uddin F, Allaj V, Manoj P, Shah N, Chan J, Offin M, Egger J, Bhanot U, Qiu J, De Stanchina E, Sen T, Poirier J, Rudin C. MA16.03 CRISPR Screen Reveals XPO1 as a Therapeutic Target Strongly Sensitizing to First and Second Line Therapy in Small Cell Lung Cancer. J Thorac Oncol 2021. [DOI: 10.1016/j.jtho.2021.08.196]
36
Quintanal-Villalonga Á, Taniguchi H, Hao Y, Chow A, Zhan Y, Uddin F, Allaj V, Manoj P, Shah N, Chan J, Offin M, Ciampricotti M, Egger J, Qiu J, De Stanchina E, Hollmann T, Koche R, Sen T, Poirier J, Rudin C. 2MO XPO1 inhibition strongly sensitizes to first-line and second-line therapy in small cell lung cancer. Ann Oncol 2021. [DOI: 10.1016/j.annonc.2021.08.1998]
37
Quintanal-Villalonga Á, Taniguchi H, Zhan Y, Hasan M, Chavan S, Uddin F, Allaj V, Manoj P, Shah N, Ciampricotti M, Bhanot U, Egger J, Qiu J, De Stanchina E, Rekhtman N, Houck-Loomis B, Koche R, Yu H, Sen T, Rudin C. 1MO Multi-omic characterization of lung tumors identify AKT and EZH2 as potential therapeutic targets in adenocarcinoma-to-squamous transdifferentiation. Ann Oncol 2021. [DOI: 10.1016/j.annonc.2021.08.1997]
38
Chan J, Quintanal-Villalonga A, Gao V, Xie Y, Allaj V, Chaudhary O, Masilionis I, Egger J, Chow A, Walle T, Ciampricotti M, Offin M, Lai V, Bott M, Jones D, Hollmann T, Nawy T, Mazutis L, Sen T, Pe'Er D, Rudin C. OA07.01 Signatures of Plasticity and Immunosuppression in a Single-Cell Atlas of Human Small Cell Lung Cancer. J Thorac Oncol 2021. [DOI: 10.1016/j.jtho.2021.08.054]
39
Quintanal-Villalonga Á, Taniguchi H, Zhan Y, Hasan M, Chavan S, Uddin F, Allaj V, Manoj P, Shah N, Chan J, Ciampricotti M, Chow A, Bhanot U, Egger J, Qiu J, De Stanchina E, Rekhtman N, Yu H, Sen T, Rudin C. 1800O Multi-omic characterization of lung tumors implicates AKT and MYC signaling in adenocarcinoma to squamous cell transdifferentiation. Ann Oncol 2021. [DOI: 10.1016/j.annonc.2021.08.254]
40
Li J, Pimentel P, Szengel A, Ehlke M, Lamecker H, Zachow S, Estacio L, Doenitz C, Ramm H, Shi H, Chen X, Matzkin F, Newcombe V, Ferrante E, Jin Y, Ellis DG, Aizenberg MR, Kodym O, Spanel M, Herout A, Mainprize JG, Fishman Z, Hardisty MR, Bayat A, Shit S, Wang B, Liu Z, Eder M, Pepe A, Gsaxner C, Alves V, Zefferer U, von Campe G, Pistracher K, Schafer U, Schmalstieg D, Menze BH, Glocker B, Egger J. AutoImplant 2020-First MICCAI Challenge on Automatic Cranial Implant Design. IEEE Trans Med Imaging 2021; 40:2329-2342. [PMID: 33939608 DOI: 10.1109/tmi.2021.3077047]
Abstract
The aim of this paper is to provide a comprehensive overview of the MICCAI 2020 AutoImplant Challenge. The approaches and publications submitted and accepted within the challenge are summarized and reported, highlighting common algorithmic trends and algorithmic diversity. Furthermore, the evaluation results are presented, compared and discussed with regard to the challenge aim: seeking low-cost, fast and fully automated solutions for cranial implant design. Based on feedback from collaborating neurosurgeons, this paper concludes by stating open issues and post-challenge requirements for intra-operative use. The codes can be found at https://github.com/Jianningli/tmi.
41
Memon AR, Li J, Egger J, Chen X. A review on patient-specific facial and cranial implant design using Artificial Intelligence (AI) techniques. Expert Rev Med Devices 2021; 18:985-994. [PMID: 34404280 DOI: 10.1080/17434440.2021.1969914]
Abstract
INTRODUCTION Artificial Intelligence (AI) techniques have gained importance in the healthcare industry, including recent advances in patient-specific implant (PSI) design. CAD/CAM technology plays an important role in the design and development of AI-based implants. Because the everyday professional use of AI-based implant design and manufacturing can decrease costs, improve patients' health and increase efficiency, researchers and engineers across the globe, as well as many implant designers and manufacturers, have focused their interest on this field. AREAS COVERED A broader aim of AI is to build smart systems that can interact with the world as people do, understand their language, and learn to improve from real-life examples. Machine learning can be guided using large data sets and algorithms that improve its ability to learn to perform a task. In this review, artificial intelligence, deep-learning and machine-learning techniques for the design of biomedical implants are studied. EXPERT OPINION The main purpose of this article is to highlight important AI techniques for designing PSIs: automatic techniques that help designers create patient-specific implants using AI algorithms such as deep learning and machine learning, as well as other automatic methods.
Affiliation(s)
- Afaque Rafique Memon
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Institute of Bio-medical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jianning Li
- Faculty of Computer Science and Biomedical Engineering, Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; The Laboratory of Computer Algorithm for Medicine, Medical University of Graz, Graz, Austria; Department of Neurosurgery, Medical University of Graz, Graz, Austria
- Jan Egger
- Faculty of Computer Science and Biomedical Engineering, Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; The Laboratory of Computer Algorithm for Medicine, Medical University of Graz, Graz, Austria; Department of Neurosurgery, Medical University of Graz, Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Xiaojun Chen
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Institute of Bio-medical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai, China
42
Wallner J, Schwaiger M, Edmondson SJ, Mischak I, Egger J, Feichtinger M, Zemann W, Pau M. Effects of Pre-Operative Risk Factors on Intensive Care Unit Length of Stay (ICU-LOS) in Major Oral and Maxillofacial Cancer Surgery. Cancers (Basel) 2021; 13:3937. [PMID: 34439092 PMCID: PMC8394988 DOI: 10.3390/cancers13163937]
Abstract
OBJECTIVE This study aimed to investigate the effect of certain pre-operative parameters directly on the post-operative intensive care unit (ICU) length of stay (LOS), in order to identify at-risk patients who can be expected to need prolonged intensive care management post-operatively. MATERIAL AND METHODS Patients managed in an ICU after undergoing major oral and maxillofacial surgery were analyzed retrospectively. Inclusion criteria were: age 18-90 years, major primary oral cancer surgery including tumor resection, neck dissection and microvascular free flap reconstruction, and a minimum operation time of 8 h. Exclusion criteria were: benign/borderline tumors, primary radiation, defect reconstruction other than microvascular, and treatment at other centers. Separate parameters used within the clinical routine were correlated with ICU-LOS, applying single testing calculations (t-tests, variance analysis, correlation coefficients, effect sizes) and a valid univariate linear regression model. The primary outcome of interest was ICU-LOS. RESULTS This study included a homogeneous cohort of 122 patients. Mean surgery time was 11.4 (±2.2) h, and mean ICU-LOS was 3.6 (±2.6) days. Patients with pre-operative renal dysfunction (p < 0.001), peripheral vascular disease (PVD, p = 0.01), increasing heart failure (NYHA) stage categories (p = 0.009) and higher-grade categories of post-operative complications (p = 0.023) were identified as at risk of a significantly prolonged post-operative ICU-LOS. CONCLUSIONS At-risk patients are prone to need a significantly longer ICU-LOS than others; these are patients with pre-operative severe renal dysfunction, PVD and/or high NYHA stage categories. Confounding parameters that contribute to a prolonged ICU-LOS in combination with other variables were higher age, prolonged operative time, chronic obstructive pulmonary disease, and intra-operatively transfused blood.
Affiliation(s)
- Juergen Wallner
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, 8036 Graz, Austria; (J.W.); (J.E.); (M.F.); (W.Z.); (M.P.)
- Department of Cranio-Maxillofacial Surgery, AZ Monica and the University Hospital Antwerp, 2018 Antwerp, Belgium
- Michael Schwaiger
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, 8036 Graz, Austria; (J.W.); (J.E.); (M.F.); (W.Z.); (M.P.)
- Correspondence: ; Tel.: +43-(0)316-385-80722
- Sarah-Jayne Edmondson
- Department of Plastic and Reconstructive Surgery, Guy’s and St. Thomas’ Hospital, London SE1 7EH, UK
- Irene Mischak
- University Clinic of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
- Jan Egger
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, 8036 Graz, Austria; (J.W.); (J.E.); (M.F.); (W.Z.); (M.P.)
- Institute for Computer Graphics and Vision, Graz University of Technology, 8036 Graz, Austria
- Matthias Feichtinger
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, 8036 Graz, Austria; (J.W.); (J.E.); (M.F.); (W.Z.); (M.P.)
- Wolfgang Zemann
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, 8036 Graz, Austria; (J.W.); (J.E.); (M.F.); (W.Z.); (M.P.)
- Mauro Pau
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, 8036 Graz, Austria; (J.W.); (J.E.); (M.F.); (W.Z.); (M.P.)
43
Li J, von Campe G, Pepe A, Gsaxner C, Wang E, Chen X, Zefferer U, Tödtling M, Krall M, Deutschmann H, Schäfer U, Schmalstieg D, Egger J. Automatic skull defect restoration and cranial implant generation for cranioplasty. Med Image Anal 2021; 73:102171. [PMID: 34340106 DOI: 10.1016/j.media.2021.102171]
Abstract
A fast and fully automatic design of 3D-printed patient-specific cranial implants is highly desired in cranioplasty, the process of restoring a defect in the skull. We formulate skull defect restoration as a 3D volumetric shape completion task, where a partial skull volume is completed automatically. The difference between the completed skull and the partial skull is the restored defect; in other words, the implant that can be used in cranioplasty. To fulfill the task of volumetric shape completion, a fully data-driven approach is proposed. Supervised skull shape learning is performed on a database containing 167 high-resolution healthy skulls, into which synthetic defects are injected to create training and evaluation data pairs. We propose a patch-based training scheme tailored to high-resolution and spatially sparse data, which overcomes the disadvantages of conventional patch-based training methods in high-resolution volumetric shape completion tasks. Conventional patch-based training is typically applied to high-resolution images and proves effective in tasks such as segmentation. However, we demonstrate its limitations for shape completion tasks, where the overall shape distribution of the target has to be learnt, since this distribution cannot be captured efficiently by a sub-volume cropped from the target. Additionally, the standard dense implementation of a convolutional neural network tends to perform poorly on sparse data, such as the skull, which has a low voxel occupancy rate. Our proposed training scheme encourages a convolutional neural network to learn from high-resolution and spatially sparse data. We show that our deep learning models, trained on healthy skulls with synthetic defects, can be transferred directly to craniotomy skulls with real defects of greater irregularity, and the results show promise for clinical use. Project page: https://github.com/Jianningli/MIA.
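The implant-as-difference formulation described in the abstract above (implant = completed skull minus partial skull) can be sketched with a few lines of voxel arithmetic. The array shapes, the toy "skull" block, and the injected defect below are illustrative assumptions, not taken from the paper's data or pipeline:

```python
import numpy as np

def extract_implant(completed_skull: np.ndarray, defective_skull: np.ndarray) -> np.ndarray:
    """Voxel-wise difference: material present in the completed skull but
    absent from the defective input, i.e. the restored defect / implant."""
    return completed_skull.astype(bool) & ~defective_skull.astype(bool)

# Toy binary volumes standing in for voxelized skull segmentations.
complete = np.zeros((32, 32, 32), dtype=bool)
complete[8:24, 8:24, 8:24] = True         # solid "skull" region
defective = complete.copy()
defective[12:20, 12:20, 20:24] = False    # inject a synthetic defect

implant = extract_implant(complete, defective)
print(implant.sum())  # implant voxel count equals the size of the injected defect
```

The same subtraction is what turns a healthy skull plus a synthetic defect into a training pair: the defective volume is the network input and the difference is the target implant.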
Affiliation(s)
- Jianning Li
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria; Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria.
- Gord von Campe
- Department of Neurosurgery, Medical University of Graz, Auenbruggerplatz 29, Graz, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
- Enpeng Wang
- School of Mechanical Engineering, Shanghai Jiao Tong University, Minhang District, Shanghai 200240, China
- Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Minhang District, Shanghai 200240, China
- Ulrike Zefferer
- Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria
- Martin Tödtling
- Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria
- Marcell Krall
- Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria
- Hannes Deutschmann
- Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, Graz 8036, Austria
- Ute Schäfer
- Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 2, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
44
Pepe A, Trotta GF, Mohr-Ziak P, Gsaxner C, Wallner J, Bevilacqua V, Egger J. A Marker-Less Registration Approach for Mixed Reality-Aided Maxillofacial Surgery: a Pilot Evaluation. J Digit Imaging 2021; 32:1008-1018. [PMID: 31485953 DOI: 10.1007/s10278-019-00272-6]
Abstract
As is common routine in tumor resections, surgeons rely on local examination of the removed tissues and on the swiftly made microscopy findings of the pathologist, which are based on intraoperatively taken tissue probes. This approach may imply an extended duration of the operation, increased effort for the medical staff, and longer occupancy of the operating room (OR). Mixed reality technologies, and particularly augmented reality, have already been applied in surgical scenarios with positive initial outcomes; nonetheless, these methods have used manual or marker-based registration. In this work, we design an application for marker-less registration of a patient's PET-CT information. The algorithm combines facial landmarks extracted from an RGB video stream with the so-called Spatial Mapping API provided by the HMD Microsoft HoloLens. The accuracy of the system is compared with a marker-based approach, and the opinions of field specialists were collected during a demonstration; a survey based on the standard ISO 9241-110 was designed for this purpose. The measurements show an average positioning error along the three axes of (x, y, z) = (3.3 ± 2.3, -4.5 ± 2.9, -9.3 ± 6.1) mm. Compared with the marker-based approach, this represents an increase in positioning error of approx. 3 mm along two dimensions (x, y), which might be due to the absence of explicit markers. The application was positively evaluated by the specialists; they showed interest in continued work and contributed to the development process with constructive criticism.
Affiliation(s)
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Gianpaolo Francesco Trotta
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Via Orabona, 4, Bari, Italy
- Peter Mohr-Ziak
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; VRVis-Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, 1220 Vienna, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jürgen Wallner
- Computer Algorithms for Medicine Laboratory, Graz, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria
- Vitoantonio Bevilacqua
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via Orabona, 4, Bari, Italy
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria
45
Ricciuti B, Arbour K, Lin J, Vajdi A, Tolstorukov M, Hong L, Zhang J, Vokes N, Li Y, Spurr L, Cherniack A, Recondo G, Lamberti G, Rizvi H, Egger J, Plodkowski A, Khosrowjerdi S, Digumarthy S, Vaz N, Park H, Nishino M, Sholl L, Barbie D, Altan M, Heymach J, Skoulidis F, Gainor J, Hellmann M, Awad M. P14.26 Diminished Efficacy of PD-(L)1 Inhibition in STK11- and KEAP1-Mutant Lung Adenocarcinoma is Impacted by KRAS Mutation Status. J Thorac Oncol 2021. [DOI: 10.1016/j.jtho.2021.01.532]
46
Gsaxner C, Pepe A, Li J, Ibrahimpasic U, Wallner J, Schmalstieg D, Egger J. Augmented Reality for Head and Neck Carcinoma Imaging: Description and Feasibility of an Instant Calibration, Markerless Approach. Comput Methods Programs Biomed 2021; 200:105854. [PMID: 33261944 DOI: 10.1016/j.cmpb.2020.105854]
Abstract
BACKGROUND AND OBJECTIVE Augmented reality (AR) can help to overcome current limitations in computer-assisted head and neck surgery by granting "X-ray vision" to physicians. Still, the acceptance of AR in clinical applications is limited by technical and clinical challenges. We aim to demonstrate the benefit of a marker-free, instant-calibration AR system for head and neck cancer imaging, which we hypothesize to be acceptable and practical for clinical use. METHODS We implemented a novel AR system for visualization of medical image data registered with the head or face of the patient prior to intervention. Our system allows the localization of head and neck carcinoma in relation to the outer anatomy. It does not require markers or stationary infrastructure, provides instant calibration and allows 2D and 3D multi-modal visualization for head and neck surgery planning via an AR head-mounted display. We evaluated our system in a pre-clinical user study with eleven medical experts. RESULTS Medical experts rated our application with a system usability scale score of 74.8 ± 15.9, which signifies above-average, good usability and clinical acceptance. An average of 12.7 ± 6.6 minutes of training time was needed by physicians before they were able to navigate the application without assistance. CONCLUSIONS Our AR system is characterized by a slim and easy setup, short training time and high usability and acceptance. It therefore presents a promising novel tool for visualizing head and neck cancer imaging and for pre-surgical localization of target structures.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria.
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Una Ibrahimpasic
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jürgen Wallner
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria; Department of Cranio-Maxillofacial Surgery, AZ Monica Hospital Antwerp and Antwerp University Hospital, Antwerp, Belgium
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5, 8036 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, 8010 Graz, Austria
47
Kodym O, Li J, Pepe A, Gsaxner C, Chilamkurthy S, Egger J, Španěl M. SkullBreak / SkullFix - Dataset for automatic cranial implant design and a benchmark for volumetric shape learning tasks. Data Brief 2021; 35:106902. [PMID: 33997188 PMCID: PMC8100897 DOI: 10.1016/j.dib.2021.106902]
Abstract
This article introduces two complementary datasets intended for the development of data-driven solutions for cranial implant design, which remains a time-consuming and laborious task in the current clinical routine of cranioplasty. The two datasets, referred to as SkullBreak and SkullFix, are both adapted from the public head CT collection CQ500 (http://headctstudy.qure.ai/dataset), released under a CC BY-NC-SA 4.0 license. SkullBreak contains 114 complete skulls for training and 20 for evaluation, each accompanied by five defective skulls and the corresponding cranial implants. SkullFix contains 100 triplets (complete skull, defective skull and implant) for training and 110 triplets for evaluation. The SkullFix dataset was first used in the MICCAI 2020 AutoImplant Challenge (https://autoimplant.grand-challenge.org/), and its ground truth, i.e., the complete skulls and implants of the evaluation set, is held private by the organizers. The two datasets do not overlap and differ in data selection and synthetic defect creation, so each serves as a complement to the other. Besides cranial implant design, the datasets can be used for the evaluation of volumetric shape learning algorithms, such as volumetric shape completion. This article describes the two datasets in detail.
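As an illustration of the kind of volumetric evaluation such benchmark datasets enable, a predicted implant can be scored against a ground-truth implant with the Dice similarity coefficient, a standard overlap metric for binary volumes. This is a generic sketch with toy data; the entry above does not specify the challenge's exact metrics:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary volumes."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both volumes empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy volumes: a ground-truth implant and a prediction missing one slab.
gt = np.zeros((16, 16, 16), dtype=bool)
gt[4:12, 4:12, 4:12] = True
pred = gt.copy()
pred[:, :, 11] = False  # prediction misses the last slab of the implant

print(round(dice(pred, gt), 3))  # → 0.933
```

A boundary-sensitive metric such as the Hausdorff distance is often reported alongside Dice for implant shapes, since overlap alone can hide errors at the implant rim.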
Affiliation(s)
- Oldřich Kodym
- Brno University of Technology (BUT), Brno, Czech Republic
- Jianning Li
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Antonio Pepe
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Christina Gsaxner
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Jan Egger
- Graz University of Technology (TU Graz), Graz, Styria, Austria
- Computer Algorithms for Medicine Laboratory (Café Lab), Graz, Styria, Austria
- Medical University of Graz (MedUni Graz), Graz, Styria, Austria
- Corresponding author
- Michal Španěl
- Brno University of Technology (BUT), Brno, Czech Republic
48
Li J, Gsaxner C, Pepe A, Morais A, Alves V, von Campe G, Wallner J, Egger J. Synthetic skull bone defects for automatic patient-specific craniofacial implant design. Sci Data 2021; 8:36. [PMID: 33514740] [PMCID: PMC7846796] [DOI: 10.1038/s41597-021-00806-0] [Received: 06/26/2020] [Accepted: 12/03/2020] [Indexed: 11/09/2022] [Open Access]
Abstract
Patient-specific craniofacial implants are used to repair skull bone defects after trauma or surgery. Currently, cranial implants are designed and produced by third-party suppliers, which is usually time-consuming and expensive. Recent advances in additive manufacturing have made the in-hospital or in-operating-room fabrication of personalized implants feasible. However, the implants are still manufactured by external companies. To facilitate an optimized workflow, fast and automatic implant design and manufacturing is highly desirable. Data-driven approaches, such as deep learning, currently show great potential for automatic implant design. However, a considerable amount of data is needed to train such algorithms, which is often a bottleneck, especially in the medical domain. Therefore, we present CT imaging data of the craniofacial complex from 24 patients, into which we injected various artificial cranial defects, resulting in 240 data pairs and 240 corresponding implants. With these data, automatic implant design and manufacturing processes can be trained. Additionally, the data of this work provide a solid base for researchers working on automatic cranial implant design.
Measurement(s): Image Acquisition Matrix Size • Image Slice Thickness • craniofacial region
Technology Type(s): imaging technique • computed tomography
Sample Characteristic - Organism: Homo sapiens
Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.13265225
Affiliation(s)
- Jianning Li
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036 Graz, Austria
- Antonio Pepe
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Ana Morais
- Department of Informatics, School of Engineering, University of Minho, Braga, Portugal; Algoritmi Centre, University of Minho, Braga, Portugal
- Victor Alves
- Algoritmi Centre, University of Minho, Braga, Portugal
- Gord von Campe
- Department of Neurosurgery, Medical University of Graz, Auenbruggerplatz 29, 8036 Graz, Austria
- Jürgen Wallner
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036 Graz, Austria
- Jan Egger
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036 Graz, Austria
49
Lungu AJ, Swinkels W, Claesen L, Tu P, Egger J, Chen X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2020; 18:47-62. [PMID: 33283563] [DOI: 10.1080/17434440.2021.1860750] [Indexed: 12/12/2022]
Abstract
Background: Research proves that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) in surgical simulators increases the fidelity, level of immersion and overall experience of these simulators.
Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR and MR for distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed.
Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators are the tracking system and the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Affiliation(s)
- Abel J Lungu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Wout Swinkels
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Luc Claesen
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria; The Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
50
Pepe A, Li J, Rolf-Pissarczyk M, Gsaxner C, Chen X, Holzapfel GA, Egger J. Detection, segmentation, simulation and visualization of aortic dissections: A review. Med Image Anal 2020; 65:101773. [DOI: 10.1016/j.media.2020.101773] [Received: 11/12/2019] [Revised: 06/01/2020] [Accepted: 07/06/2020] [Indexed: 12/16/2022]