1
Yap MH, Cassidy B, Byra M, Liao TY, Yi H, Galdran A, Chen YH, Brüngel R, Koitka S, Friedrich CM, Lo YW, Yang CH, Li K, Lao Q, Ballester MAG, Carneiro G, Ju YJ, Huang JD, Pappachan JM, Reeves ND, Chandrabalan V, Dancey D, Kendrick C. Diabetic foot ulcers segmentation challenge report: Benchmark and analysis. Med Image Anal 2024; 94:103153. [PMID: 38569380] [DOI: 10.1016/j.media.2024.103153]
Abstract
Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists quantitatively measure the size of wound regions and assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineations, which are time-consuming and laborious to produce. Methods based on deep learning have recently shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge, held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding ground truth segmentation masks. Of the 72 approved requests from 47 countries, 26 teams used this data to develop fully automated systems to predict segmentation masks on a test set of 2000 images, whose ground truth masks were kept private. Predictions from participating teams were scored and ranked by the average Dice similarity coefficient between the ground truth and predicted masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. This challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
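As context for the ranking metric above, here is a minimal sketch of the Dice similarity coefficient for a pair of binary masks. This is illustrative NumPy code, not the challenge's official scoring script; the function name and the empty-mask convention are assumptions.

```python
import numpy as np

def dice_coefficient(gt: np.ndarray, pred: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); by a common convention,
    two empty masks score 1.0 (perfect agreement).
    """
    gt = gt.astype(bool)
    pred = pred.astype(bool)
    total = gt.sum() + pred.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    intersection = np.logical_and(gt, pred).sum()
    return 2.0 * intersection / total
```

A challenge-style score would average this per-image value over the whole test set.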
Affiliation(s)
- Moi Hoon Yap
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom; Lancashire Teaching Hospitals NHS Trust, Preston, PR2 9HT, United Kingdom.
- Bill Cassidy
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
- Michal Byra
- Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland; RIKEN Center for Brain Science, Wako, Japan
- Ting-Yu Liao
- Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Huahui Yi
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Adrian Galdran
- BCN Medtech, Universitat Pompeu Fabra, Barcelona, Spain; AIML, University of Adelaide, Australia
- Yung-Han Chen
- Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Raphael Brüngel
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany; Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Zweigertstr. 37, 45130 Essen, Germany; Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstr. 2, 45131 Essen, Germany
- Sven Koitka
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Girardetstr. 2, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstr. 55, 45147 Essen, Germany
- Christoph M Friedrich
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FH Dortmund), Emil-Figge-Str. 42, 44227 Dortmund, Germany; Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Zweigertstr. 37, 45130 Essen, Germany
- Yu-Wen Lo
- Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Ching-Hui Yang
- Department of Computer Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan
- Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Qicheng Lao
- School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Yi-Jen Ju
- Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Juinn-Dar Huang
- Institute of Electronics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu 300, Taiwan
- Joseph M Pappachan
- Lancashire Teaching Hospitals NHS Trust, Preston, PR2 9HT, United Kingdom; Department of Life Sciences, Manchester Metropolitan University, Manchester, M1 5GD, United Kingdom
- Neil D Reeves
- Department of Life Sciences, Manchester Metropolitan University, Manchester, M1 5GD, United Kingdom
- Darren Dancey
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
- Connah Kendrick
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, United Kingdom
2
Elfer K, Gardecki E, Garcia V, Ly A, Hytopoulos E, Wen S, Hanna MG, Peeters DJE, Saltz J, Ehinger A, Dudgeon SN, Li X, Blenman KRM, Chen W, Green U, Birmingham R, Pan T, Lennerz JK, Salgado R, Gallas BD. Reproducible Reporting of the Collection and Evaluation of Annotations for Artificial Intelligence Models. Mod Pathol 2024; 37:100439. [PMID: 38286221] [DOI: 10.1016/j.modpat.2024.100439]
Abstract
This work puts forth and demonstrates the utility of a reporting framework for collecting and evaluating annotations of medical images used for training and testing artificial intelligence (AI) models in assisting detection and diagnosis. AI has unique reporting requirements, as shown by the AI extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklists and the proposed AI extensions to the Standards for Reporting Diagnostic Accuracy (STARD) and Transparent Reporting of a Multivariable Prediction model for Individual Prognosis or Diagnosis (TRIPOD) checklists. AI for detection and/or diagnostic image analysis requires complete, reproducible, and transparent reporting of the annotations and metadata used in training and testing data sets. In an earlier work by other researchers, an annotation workflow and quality checklist for computational pathology annotations were proposed. In this manuscript, we operationalize this workflow into an evaluable quality checklist that applies to any reader-interpreted medical images, and we demonstrate its use for an annotation effort in digital pathology. We refer to this quality framework as the Collection and Evaluation of Annotations for Reproducible Reporting of Artificial Intelligence (CLEARR-AI).
Affiliation(s)
- Katherine Elfer
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland; National Institutes of Health, National Cancer Institute, Division of Cancer Prevention, Cancer Prevention Fellowship Program, Bethesda, Maryland.
- Emma Gardecki
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
- Victor Garcia
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
- Amy Ly
- Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts
- Si Wen
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
- Matthew G Hanna
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, New York
- Dieter J E Peeters
- Department of Pathology, University Hospital Antwerp/University of Antwerp, Antwerp, Belgium; Department of Pathology, Sint-Maarten Hospital, Mechelen, Belgium
- Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Anna Ehinger
- Department of Clinical Genetics, Pathology and Molecular Diagnostics, Laboratory Medicine, Lund University, Lund, Sweden
- Sarah N Dudgeon
- Department of Laboratory Medicine, Yale School of Medicine, New Haven, Connecticut
- Xiaoxian Li
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, Georgia
- Kim R M Blenman
- Department of Internal Medicine, Section of Medical Oncology, Yale School of Medicine and Yale Cancer Center, Yale University, New Haven, Connecticut; Department of Computer Science, School of Engineering and Applied Science, Yale University, New Haven, Connecticut
- Weijie Chen
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
- Ursula Green
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia
- Ryan Birmingham
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland; Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia
- Tony Pan
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia
- Jochen K Lennerz
- Department of Pathology, Center for Integrated Diagnostics, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Roberto Salgado
- Division of Research, Peter MacCallum Cancer Centre, Melbourne, Australia; Department of Pathology, GZA-ZNA Hospitals, Antwerp, Belgium
- Brandon D Gallas
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
3
Qian B, Chen H, Wang X, Guan Z, Li T, Jin Y, Wu Y, Wen Y, Che H, Kwon G, Kim J, Choi S, Shin S, Krause F, Unterdechler M, Hou J, Feng R, Li Y, El Habib Daho M, Yang D, Wu Q, Zhang P, Yang X, Cai Y, Tan GSW, Cheung CY, Jia W, Li H, Tham YC, Wong TY, Sheng B. DRAC 2022: A public benchmark for diabetic retinopathy analysis on ultra-wide optical coherence tomography angiography images. Patterns (N Y) 2024; 5:100929. [PMID: 38487802] [PMCID: PMC10935505] [DOI: 10.1016/j.patter.2024.100929]
Abstract
We describe the "DRAC - Diabetic Retinopathy Analysis Challenge", held in conjunction with the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022). For this challenge, we provided the DRAC dataset, an ultra-wide optical coherence tomography angiography (UW-OCTA) dataset of 1,103 images, addressing three primary clinical tasks: diabetic retinopathy (DR) lesion segmentation, image quality assessment, and DR grading. The scientific community responded positively, with 11, 12, and 13 teams submitting solutions for these three tasks, respectively. This paper presents a concise summary and analysis of the top-performing solutions and results across all challenge tasks. These solutions could provide practical guidance for developing accurate classification and segmentation models for image quality assessment and DR diagnosis using UW-OCTA images, potentially improving the diagnostic capabilities of healthcare professionals. The dataset has been released to support the development of computer-aided diagnostic systems for DR evaluation.
Affiliation(s)
- Bo Qian
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong 999077, China
- Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong 999077, China
- Xiangning Wang
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai 200233, China
- Zhouyu Guan
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Tingyao Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yixiao Jin
- Tsinghua Medicine, Tsinghua University, Beijing 100084, China
- Yilan Wu
- Tsinghua Medicine, Tsinghua University, Beijing 100084, China
- Yang Wen
- School of Electronic and Information Engineering, Shenzhen University, Shenzhen 518060, China
- Haoxuan Che
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong 999077, China
- Sungjin Choi
- AI/DX Convergence Business Group, KT, Seongnam 13606, Korea
- Seoyoung Shin
- AI/DX Convergence Business Group, KT, Seongnam 13606, Korea
- Felix Krause
- Johannes Kepler University Linz, Linz 4040, Austria
- Junlin Hou
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200433, China
- Rui Feng
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200433, China
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Yihao Li
- LaTIM UMR 1101, INSERM, 29609 Brest, France
- University of Western Brittany, 29238 Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, INSERM, 29609 Brest, France
- University of Western Brittany, 29238 Brest, France
- Dawei Yang
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong 999077, China
- Qiang Wu
- Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai 200233, China
- Ping Zhang
- Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, USA
- Department of Biomedical Informatics, The Ohio State University, Columbus, OH 43210, USA
- Translational Data Analytics Institute, The Ohio State University, Columbus, OH 43210, USA
- Xiaokang Yang
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yiyu Cai
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Gavin Siew Wei Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Carol Y. Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong 999077, China
- Weiping Jia
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Huating Li
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- Yih Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- Centre for Innovation and Precision Eye Health; and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 119228, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore 169857, Singapore
- Tien Yin Wong
- Tsinghua Medicine, Tsinghua University, Beijing 100084, China
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 168751, Singapore
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing 102218, China
- Bin Sheng
- Shanghai Belt and Road International Joint Laboratory for Intelligent Prevention and Treatment of Metabolic Disorders, Department of Computer Science and Engineering, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Diabetes Institute, Shanghai Clinical Center for Diabetes, Shanghai 200240, China
- MOE Key Laboratory of AI, School of Electronic, Information, and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
4
Welcome to the second issue of the Journal of Medical Imaging (JMI) for the 2024 year! J Med Imaging (Bellingham) 2024; 11:020101. [PMID: 38690229] [DOI: 10.1117/1.JMI.11.2.020101]
Abstract
Editor-in-Chief Bennett A. Landman (Vanderbilt University) provides opening remarks for the current issue of JMI, with specific commentary on medical imaging community "challenges" and their potential to coalesce creative energies.
5
Bano S, Casella A, Vasconcelos F, Qayyum A, Benzinou A, Mazher M, Meriaudeau F, Lena C, Cintorrino IA, De Paolis GR, Biagioli J, Grechishnikova D, Jiao J, Bai B, Qiao Y, Bhattarai B, Gaire RR, Subedi R, Vazquez E, Płotka S, Lisowska A, Sitek A, Attilakos G, Wimalasundera R, David AL, Paladini D, Deprest J, De Momi E, Mattos LS, Moccia S, Stoyanov D. Placental vessel segmentation and registration in fetoscopy: Literature review and MICCAI FetReg2021 challenge findings. Med Image Anal 2024; 92:103066. [PMID: 38141453] [DOI: 10.1016/j.media.2023.103066]
Abstract
Fetoscopy laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS). The procedure involves photocoagulation of pathological anastomoses to restore a physiological blood exchange between the twins. The procedure is particularly challenging for the surgeon due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility caused by amniotic fluid turbidity, and variability in illumination. These challenges may lead to increased surgery time and incomplete ablation of pathological anastomoses, resulting in persistent TTTS. Computer-assisted intervention (CAI) can provide TTTS surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking. Research in this domain has been hampered by the lack of high-quality data for designing, developing and testing CAI algorithms. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg2021) challenge, organized as part of the MICCAI2021 Endoscopic Vision (EndoVis) challenge, we released the first large-scale multi-center TTTS dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. For this challenge, we released a dataset of 2060 images, pixel-annotated for vessel, tool, fetus and background classes, from 18 in-vivo TTTS fetoscopy procedures, together with 18 short video clips of an average length of 411 frames, for developing placental scene segmentation and frame registration techniques for mosaicking. Seven teams participated, and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic procedures and 6 short clips.
For the segmentation task, the overall baseline was the top performer (aggregated mIoU of 0.6763) and was best on the vessel class (mIoU of 0.5817), while team RREB was best on the tool (mIoU of 0.6335) and fetus (mIoU of 0.5178) classes. For the registration task, the baseline performed better than team SANO, with an overall mean 5-frame SSIM of 0.9348. Qualitatively, team SANO performed better in planar scenarios, while the baseline was better in non-planar scenarios. The detailed analysis showed that no single team performed best on all 6 test fetoscopic videos. The challenge provided an opportunity to create generalized solutions for fetoscopic scene understanding and mosaicking. In this paper, we present the findings of the FetReg2021 challenge, alongside a detailed literature review of CAI in TTTS fetoscopy. Through this challenge, its analysis and the release of multi-center fetoscopic data, we provide a benchmark for future research in this field.
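As context for the per-class and aggregated mIoU figures above, here is a minimal sketch of per-class intersection-over-union and its mean. This is illustrative NumPy code under assumed conventions (classes absent from both masks are excluded from the mean), not the challenge's official evaluation script.

```python
import numpy as np

def mean_iou(gt: np.ndarray, pred: np.ndarray, n_classes: int) -> float:
    """Mean intersection-over-union over classes present in gt or pred.

    IoU_c = |gt==c ∩ pred==c| / |gt==c ∪ pred==c|
    """
    ious = []
    for c in range(n_classes):
        g = gt == c
        p = pred == c
        union = np.logical_or(g, p).sum()
        if union == 0:
            continue  # class absent from both masks; excluded from the mean
        intersection = np.logical_and(g, p).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))
```

A challenge-style aggregate would average these scores over all annotated test frames.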
Affiliation(s)
- Sophia Bano
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK.
- Alessandro Casella
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Francisco Vasconcelos
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Moona Mazher
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, Spain
- Chiara Lena
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Gaia Romana De Paolis
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Jessica Biagioli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Bizhe Bai
- Medical Computer Vision and Robotics Group, Department of Mathematical and Computational Sciences, University of Toronto, Canada
- Yanyan Qiao
- Shanghai MicroPort MedBot (Group) Co., Ltd, China
- Binod Bhattarai
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
- Ronast Subedi
- NepAL Applied Mathematics and Informatics Institute for Research, Nepal
- Szymon Płotka
- Sano Center for Computational Medicine, Poland; Quantitative Healthcare Analysis Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Arkadiusz Sitek
- Sano Center for Computational Medicine, Poland; Center for Advanced Medical Computing and Simulation, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- George Attilakos
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK
- Ruwan Wimalasundera
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK
- Anna L David
- Fetal Medicine Unit, Elizabeth Garrett Anderson Wing, University College London Hospital, UK; EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK; Department of Development and Regeneration, University Hospital Leuven, Belgium
- Dario Paladini
- Department of Fetal and Perinatal Medicine, Istituto "Giannina Gaslini", Italy
- Jan Deprest
- EGA Institute for Women's Health, Faculty of Population Health Sciences, University College London, UK; Department of Development and Regeneration, University Hospital Leuven, Belgium
- Elena De Momi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Italy
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS) and Department of Computer Science, University College London, UK
6
Sudre CH, Van Wijnen K, Dubost F, Adams H, Atkinson D, Barkhof F, Birhanu MA, Bron EE, Camarasa R, Chaturvedi N, Chen Y, Chen Z, Chen S, Dou Q, Evans T, Ezhov I, Gao H, Girones Sanguesa M, Gispert JD, Gomez Anson B, Hughes AD, Ikram MA, Ingala S, Jaeger HR, Kofler F, Kuijf HJ, Kutnar D, Lee M, Li B, Lorenzini L, Menze B, Molinuevo JL, Pan Y, Puybareau E, Rehwald R, Su R, Shi P, Smith L, Tillin T, Tochon G, Urien H, van der Velden BHM, van der Velpen IF, Wiestler B, Wolters FJ, Yilmaz P, de Groot M, Vernooij MW, de Bruijne M. Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021. Med Image Anal 2024; 91:103029. [PMID: 37988921] [DOI: 10.1016/j.media.2023.103029]
Abstract
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge, run as a satellite event of the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated, proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds, and 6 for Task 3-Lacunes). Multi-cohort data were used in both training and evaluation. Results showed large variability in performance across both teams and tasks, with promising results for Task 1-EPVS and Task 2-Microbleeds, and results not yet practically useful for Task 3-Lacunes. The challenge also highlighted performance inconsistency across cases, which may deter use at the individual level, while still proving useful at the population level.
Affiliation(s)
- Carole H Sudre
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom; Centre for Medical Image Computing, University College London, London, United Kingdom; School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom.
- Kimberlin Van Wijnen
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Florian Dubost
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Hieab Adams
- Department of Clinical Genetics and Radiology, Erasmus MC, Rotterdam, The Netherlands
- David Atkinson
- Centre for Medical Imaging, University College London, London, United Kingdom
- Frederik Barkhof
- Centre for Medical Image Computing, University College London, London, United Kingdom; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- Mahlet A Birhanu
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Esther E Bron
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Robin Camarasa
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Nish Chaturvedi
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- Yuan Chen
- Department of Radiology, University of Massachusetts Medical School, Worcester, USA
- Zihao Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shuai Chen
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Tavia Evans
- Department of Clinical Genetics and Radiology, Erasmus MC, Rotterdam, The Netherlands
- Ivan Ezhov
- Department of Informatics, Technische Universität München, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Haojun Gao
- Department of Radiology, Zhejiang University, Hangzhou, China
- Juan Domingo Gispert
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain; Centro de Investigación Biomédica en Red Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona, Spain
- Alun D Hughes
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- M Arfan Ikram
- Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Silvia Ingala
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- H Rolf Jaeger
- Institute of Neurology, University College London, London, United Kingdom
- Florian Kofler
- Department of Informatics, Technische Universität München, Munich, Germany; Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Hugo J Kuijf
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Denis Kutnar
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Bo Li
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Luigi Lorenzini
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- Bjoern Menze
- Department of Informatics, Technische Universität München, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Jose Luis Molinuevo
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; H. Lundbeck A/S, Copenhagen, Denmark
- Yiwei Pan
- Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Rafael Rehwald
- Institute of Neurology, University College London, London, United Kingdom
- Ruisheng Su
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Pengcheng Shi
- Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Therese Tillin
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- Hélène Urien
- ISEP-Institut Supérieur d'Électronique de Paris, Issy-les-Moulineaux, France
- Isabelle F van der Velpen
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany
- Frank J Wolters
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Pinar Yilmaz
- Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Marius de Groot
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; GlaxoSmithKline Research, Stevenage, United Kingdom
- Meike W Vernooij
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Marleen de Bruijne
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
7
Kumari V, Kumar N, Kumar K S, Kumar A, Skandha SS, Saxena S, Khanna NN, Laird JR, Singh N, Fouda MM, Saba L, Singh R, Suri JS. Deep Learning Paradigm and Its Bias for Coronary Artery Wall Segmentation in Intravascular Ultrasound Scans: A Closer Look. J Cardiovasc Dev Dis 2023; 10:485. [PMID: 38132653 PMCID: PMC10743870 DOI: 10.3390/jcdd10120485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/27/2023] [Revised: 10/15/2023] [Accepted: 11/07/2023] [Indexed: 12/23/2023]
Abstract
BACKGROUND AND MOTIVATION Coronary artery disease (CAD) has the highest mortality rate; therefore, its diagnosis is vital. Intravascular ultrasound (IVUS) is a high-resolution imaging solution that can image coronary arteries, but the diagnosis software via wall segmentation and quantification has been evolving. In this study, a deep learning (DL) paradigm was explored along with its bias. METHODS Using a PRISMA model, 145 best UNet-based and non-UNet-based methods for wall segmentation were selected and analyzed for their characteristics and scientific and clinical validation. This study computed the coronary wall thickness by estimating the inner and outer borders of the coronary artery IVUS cross-sectional scans. Further, the review explored the bias in the DL system for the first time when it comes to wall segmentation in IVUS scans. Three bias methods, namely (i) ranking, (ii) radial, and (iii) regional area, were applied and compared using a Venn diagram. Finally, the study presented explainable AI (XAI) paradigms in the DL framework. FINDINGS AND CONCLUSIONS UNet provides a powerful paradigm for the segmentation of coronary walls in IVUS scans due to its ability to extract automated features at different scales in encoders, reconstruct the segmented image using decoders, and embed the variants in skip connections. Most of the research was hampered by a lack of motivation for XAI and pruned AI (PAI) models. None of the UNet models met the criteria for bias-free design. For clinical assessment and settings, it is necessary to move from a paper-to-practice approach.
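The wall-thickness computation described in this abstract amounts to measuring, at each angle around the vessel center, the radial gap between the inner (lumen) and outer (media-adventitia) borders of the IVUS cross-section. A minimal sketch under that polar-sampling assumption (this is not the authors' implementation; the function names are illustrative):

```python
import numpy as np

def radial_wall_thickness(inner, outer, center, n_angles=360):
    """Estimate wall thickness around a cross-section as the difference
    between outer and inner border radii sampled at regular angles.
    `inner` and `outer` are (N, 2) arrays of (x, y) border points."""
    def radius_at(points, theta):
        # pick the border point whose polar angle (w.r.t. the center)
        # is closest to theta, and return its distance from the center
        d = points - center
        angles = np.arctan2(d[:, 1], d[:, 0])
        idx = np.argmin(np.abs(np.angle(np.exp(1j * (angles - theta)))))
        return np.hypot(*d[idx])

    thetas = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    return np.array([radius_at(outer, t) - radius_at(inner, t)
                     for t in thetas])
```

Nearest-angle sampling keeps the sketch short; a real pipeline would interpolate the contours before differencing the radii.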
Affiliation(s)
- Vandana Kumari
- School of Computer Science and Engineering, Galgotias University, Greater Noida 201310, India
- Naresh Kumar
- Department of Applied Computational Science and Engineering, G L Bajaj Institute of Technology and Management, Greater Noida 201310, India
- Sampath Kumar K
- School of Computer Science and Engineering, Galgotias University, Greater Noida 201310, India
- Ashish Kumar
- School of CSET, Bennett University, Greater Noida 201310, India
- Sanagala S. Skandha
- Department of CSE, CMR College of Engineering and Technology, Hyderabad 501401, India
- Sanjay Saxena
- Department of Computer Science and Engineering, IIT Bhubaneswar, Bhubaneswar 751003, India
- Narendra N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110076, India
- John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St Helena, CA 94574, USA
- Narpinder Singh
- Department of Food Science and Technology, Graphic Era, Deemed to be University, Dehradun 248002, India
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09100 Cagliari, Italy
- Rajesh Singh
- Department of Research and Innovation, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun 248007, India
- Jasjit S. Suri
- Stroke Diagnostics and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Department of Computer Science & Engineering, Graphic Era, Deemed to be University, Dehradun 248002, India
- Monitoring and Diagnosis Division, AtheroPoint™, Roseville, CA 95661, USA
8
Andrearczyk V, Oreiller V, Boughdad S, Le Rest CC, Tankyevych O, Elhalawani H, Jreige M, Prior JO, Vallières M, Visvikis D, Hatt M, Depeursinge A. Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: Findings from the second edition of the HECKTOR challenge. Med Image Anal 2023; 90:102972. [PMID: 37742374 DOI: 10.1016/j.media.2023.102972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/20/2022] [Revised: 07/27/2023] [Accepted: 09/14/2023] [Indexed: 09/26/2023]
Abstract
By focusing on metabolic and morphological tissue properties respectively, FluoroDeoxyGlucose (FDG)-Positron Emission Tomography (PET) and Computed Tomography (CT) modalities include complementary and synergistic information for cancerous lesion delineation and characterization (e.g. for outcome prediction), in addition to usual clinical variables. This is especially true in Head and Neck Cancer (HNC). The goal of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge was to develop and compare modern image analysis methods to best extract and leverage this information automatically. We present here the post-analysis of HECKTOR 2nd edition, at the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021. The scope of the challenge was substantially expanded compared to the first edition, by providing a larger population (adding patients from a new clinical center) and proposing an additional task to the challengers, namely the prediction of Progression-Free Survival (PFS). To this end, the participants were given access to a training set of 224 cases from 5 different centers, each with a pre-treatment FDG-PET/CT scan and clinical variables. Their methods were subsequently evaluated on a held-out test set of 101 cases from two centers. For the segmentation task (Task 1), the ranking was based on a Borda counting of their ranks according to two metrics: mean Dice Similarity Coefficient (DSC) and median Hausdorff Distance at 95th percentile (HD95). For the PFS prediction task, challengers could use the tumor contours provided by experts (Task 3) or rely on their own (Task 2). The ranking was obtained according to the Concordance index (C-index) calculated on the predicted risk scores. A total of 103 teams registered for the challenge, for a total of 448 submissions and 29 papers. The best method in the segmentation task obtained an average DSC of 0.759, and the best predictions of PFS obtained a C-index of 0.717 (without relying on the provided contours) and 0.698 (using the expert contours). An interesting finding was that best PFS predictions were reached by relying on DL approaches (with or without explicit tumor segmentation, 4 out of the 5 best ranked) compared to standard radiomics methods using handcrafted features extracted from delineated tumors, and by exploiting alternative tumor contours (automated and/or larger volumes encompassing surrounding tissues) rather than relying on the expert contours. This second edition of the challenge confirmed the promising performance of fully automated primary tumor delineation in PET/CT images of HNC patients, although there is still a margin for improvement in some difficult cases. For the first time, the prediction of outcome was also addressed and the best methods reached relatively good performance (C-index above 0.7). Both results constitute another step forward toward large-scale outcome prediction studies in HNC.
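Both ranking metrics used above have compact definitions. A minimal sketch of the Dice similarity coefficient and Harrell's concordance index (illustrative only, not the HECKTOR evaluation code; this simplified C-index treats only pairs whose earlier time is an observed event as comparable):

```python
import numpy as np

def dice(gt, pred, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    gt, pred = np.asarray(gt, bool), np.asarray(pred, bool)
    return 2.0 * np.logical_and(gt, pred).sum() / (gt.sum() + pred.sum() + eps)

def concordance_index(times, events, risks):
    """Among comparable patient pairs (the earlier time is an observed
    event), the fraction where the earlier-progressing patient got the
    higher risk score; risk ties count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # comparable pair
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random risk ordering, which is why values above 0.7 are regarded as relatively good performance here.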
Affiliation(s)
- Vincent Andrearczyk
- Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland.
- Valentin Oreiller
- Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Catherine Cheze Le Rest
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France; Poitiers University Hospital, Nuclear Medicine, Poitiers, France
- Olena Tankyevych
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France; Poitiers University Hospital, Nuclear Medicine, Poitiers, France
- Hesham Elhalawani
- Cleveland Clinic Foundation, Department of Radiation Oncology, Cleveland, OH, United States of America
- Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Adrien Depeursinge
- Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
9
Nwoye CI, Yu T, Sharma S, Murali A, Alapatt D, Vardazaryan A, Yuan K, Hajek J, Reiter W, Yamlahi A, Smidt FH, Zou X, Zheng G, Oliveira B, Torres HR, Kondo S, Kasai S, Holm F, Özsoy E, Gui S, Li H, Raviteja S, Sathish R, Poudel P, Bhattarai B, Wang Z, Rui G, Schellenberg M, Vilaça JL, Czempiel T, Wang Z, Sheet D, Thapa SK, Berniker M, Godau P, Morais P, Regmi S, Tran TN, Fonseca J, Nölke JH, Lima E, Vazquez E, Maier-Hein L, Navab N, Mascagni P, Seeliger B, Gonzalez C, Mutter D, Padoy N. CholecTriplet2022: Show me a tool and tell me the triplet - An endoscopic vision challenge for surgical action triplet detection. Med Image Anal 2023; 89:102888. [PMID: 37451133 DOI: 10.1016/j.media.2023.102888] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/10/2023] [Revised: 06/23/2023] [Accepted: 06/28/2023] [Indexed: 07/18/2023]
Abstract
Formalizing surgical activities as triplets of the used instruments, actions performed, and target anatomies is becoming a gold standard approach for surgical activity modeling. The benefit is that this formalization helps to obtain a more detailed understanding of tool-tissue interaction which can be used to develop better Artificial Intelligence assistance for image-guided surgery. Earlier efforts and the CholecTriplet challenge introduced in 2021 have put together techniques aimed at recognizing these triplets from surgical footage. Estimating also the spatial locations of the triplets would offer a more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly-supervised bounding box localization of every visible surgical instrument (or tool), as the key actors, and the modeling of each tool-activity in the form of ‹instrument, verb, target› triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods, an in-depth analysis of the obtained results across multiple metrics, visual and procedural challenges; their significance, and useful insights for future research directions and applications in surgery.
Affiliation(s)
- Tong Yu
- ICube, University of Strasbourg, CNRS, France
- Kun Yuan
- ICube, University of Strasbourg, CNRS, France; Technical University Munich, Germany
- Amine Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Finn-Henri Smidt
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Xiaoyang Zou
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, China
- Bruno Oliveira
- 2Ai School of Technology, IPCA, Barcelos, Portugal; Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Helena R Torres
- 2Ai School of Technology, IPCA, Barcelos, Portugal; Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Ege Özsoy
- Technical University Munich, Germany
- Han Li
- Southern University of Science and Technology, China
- Melanie Schellenberg
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Zhenkun Wang
- Southern University of Science and Technology, China
- Shrawan Kumar Thapa
- Nepal Applied Mathematics and Informatics Institute for Research (NAAMII), Nepal
- Patrick Godau
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Pedro Morais
- 2Ai School of Technology, IPCA, Barcelos, Portugal
- Sudarshan Regmi
- Nepal Applied Mathematics and Informatics Institute for Research (NAAMII), Nepal
- Thuy Nuong Tran
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jaime Fonseca
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
- Jan-Hinrich Nölke
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Estevão Lima
- Life and Health Science Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal
- Lena Maier-Hein
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pietro Mascagni
- Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
- Barbara Seeliger
- ICube, University of Strasbourg, CNRS, France; University Hospital of Strasbourg, France; IHU Strasbourg, France
- Didier Mutter
- University Hospital of Strasbourg, France; IHU Strasbourg, France
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, France
10
Payette K, Li HB, de Dumast P, Licandro R, Ji H, Siddiquee MMR, Xu D, Myronenko A, Liu H, Pei Y, Wang L, Peng Y, Xie J, Zhang H, Dong G, Fu H, Wang G, Rieu Z, Kim D, Kim HG, Karimi D, Gholipour A, Torres HR, Oliveira B, Vilaça JL, Lin Y, Avisdris N, Ben-Zvi O, Bashat DB, Fidon L, Aertsen M, Vercauteren T, Sobotka D, Langs G, Alenyà M, Villanueva MI, Camara O, Fadida BS, Joskowicz L, Weibin L, Yi L, Xuesong L, Mazher M, Qayyum A, Puig D, Kebiri H, Zhang Z, Xu X, Wu D, Liao K, Wu Y, Chen J, Xu Y, Zhao L, Vasung L, Menze B, Cuadra MB, Jakab A. Fetal brain tissue annotation and segmentation challenge results. Med Image Anal 2023; 88:102833. [PMID: 37267773 DOI: 10.1016/j.media.2023.102833] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 04/20/2022] [Revised: 03/16/2023] [Accepted: 04/20/2023] [Indexed: 06/04/2023]
Abstract
In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 in order to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability present in the network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine tuning done during training, and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm performed significantly better than the other submissions, and consisted of an asymmetrical U-Net network architecture. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
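The label-level ensembling used by several of the top teams can be as simple as a per-voxel majority vote over the candidate multi-class segmentations. An illustrative sketch (this is not any team's actual code, and assumes consecutive integer class labels):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse several multi-class segmentations by per-voxel majority vote.
    `label_maps` is a list of integer label arrays of identical shape."""
    stack = np.stack(label_maps)                # (n_models, *volume_shape)
    n_classes = int(stack.max()) + 1
    # count votes per class at each voxel, then keep the winning class
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)
```

Ties resolve to the lowest class index with `argmax`; real pipelines often break ties with per-model confidence maps instead.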
Affiliation(s)
- Kelly Payette
- Center for MR Research, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich, Zurich, Switzerland.
| | - Hongwei Bran Li
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; Department of Informatics, Technical University of Munich, Munich, Germany
| | - Priscille de Dumast
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM, Center for Biomedical Imaging, Lausanne, Switzerland
| | - Roxane Licandro
- Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, United States; Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab (CIR), Medical University of Vienna, Vienna, Austria
| | - Hui Ji
- Center for MR Research, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich, Zurich, Switzerland
| | | | | | | | - Hao Liu
- Shanghai Jiaotong University, China
| | | | | | - Ying Peng
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Juanying Xie
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Huiquan Zhang
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Guiming Dong
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Hao Fu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - ZunHyan Rieu
- Research Institute, NEUROPHET Inc., Seoul 06247, South Korea
| | - Donghyeon Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, South Korea
| | - Hyun Gi Kim
- Department of Radiology, The Catholic University of Korea, Eunpyeong St. Mary's Hospital, Seoul 06247, South Korea
| | - Davood Karimi
- Boston Children's Hospital and Harvard Medical School, Boston, MA, United States
| | - Ali Gholipour
- Boston Children's Hospital and Harvard Medical School, Boston, MA, United States
| | - Helena R Torres
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga Guimarães, Portugal
| | - Bruno Oliveira
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga Guimarães, Portugal
| | - João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
| | - Yang Lin
- Department of Computer Science, Hong Kong University of Science and Technology, China
| | - Netanell Avisdris
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel; Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Israel
| | - Ori Ben-Zvi
- Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Israel; Sagol School of Neuroscience, Tel Aviv University, Israel
| | - Dafna Ben Bashat
- Sagol School of Neuroscience, Tel Aviv University, Israel; Sackler Faculty of Medicine, Tel Aviv University, Israel
| | - Lucas Fidon
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EU, United Kingdom
| | - Michael Aertsen
- Department of Radiology, University Hospitals Leuven, Leuven 3000, Belgium
| | - Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EU, United Kingdom
| | - Daniel Sobotka
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
| | - Georg Langs
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
| | - Mireia Alenyà
- BCN-MedTech, Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain
| | - Maria Inmaculada Villanueva
- Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain; Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
| | - Oscar Camara
- BCN-MedTech, Department of Information and Communications Technologies, Universitat Pompeu Fabra, Barcelona, Spain
| | - Bella Specktor Fadida
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
| | - Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel
| | - Liao Weibin
- School of Computer Science, Beijing Institute of Technology, China
| | - Lv Yi
- School of Computer Science, Beijing Institute of Technology, China
| | - Li Xuesong
- School of Computer Science, Beijing Institute of Technology, China
| | - Moona Mazher
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, Spain
- Domenec Puig
- Department of Computer Engineering and Mathematics, University Rovira i Virgili, Spain
- Hamza Kebiri
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM, Center for Biomedical Imaging, Lausanne, Switzerland
- Zelin Zhang
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Yuquan Campus, Hangzhou, China
- Xinyi Xu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Yuquan Campus, Hangzhou, China
- Dan Wu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Yuquan Campus, Hangzhou, China
- Yixuan Wu
- Zhejiang University, Hangzhou, China
- Yunzhi Xu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Yuquan Campus, Hangzhou, China
- Li Zhao
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Yuquan Campus, Hangzhou, China
- Lana Vasung
- Division of Newborn Medicine, Department of Pediatrics, Boston Children's Hospital, United States; Department of Pediatrics, Harvard Medical School, United States
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Meritxell Bach Cuadra
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; CIBM, Center for Biomedical Imaging, Lausanne, Switzerland
- Andras Jakab
- Center for MR Research, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich, Zurich, Switzerland; University Research Priority Project Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zürich, Zurich, Switzerland
11
Li J, Ellis DG, Kodym O, Rauschenbach L, Rieß C, Sure U, Wrede KH, Alvarez CM, Wodzinski M, Daniol M, Hemmerling D, Mahdi H, Clement A, Kim E, Fishman Z, Whyne CM, Mainprize JG, Hardisty MR, Pathak S, Sindhura C, Gorthi RKSS, Kiran DV, Gorthi S, Yang B, Fang K, Li X, Kroviakov A, Yu L, Jin Y, Pepe A, Gsaxner C, Herout A, Alves V, Španěl M, Aizenberg MR, Kleesiek J, Egger J. Towards clinical applicability and computational efficiency in automatic cranial implant design: An overview of the AutoImplant 2021 cranial implant design challenge. Med Image Anal 2023; 88:102865. [PMID: 37331241 DOI: 10.1016/j.media.2023.102865]
Abstract
Cranial implants are commonly used for surgical repair of craniectomy-induced skull defects. These implants are usually generated offline and may require days to weeks to be available. An automated implant design process combined with onsite manufacturing facilities can guarantee immediate implant availability and avoid secondary intervention. To address this need, the AutoImplant II challenge was organized in conjunction with MICCAI 2021, catering for the unmet clinical and computational requirements of automatic cranial implant design. The first edition of AutoImplant (AutoImplant I, 2020) demonstrated the general capabilities and effectiveness of data-driven approaches, including deep learning, for a skull shape completion task on synthetic defects. The second AutoImplant challenge (i.e., AutoImplant II, 2021) built upon the first by adding real clinical craniectomy cases as well as additional synthetic imaging data. The AutoImplant II challenge consisted of three tracks. Tracks 1 and 3 used skull images with synthetic defects to evaluate the ability of submitted approaches to generate implants that recreate the original skull shape. Track 3 consisted of the data from the first challenge (i.e., 100 cases for training, and 110 for evaluation), and Track 1 provided 570 training and 100 validation cases aimed at evaluating skull shape completion algorithms on diverse defect patterns. Track 2 also made progress over the first challenge by providing 11 clinically defective skulls and evaluating the submitted implant designs on these clinical cases. The submitted designs were evaluated quantitatively against post-craniectomy imaging data as well as by an experienced neurosurgeon. Submissions to these challenge tasks made substantial progress in addressing issues such as generalizability, computational efficiency, data augmentation, and implant refinement.
This paper serves as a comprehensive summary and comparison of the submissions to the AutoImplant II challenge. Codes and models are available at https://github.com/Jianningli/Autoimplant_II.
Affiliation(s)
- Jianning Li
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria.
- David G Ellis
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Oldřich Kodym
- Graph@FIT, Brno University of Technology, Brno, Czech Republic
- Laurèl Rauschenbach
- Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Christoph Rieß
- Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Ulrich Sure
- Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Karsten H Wrede
- Department of Neurosurgery and Spine Surgery, University Hospital Essen, Hufelandstrasse 55, 45147 Essen, Germany
- Carlos M Alvarez
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Marek Wodzinski
- AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland; University of Applied Sciences Western Switzerland (HES-SO Valais), Information Systems Institute, Sierre, Switzerland
- Mateusz Daniol
- AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland
- Daria Hemmerling
- AGH University of Science and Technology, Department of Measurement and Electronics, Krakow, Poland
- Hamza Mahdi
- Sunnybrook Research Institute, Toronto, ON, Canada
- Evan Kim
- Sunnybrook Research Institute, Toronto, ON, Canada
- Cari M Whyne
- Sunnybrook Research Institute, Toronto, ON, Canada; Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, M5T 1P5, Canada
- James G Mainprize
- Sunnybrook Research Institute, Toronto, ON, Canada; Calavera Surgical Design Inc., Toronto, ON, Canada
- Michael R Hardisty
- Sunnybrook Research Institute, Toronto, ON, Canada; Division of Orthopaedic Surgery, University of Toronto, Toronto, ON, M5T 1P5, Canada
- Shashwat Pathak
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Chitimireddy Sindhura
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Degala Venkata Kiran
- Department of Mechanical Engineering, Indian Institute of Technology, Tirupati, India
- Subrahmanyam Gorthi
- Department of Electrical Engineering, Indian Institute of Technology, Tirupati, India
- Bokai Yang
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Ke Fang
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Xingyu Li
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Artem Kroviakov
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Lei Yu
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria
- Adam Herout
- Graph@FIT, Brno University of Technology, Brno, Czech Republic
- Victor Alves
- ALGORITMI Research Centre/LASI, University of Minho, Braga, Portugal
- Michele R Aizenberg
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, 68198, USA
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jan Egger
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria.
12
Valente S, Morais P, Torres HR, Oliveira B, Buschle LR, Fritz A, Correia-Pinto J, Lima E, Vilaca JL. A Comparative Study of Deep Learning Methods for Multi-Class Semantic Segmentation of 2D Kidney Ultrasound Images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083246 DOI: 10.1109/embc40787.2023.10341170]
Abstract
Ultrasound (US) imaging is a widely used medical imaging modality for the diagnosis, monitoring, and surgical planning of kidney conditions. Accurate segmentation of the kidney and its internal structures in US images is therefore essential for the assessment of kidney function and the detection of pathological conditions such as cysts, tumors, and kidney stones, and automated methods for this task are needed. Over the years, automatic strategies have been proposed for this purpose, with deep learning methods achieving the current state-of-the-art results. However, these strategies typically ignore the segmentation of the internal structures of the kidney. Moreover, they were evaluated on different private datasets, hampering the direct comparison of results and making it difficult to determine the optimal strategy for this task. In this study, we perform a comparative analysis of 7 deep learning networks for the segmentation of the kidney and its internal structures (Capsule, Central Echogenic Complex (CEC), Cortex, and Medulla) in 2D US images on an open-access multi-class kidney US dataset. The dataset includes 514 images, acquired in multiple clinical centers using different US machines and protocols. The dataset contains annotations from two experts, but only the 321 images with complete segmentation of all 4 classes were used. Overall, the results demonstrate that the DeepLabV3+ network outperformed the inter-rater variability, with a Dice score of 78.0% compared to 75.6%. Specifically, DeepLabV3+ achieved mean Dice scores of 94.2% for the Capsule, 85.8% for the CEC, 62.4% for the Cortex, and 69.6% for the Medulla.
These findings suggest the potential of deep learning-based methods for improving the accuracy of kidney segmentation in US images. Clinical Relevance: This study shows the potential of DL for improving the accuracy of kidney segmentation in US, leading to increased diagnostic efficiency and enabling new applications such as computer-aided diagnosis and treatment, ultimately resulting in improved patient outcomes and reduced healthcare costs.
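As a side note (not code from the cited paper): the Dice scores reported in this and several other entries in this list share the same definition, DSC = 2|A∩B| / (|A|+|B|). A minimal sketch on binary masks, assuming masks are given as flat 0/1 sequences, might look like this:

```python
def dice(pred, truth):
    """Dice similarity coefficient of two binary masks (flat 0/1 sequences).

    DSC = 2 * |A intersect B| / (|A| + |B|); defined as 1.0 when both
    masks are empty (a common convention, not universal).
    """
    inter = sum(p and t for p, t in zip(pred, truth))  # overlapping foreground
    total = sum(pred) + sum(truth)                     # foreground in each mask
    return 2.0 * inter / total if total else 1.0
```

For example, a 2-pixel prediction overlapping a 1-pixel ground truth in one pixel gives 2·1/(2+1) = 2/3.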
13
Clunie DA, Flanders A, Taylor A, Erickson B, Bialecki B, Brundage D, Gutman D, Prior F, Seibert JA, Perry J, Gichoya JW, Kirby J, Andriole K, Geneslaw L, Moore S, Fitzgerald TJ, Tellis W, Xiao Y, Farahani K, Luo J, Rosenthal A, Kandarpa K, Rosen R, Goetz K, Babcock D, Xu B, Hsiao J. Report of the Medical Image De-Identification (MIDI) Task Group - Best Practices and Recommendations. ArXiv 2023:arXiv:2303.10473v2. [PMID: 37033463 PMCID: PMC10081345]
Affiliation(s)
- Fred Prior
- University of Arkansas for Medical Sciences
- Justin Kirby
- Frederick National Laboratory for Cancer Research
- Ying Xiao
- University of Pennsylvania Health System
- James Luo
- National Heart, Lung, and Blood Institute (NHLBI)
- Alex Rosenthal
- National Institute of Allergy and Infectious Diseases (NIAID)
- Kris Kandarpa
- National Institute of Biomedical Imaging and Bioengineering (NIBIB)
- Rebecca Rosen
- Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD)
- Debra Babcock
- National Institute of Neurological Disorders and Stroke (NINDS)
- Ben Xu
- National Institute on Alcohol Abuse and Alcoholism (NIAAA)
14
Andrearczyk V, Oreiller V, Abobakr M, Akhavanallaf A, Balermpas P, Boughdad S, Capriotti L, Castelli J, Le Rest CC, Decazes P, Correia R, El-Habashy D, Elhalawani H, Fuller CD, Jreige M, Khamis Y, La Greca A, Mohamed A, Naser M, Prior JO, Ruan S, Tanadini-Lang S, Tankyevych O, Salimi Y, Vallières M, Vera P, Visvikis D, Wahid K, Zaidi H, Hatt M, Depeursinge A. Overview of the HECKTOR Challenge at MICCAI 2022: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT. Head Neck Tumor Chall (2022) 2023; 13626:1-30. [PMID: 37195050 PMCID: PMC10171217 DOI: 10.1007/978-3-031-27420-6_1]
Abstract
This paper presents an overview of the third edition of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge, organized as a satellite event of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022. The challenge comprises two tasks related to the automatic analysis of FDG-PET/CT images for patients with Head and Neck cancer (H&N), focusing on the oropharynx region. Task 1 is the fully automatic segmentation of H&N primary Gross Tumor Volume (GTVp) and metastatic lymph nodes (GTVn) from FDG-PET/CT images. Task 2 is the fully automatic prediction of Recurrence-Free Survival (RFS) from the same FDG-PET/CT and clinical data. The data were collected from nine centers for a total of 883 cases consisting of FDG-PET/CT images and clinical information, split into 524 training and 359 test cases. The best methods obtained an aggregated Dice Similarity Coefficient (DSCagg) of 0.788 in Task 1, and a Concordance index (C-index) of 0.682 in Task 2.
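For orientation (not code from the cited paper): the C-index reported for Task 2 measures how well predicted risk scores order patients by recurrence-free survival time. A minimal sketch of Harrell's concordance index, under the assumption that a higher score means higher risk and ignoring ties in survival time, might be:

```python
def concordance_index(times, events, scores):
    """Harrell's C-index (simplified: ignores ties in survival time).

    times:  observed follow-up times
    events: 1 if the event (e.g. recurrence) was observed, 0 if censored
    scores: predicted risk, higher = worse prognosis

    A pair (i, j) is comparable when subject i had an observed event
    strictly before time j; it is concordant when i also got the higher
    risk score. Tied scores count half.
    """
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable if comparable else 0.5
```

Perfect risk ordering yields 1.0, random scoring about 0.5, so the 0.682 above sits between random and perfect discrimination.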
Affiliation(s)
- Vincent Andrearczyk
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Valentin Oreiller
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Rue du Bugnon 46, 1011 Lausanne, Switzerland
- Moamen Abobakr
- The University of Texas MD Anderson Cancer Center, Houston, USA
- Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Rue du Bugnon 46, 1011 Lausanne, Switzerland
- Leo Capriotti
- Center Henri Becquerel, LITIS Laboratory, University of Rouen Normandy, Rouen, France
- Joel Castelli
- Radiotherapy Department, Cancer Institute Eugène Marquis, Rennes, France
- INSERM, U1099, Rennes, France
- University of Rennes 1, LTSI, Rennes, France
- Catherine Cheze Le Rest
- Centre Hospitalier Universitaire de Poitiers (CHUP), Poitiers, France
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Pierre Decazes
- Center Henri Becquerel, LITIS Laboratory, University of Rouen Normandy, Rouen, France
- Ricardo Correia
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Rue du Bugnon 46, 1011 Lausanne, Switzerland
- Dina El-Habashy
- The University of Texas MD Anderson Cancer Center, Houston, USA
- Hesham Elhalawani
- Cleveland Clinic Foundation, Department of Radiation Oncology, Cleveland, OH, USA
- Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Rue du Bugnon 46, 1011 Lausanne, Switzerland
- Yornna Khamis
- The University of Texas MD Anderson Cancer Center, Houston, USA
- Mohamed Naser
- The University of Texas MD Anderson Cancer Center, Houston, USA
- John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Rue du Bugnon 46, 1011 Lausanne, Switzerland
- Su Ruan
- Center Henri Becquerel, LITIS Laboratory, University of Rouen Normandy, Rouen, France
- Olena Tankyevych
- Centre Hospitalier Universitaire de Poitiers (CHUP), Poitiers, France
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, QC, Canada
- Pierre Vera
- Center Henri Becquerel, LITIS Laboratory, University of Rouen Normandy, Rouen, France
- Kareem Wahid
- The University of Texas MD Anderson Cancer Center, Houston, USA
- Habib Zaidi
- Geneva University Hospital, Geneva, Switzerland
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
- Adrien Depeursinge
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Rue du Bugnon 46, 1011 Lausanne, Switzerland
15
Rädsch T, Reinke A, Weru V, Tizabi MD, Schreck N, Kavur AE, Pekdemir B, Roß T, Kopp-Schneider A, Maier-Hein L. Labelling instructions matter in biomedical image analysis. Nat Mach Intell 2023. [DOI: 10.1038/s42256-023-00625-5]
Abstract
Biomedical image analysis algorithm validation depends on high-quality annotation of reference datasets, for which labelling instructions are key. Despite their importance, their optimization remains largely unexplored. Here we present a systematic study of labelling instructions and their impact on annotation quality in the field. Through comprehensive examination of professional practice and international competitions registered at the Medical Image Computing and Computer Assisted Intervention Society, the largest international society in the biomedical imaging field, we uncovered a discrepancy between annotators’ needs for labelling instructions and their current quality and availability. On the basis of an analysis of 14,040 images annotated by 156 annotators from four professional annotation companies and 708 Amazon Mechanical Turk crowdworkers using instructions with different information density levels, we further found that including exemplary images substantially boosts annotation performance compared with text-only descriptions, while solely extending text descriptions does not. Finally, professional annotators consistently outperform Amazon Mechanical Turk crowdworkers. Our study raises awareness of the need for quality standards in biomedical image analysis labelling instructions.
16
Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Trans Med Imaging 2023; 42:697-712. [PMID: 36264729 DOI: 10.1109/tmi.2022.3213983]
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
17
Roß T, Bruno P, Reinke A, Wiesenfarth M, Koeppel L, Full PM, Pekdemir B, Godau P, Trofimova D, Isensee F, Adler TJ, Tran TN, Moccia S, Calimeri F, Müller-Stich BP, Kopp-Schneider A, Maier-Hein L. Beyond rankings: Learning (more) from algorithm validation. Med Image Anal 2023; 86:102765. [PMID: 36965252 DOI: 10.1016/j.media.2023.102765]
Abstract
Challenges have become the state-of-the-art approach to benchmarking image analysis algorithms in a comparative manner. While validation on identical data sets was a great step forward, results analysis is often restricted to pure ranking tables, leaving relevant questions unanswered. Specifically, little effort has been put into systematically investigating what characterizes images on which state-of-the-art algorithms fail. To address this gap in the literature, we (1) present a statistical framework for learning from challenges and (2) instantiate it for the specific task of instrument instance segmentation in laparoscopic videos. Our framework relies on the semantic meta data annotation of images, which serves as the foundation for a general linear mixed model (GLMM) analysis. Based on 51,542 meta data annotations performed on 2,728 images, we applied our approach to the results of the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge 2019 and revealed underexposure, motion and occlusion of instruments as well as the presence of smoke or other objects in the background as major sources of algorithm failure. Our subsequent method development, tailored to the specific remaining issues, yielded a deep learning model with state-of-the-art overall performance and specific strengths in the processing of images on which previous methods tended to fail. Due to the objectivity and generic applicability of our approach, it could become a valuable tool for validation in the field of medical image analysis and beyond.
Affiliation(s)
- Tobias Roß
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Pierangela Bruno
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Annika Reinke
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany
- Manuel Wiesenfarth
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Lisa Koeppel
- Section Clinical Tropical Medicine, Heidelberg University, Heidelberg, Germany
- Peter M Full
- Medical Faculty, Heidelberg University, Heidelberg, Germany; Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Bünyamin Pekdemir
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Patrick Godau
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany
- Darya Trofimova
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; HIP Applied Computer Vision Lab, MIC, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), Heidelberg, Germany; HIP Applied Computer Vision Lab, MIC, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tim J Adler
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Thuy N Tran
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Italy
- Francesco Calimeri
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Lena Maier-Hein
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
18
Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372 DOI: 10.1002/mp.16197]
Abstract
PURPOSE For cancer of the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches focus on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not yet been thoroughly explored. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients who underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. By maintaining the distribution of patient age and gender, and annotation type, the patients were randomly split into training Set 1 (42 cases or 75%) and test Set 2 (14 cases or 25%). Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames follow the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographic information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN.
Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
Affiliation(s)
- Gašper Podobnik
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
19
Wagner M, Müller-Stich BP, Kisilenko A, Tran D, Heger P, Mündermann L, Lubotsky DM, Müller B, Davitashvili T, Capek M, Reinke A, Reid C, Yu T, Vardazaryan A, Nwoye CI, Padoy N, Liu X, Lee EJ, Disch C, Meine H, Xia T, Jia F, Kondo S, Reiter W, Jin Y, Long Y, Jiang M, Dou Q, Heng PA, Twick I, Kirtac K, Hosgor E, Bolmgren JL, Stenzel M, von Siemens B, Zhao L, Ge Z, Sun H, Xie D, Guo M, Liu D, Kenngott HG, Nickel F, Frankenberg MV, Mathis-Ullrich F, Kopp-Schneider A, Maier-Hein L, Speidel S, Bodenstedt S. Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark. Med Image Anal 2023; 86:102770. [PMID: 36889206 DOI: 10.1016/j.media.2023.102770]
Abstract
PURPOSE Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS F1-scores were achieved for phase recognition between 23.9% and 67.7% (n = 9 teams), for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work.
In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
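The F1-scores used to rank the workflow-recognition entries combine precision and recall over framewise labels. A minimal per-phase sketch on toy label sequences (the challenge's exact aggregation over phases and videos is not reproduced here):

```python
def framewise_f1(y_true, y_pred, phase):
    """F1-score for a single surgical phase over framewise labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == phase and p == phase)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != phase and p == phase)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == phase and p != phase)
    if tp == 0:
        return 0.0  # no correctly recognized frames for this phase
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

The per-phase scores would then typically be averaged over phases and videos to give a single team-level number.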
20
Thomas C, Byra M, Marti R, Yap MH, Zwiggelaar R. BUS-Set: A benchmark for quantitative evaluation of breast ultrasound segmentation networks with public datasets. Med Phys 2023; 50:3223-3243. [PMID: 36794706 DOI: 10.1002/mp.16287]
Abstract
PURPOSE BUS-Set is a reproducible benchmark for breast ultrasound (BUS) lesion segmentation, comprising publicly available images, with the aim of improving future comparisons between machine learning models within the field of BUS. METHOD Four publicly available datasets were compiled, creating an overall set of 1154 BUS images from five different scanner types. Full dataset details have been provided, including clinical labels and detailed annotations. Furthermore, nine state-of-the-art deep learning architectures were selected to form the initial benchmark segmentation result, tested using five-fold cross-validation and MANOVA/ANOVA with the Tukey statistical significance test at a threshold of 0.01. Additional evaluation of these architectures was conducted, exploring possible training bias and the effects of lesion size and type. RESULTS Of the nine state-of-the-art benchmarked architectures, Mask R-CNN obtained the highest overall results, with the following mean metric scores: Dice score of 0.851, intersection over union of 0.786, and pixel accuracy of 0.975. MANOVA/ANOVA and Tukey test results showed Mask R-CNN to be statistically significantly better than all other benchmarked models (p < 0.01). Moreover, Mask R-CNN achieved the highest mean Dice score of 0.839 on an additional 16-image dataset that contained multiple lesions per image. Further analysis of regions of interest was conducted, assessing Hamming distance, depth-to-width ratio (DWR), circularity, and elongation, which showed that Mask R-CNN's segmentations maintained the most morphological features, with correlation coefficients of 0.888, 0.532, and 0.876 for DWR, circularity, and elongation, respectively. Based on the correlation coefficients, statistical tests indicated that Mask R-CNN was only significantly different from Sk-U-Net. CONCLUSIONS BUS-Set is a fully reproducible benchmark for BUS lesion segmentation obtained through the use of public datasets and GitHub. Of the state-of-the-art convolutional neural network (CNN)-based architectures, Mask R-CNN achieved the highest performance overall; further analysis indicated that a training bias may have occurred due to the lesion size variation in the dataset. All dataset and architecture details are available at GitHub: https://github.com/corcor27/BUS-Set, which allows for a fully reproducible benchmark.
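The three reported mask metrics all derive from the same confusion counts of a binary mask comparison, with Dice = 2·IoU/(1+IoU). A minimal sketch on hypothetical flattened 0/1 masks (not the BUS-Set evaluation code):

```python
def mask_metrics(gt, pred):
    """Dice score, intersection over union, and pixel accuracy
    for two flat sequences of 0/1 pixel labels."""
    pairs = list(zip(gt, pred))
    tp = sum(1 for g, p in pairs if g == 1 and p == 1)
    fp = sum(1 for g, p in pairs if g == 0 and p == 1)
    fn = sum(1 for g, p in pairs if g == 1 and p == 0)
    tn = sum(1 for g, p in pairs if g == 0 and p == 0)
    union = tp + fp + fn
    # Empty ground truth and empty prediction: treat as a perfect match.
    dice = 2 * tp / (2 * tp + fp + fn) if union else 1.0
    iou = tp / union if union else 1.0
    accuracy = (tp + tn) / len(pairs)
    return dice, iou, accuracy
```

In practice these are computed per image and then averaged over the test set, which is how the mean scores above are obtained.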
21
Bilic P, Christ P, Li HB, Vorontsov E, Ben-Cohen A, Kaissis G, Szeskin A, Jacobs C, Mamani GEH, Chartrand G, Lohöfer F, Holch JW, Sommer W, Hofmann F, Hostettler A, Lev-Cohain N, Drozdzal M, Amitai MM, Vivanti R, Sosna J, Ezhov I, Sekuboyina A, Navarro F, Kofler F, Paetzold JC, Shit S, Hu X, Lipková J, Rempfler M, Piraud M, Kirschke J, Wiestler B, Zhang Z, Hülsemeyer C, Beetz M, Ettlinger F, Antonelli M, Bae W, Bellver M, Bi L, Chen H, Chlebus G, Dam EB, Dou Q, Fu CW, Georgescu B, Giró-I-Nieto X, Gruen F, Han X, Heng PA, Hesser J, Moltz JH, Igel C, Isensee F, Jäger P, Jia F, Kaluva KC, Khened M, Kim I, Kim JH, Kim S, Kohl S, Konopczynski T, Kori A, Krishnamurthi G, Li F, Li H, Li J, Li X, Lowengrub J, Ma J, Maier-Hein K, Maninis KK, Meine H, Merhof D, Pai A, Perslev M, Petersen J, Pont-Tuset J, Qi J, Qi X, Rippel O, Roth K, Sarasua I, Schenk A, Shen Z, Torres J, Wachinger C, Wang C, Weninger L, Wu J, Xu D, Yang X, Yu SCH, Yuan Y, Yue M, Zhang L, Cardoso J, Bakas S, Braren R, Heinemann V, Pal C, Tang A, Kadoury S, Soler L, van Ginneken B, Greenspan H, Joskowicz L, Menze B. The Liver Tumor Segmentation Benchmark (LiTS). Med Image Anal 2023; 84:102680. [PMID: 36481607 PMCID: PMC10631490 DOI: 10.1016/j.media.2022.102680]
Abstract
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes, appearances, and lesion-to-background contrast levels (hyper-/hypo-dense); it was created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis of liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
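Lesion-wise recall differs from the Dice score in that each ground-truth lesion counts once, however small, which is why a high-Dice method can still miss many lesions. A sketch assuming a hypothetical 50% voxel-overlap detection criterion (the benchmark's actual lesion-matching rule may differ):

```python
def lesionwise_recall(lesions, pred_voxels, min_overlap=0.5):
    """Fraction of ground-truth lesions detected by a predicted segmentation.

    lesions: list of sets of voxel indices, one set per ground-truth lesion.
    pred_voxels: set of voxel indices the algorithm labeled as tumor.
    A lesion counts as detected when at least `min_overlap` of its voxels
    are predicted (an assumed criterion used here for illustration only).
    """
    detected = sum(
        1 for lesion in lesions
        if len(lesion & pred_voxels) / len(lesion) >= min_overlap
    )
    return detected / len(lesions)
```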
22
Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023; 103:102155. [PMID: 36525770 DOI: 10.1016/j.compmedimag.2022.102155]
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and the relevance of the evaluation process chosen. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make publicly available the evaluation codes, test set annotations and participants' predictions. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
23
Hirvasniemi J, Runhaar J, van der Heijden RA, Zokaeinikoo M, Yang M, Li X, Tan J, Rajamohan HR, Zhou Y, Deniz CM, Caliva F, Iriondo C, Lee JJ, Liu F, Martinez AM, Namiri N, Pedoia V, Panfilov E, Bayramoglu N, Nguyen HH, Nieminen MT, Saarakkala S, Tiulpin A, Lin E, Li A, Li V, Dam EB, Chaudhari AS, Kijowski R, Bierma-Zeinstra S, Oei EHG, Klein S. The KNee OsteoArthritis Prediction (KNOAP2020) challenge: An image analysis challenge to predict incident symptomatic radiographic knee osteoarthritis from MRI and X-ray images. Osteoarthritis Cartilage 2023; 31:115-125. [PMID: 36243308 DOI: 10.1016/j.joca.2022.10.001]
Abstract
OBJECTIVES The KNee OsteoArthritis Prediction (KNOAP2020) challenge was organized to objectively compare methods for the prediction of incident symptomatic radiographic knee osteoarthritis within 78 months on a test set with blinded ground truth. DESIGN The challenge participants were free to use any available data sources to train their models. A test set of 423 knees from the Prevention of Knee Osteoarthritis in Overweight Females (PROOF) study consisting of magnetic resonance imaging (MRI) and X-ray image data along with clinical risk factors at baseline was made available to all challenge participants. The ground truth outcomes, i.e., which knees developed incident symptomatic radiographic knee osteoarthritis (according to the combined ACR criteria) within 78 months, were not provided to the participants. To assess the performance of the submitted models, we used the area under the receiver operating characteristic curve (ROCAUC) and balanced accuracy (BACC). RESULTS Seven teams submitted 23 entries in total. A majority of the algorithms were trained on data from the Osteoarthritis Initiative. The model with the highest ROCAUC (0.64 (95% confidence interval (CI): 0.57-0.70)) used deep learning to extract information from X-ray images combined with clinical variables. The model with the highest BACC (0.59 (95% CI: 0.52-0.65)) ensembled three different models that used automatically extracted X-ray and MRI features along with clinical variables. CONCLUSION The KNOAP2020 challenge established a benchmark for predicting incident symptomatic radiographic knee osteoarthritis. Accurate prediction of incident symptomatic radiographic knee osteoarthritis is a complex and still unsolved problem requiring additional investigation.
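The two ranking metrics can be computed without library support: ROCAUC equals the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative one, and balanced accuracy averages sensitivity and specificity. A small sketch with made-up labels and scores (not the challenge's evaluation code, which also reported confidence intervals):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a positive case outscores a
    negative one, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def balanced_accuracy(labels, preds):
    """Mean of sensitivity and specificity for binary predictions."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    sensitivity = tp / sum(1 for y in labels if y == 1)
    specificity = tn / sum(1 for y in labels if y == 0)
    return (sensitivity + specificity) / 2
```

Balanced accuracy is the natural companion metric here because incident osteoarthritis is a rare outcome, so plain accuracy would reward always predicting "no incident OA".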
Affiliation(s)
- J Hirvasniemi
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands.
- J Runhaar
- Department of General Practice, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- R A van der Heijden
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- M Zokaeinikoo
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, USA
- M Yang
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, USA
- X Li
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, USA
- J Tan
- Department of Radiology, New York University Langone Health, New York, USA
- H R Rajamohan
- Department of Radiology, New York University Langone Health, New York, USA
- Y Zhou
- Department of Radiology, New York University Langone Health, New York, USA
- C M Deniz
- Department of Radiology, New York University Langone Health, New York, USA
- F Caliva
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- C Iriondo
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- J J Lee
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- F Liu
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- A M Martinez
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- N Namiri
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- V Pedoia
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- E Panfilov
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- N Bayramoglu
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- H H Nguyen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- M T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- S Saarakkala
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- A Tiulpin
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- E Lin
- Akousist Co., Ltd., Taoyuan City, Taiwan
- A Li
- Akousist Co., Ltd., Taoyuan City, Taiwan
- V Li
- Akousist Co., Ltd., Taoyuan City, Taiwan
- E B Dam
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- A S Chaudhari
- Department of Radiology, Stanford University, Stanford, USA
- R Kijowski
- Department of Radiology, New York University Langone Health, New York, USA
- S Bierma-Zeinstra
- Department of General Practice, Erasmus MC University Medical Center, Rotterdam, the Netherlands; Department of Orthopedics & Sport Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- E H G Oei
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- S Klein
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
24
Dorent R, Kujawa A, Ivory M, Bakas S, Rieke N, Joutard S, Glocker B, Cardoso J, Modat M, Batmanghelich K, Belkov A, Calisto MB, Choi JW, Dawant BM, Dong H, Escalera S, Fan Y, Hansen L, Heinrich MP, Joshi S, Kashtanova V, Kim HG, Kondo S, Kruse CN, Lai-Yuen SK, Li H, Liu H, Ly B, Oguz I, Shin H, Shirokikh B, Su Z, Wang G, Wu J, Xu Y, Yao K, Zhang L, Ourselin S, Shapey J, Vercauteren T. CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation. Med Image Anal 2023; 83:102628. [PMID: 36283200 DOI: 10.1016/j.media.2022.102628] [Received: 12/20/2021] [Revised: 06/17/2022] [Accepted: 09/10/2022] [Indexed: 02/04/2023]
Abstract
Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). 
All top-performing methods used an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained on these generated images together with the manual annotations provided for the source images.
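For reference, the Dice score used to rank submissions is the standard overlap measure between a predicted and a reference binary mask. A minimal NumPy sketch (the toy masks are invented for illustration, not challenge data):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 4x4 masks: the prediction misses one foreground voxel and adds one
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred  = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice(pred, truth))  # 2*2 / (3 + 3)
```

The same formula extends unchanged to 3D volumes; per-structure scores (e.g. VS and cochleas) are computed on the corresponding label masks separately.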
Affiliation(s)
- Reuben Dorent
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom.
- Aaron Kujawa
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Marina Ivory
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Samuel Joutard
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Ben Glocker
- Department of Computing, Imperial College London, London, United Kingdom
- Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Arseniy Belkov
- Moscow Institute of Physics and Technology, Moscow, Russia
- Jae Won Choi
- Department of Radiology, Armed Forces Yangju Hospital, Yangju, Republic of Korea
- Hexin Dong
- Center for Data Science, Peking University, Beijing, China
- Sergio Escalera
- Artificial Intelligence in Medicine Lab (BCN-AIM) and Human Behavior Analysis Lab (HuPBA), Universitat de Barcelona, Barcelona, Spain
- Yubo Fan
- Vanderbilt University, Nashville, USA
- Lasse Hansen
- Institute of Medical Informatics, Universität zu Lübeck, Germany
- Smriti Joshi
- Artificial Intelligence in Medicine Lab (BCN-AIM) and Human Behavior Analysis Lab (HuPBA), Universitat de Barcelona, Barcelona, Spain
- Hyeon Gyu Kim
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Hao Li
- Vanderbilt University, Nashville, USA
- Han Liu
- Vanderbilt University, Nashville, USA
- Buntheng Ly
- Inria, Université Côte d'Azur, Sophia Antipolis, France
- Ipek Oguz
- Vanderbilt University, Nashville, USA
- Hyungseob Shin
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Boris Shirokikh
- Skolkovo Institute of Science and Technology, Moscow, Russia; Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Zixian Su
- University of Liverpool, Liverpool, United Kingdom; School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jianghao Wu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yanwu Xu
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
- Kai Yao
- University of Liverpool, Liverpool, United Kingdom; School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
- Li Zhang
- Center for Data Science, Peking University, Beijing, China
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Jonathan Shapey
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom; Department of Neurosurgery, King's College Hospital, London, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
25
Ma J, Zhang Y, Gu S, An X, Wang Z, Ge C, Wang C, Zhang F, Wang Y, Xu Y, Gou S, Thaler F, Payer C, Štern D, Henderson EGA, McSweeney DM, Green A, Jackson P, McIntosh L, Nguyen QC, Qayyum A, Conze PH, Huang Z, Zhou Z, Fan DP, Xiong H, Dong G, Zhu Q, He J, Yang X. Fast and Low-GPU-memory abdomen CT organ segmentation: The FLARE challenge. Med Image Anal 2022; 82:102616. [PMID: 36179380 DOI: 10.1016/j.media.2022.102616] [Received: 01/13/2022] [Revised: 06/26/2022] [Accepted: 09/02/2022] [Indexed: 11/27/2022]
Abstract
Automatic segmentation of abdominal organs in CT scans plays an important role in clinical practice. However, most existing benchmarks and datasets focus only on segmentation accuracy, while model efficiency and accuracy on testing cases from different medical centers have not been evaluated. To comprehensively benchmark abdominal organ segmentation methods, we organized the first Fast and Low GPU memory Abdominal oRgan sEgmentation (FLARE) challenge, in which segmentation methods were encouraged to simultaneously achieve high accuracy on testing cases from different medical centers, fast inference speed, and low GPU memory consumption. The winning method surpassed the existing state-of-the-art method, achieving 19× faster inference and 60% lower GPU memory consumption with comparable accuracy. We provide a summary of the top methods, make their code and Docker containers publicly available, and give practical suggestions on building accurate and efficient abdominal organ segmentation models. The FLARE challenge remains open for future submissions through a live platform for benchmarking further methodology developments at https://flare.grand-challenge.org/.
Affiliation(s)
- Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, 210094, Nanjing, China
- Yao Zhang
- Institute of Computing Technology, Chinese Academy of Sciences and the University of Chinese Academy of Sciences, 100019, Beijing, China
- Song Gu
- Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Ltd., 211113, Nanjing, China
- Xingle An
- Infervision Technology Co. Ltd., 100020, Beijing, China
- Zhihe Wang
- Shenzhen Haichuang Medical Co., Ltd., 518049, Shenzhen, China
- Cheng Ge
- Institute of Bioinformatics and Medical Engineering, Jiangsu University of Technology, 213001, Changzhou, China
- Congcong Wang
- School of Computer Science and Engineering, Tianjin University of Technology, 300384, Tianjin, China; Engineering Research Center of Learning-Based Intelligent System, Ministry of Education, 300384, Tianjin, China
- Fan Zhang
- Radiological Algorithm, Fosun Aitrox Information Technology Co., Ltd., 200033, Shanghai, China
- Yu Wang
- Radiological Algorithm, Fosun Aitrox Information Technology Co., Ltd., 200033, Shanghai, China
- Yinan Xu
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, 710071, Shaanxi, China
- Shuiping Gou
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, 710071, Shaanxi, China
- Franz Thaler
- Gottfried Schatz Research Center: Biophysics, Medical University of Graz, 8010, Graz, Austria; Institute of Computer Graphics and Vision, Graz University of Technology, 8010, Graz, Austria
- Christian Payer
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010, Graz, Austria
- Darko Štern
- Gottfried Schatz Research Center: Biophysics, Medical University of Graz, 8010, Graz, Austria
- Edward G A Henderson
- Division of Cancer Sciences, The University of Manchester, M139PL, Manchester, UK; Radiotherapy Related Research, The Christie NHS Foundation Trust, M139PL, Manchester, UK
- Dónal M McSweeney
- Division of Cancer Sciences, The University of Manchester, M139PL, Manchester, UK; Radiotherapy Related Research, The Christie NHS Foundation Trust, M139PL, Manchester, UK
- Andrew Green
- Division of Cancer Sciences, The University of Manchester, M139PL, Manchester, UK; Radiotherapy Related Research, The Christie NHS Foundation Trust, M139PL, Manchester, UK
- Price Jackson
- Peter MacCallum Cancer Centre, 3000, Melbourne, Australia
- Quoc-Cuong Nguyen
- University of Information Technology, VNU-HCM, 700000, Ho Chi Minh City, Viet Nam
- Abdul Qayyum
- Brest National School of Engineering, UMR CNRS 6285 LabSTICC, 29280, Brest, France
- Ziyan Huang
- Institute of Medical Robotics, Shanghai Jiao Tong University, 200240, Shanghai, China
- Ziqi Zhou
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, 518000, Shenzhen, China
- Deng-Ping Fan
- College of Computer Science, Nankai University, 300071, Tianjin, China; Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Huan Xiong
- Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates; Harbin Institute of Technology, 150001, Harbin, China
- Guoqiang Dong
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, 210008, Nanjing, China; Department of Interventional Radiology, The Second Affiliated Hospital of Bengbu Medical College, 233017, Bengbu, China
- Qiongjie Zhu
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, 210008, Nanjing, China; Department of Radiology, Shidong Hospital, 200438, Shanghai, China
- Jian He
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, 210008, Nanjing, China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, 210093, Nanjing, China.
26
Antonelli M, Reinke A, Bakas S, Farahani K, Kopp-Schneider A, Landman BA, Litjens G, Menze B, Ronneberger O, Summers RM, van Ginneken B, Bilello M, Bilic P, Christ PF, Do RKG, Gollub MJ, Heckers SH, Huisman H, Jarnagin WR, McHugo MK, Napel S, Pernicka JSG, Rhode K, Tobon-Gomez C, Vorontsov E, Meakin JA, Ourselin S, Wiesenfarth M, Arbeláez P, Bae B, Chen S, Daza L, Feng J, He B, Isensee F, Ji Y, Jia F, Kim I, Maier-Hein K, Merhof D, Pai A, Park B, Perslev M, Rezaiifar R, Rippel O, Sarasua I, Shen W, Son J, Wachinger C, Wang L, Wang Y, Xia Y, Xu D, Xu Z, Zheng Y, Simpson AL, Maier-Hein L, Cardoso MJ. The Medical Segmentation Decathlon. Nat Commun 2022; 13:4128. [PMID: 35840566 PMCID: PMC9287542 DOI: 10.1038/s41467-022-30695-9] [Received: 08/16/2021] [Accepted: 05/13/2022] [Indexed: 02/05/2023]
Abstract
International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete across a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. The MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems over the following two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized to scientists who are not versed in AI model training.
Affiliation(s)
- Michela Antonelli
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
- Annika Reinke
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany; HI Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, University of Heidelberg, Heidelberg, Germany
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute (NIH), Bethesda, MD, USA
- Bennett A Landman
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Geert Litjens
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands
- Bjoern Menze
- Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center (NIH), Bethesda, MD, USA
- Bram van Ginneken
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands
- Michel Bilello
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Patrick Bilic
- Department of Informatics, Technische Universität München, München, Germany
- Patrick F Christ
- Department of Informatics, Technische Universität München, München, Germany
- Richard K G Do
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Marc J Gollub
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Stephan H Heckers
- Department of Psychiatry & Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Henkjan Huisman
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands
- William R Jarnagin
- Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Maureen K McHugo
- Department of Psychiatry & Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Sandy Napel
- Department of Radiology, Stanford University, Stanford, CA, USA
- Kawal Rhode
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Catalina Tobon-Gomez
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Eugene Vorontsov
- Department of Computer Science and Software Engineering, École Polytechnique de Montréal, Montréal, QC, Canada
- James A Meakin
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands
- Sebastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Manuel Wiesenfarth
- Div. Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Laura Daza
- Universidad de los Andes, Bogota, Colombia
- Jianjiang Feng
- Department of Automation, Tsinghua University, Beijing, China
- Baochun He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Fabian Isensee
- HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Yuanfeng Ji
- Department of Computer Science, Xiamen University, Xiamen, China
- Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ildoo Kim
- Kakao Brain, Seongnam-si, Republic of Korea
- Klaus Maier-Hein
- Cerebriu A/S, Copenhagen, Denmark; Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Akshay Pai
- Cerebriu A/S, Copenhagen, Denmark; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Mathias Perslev
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Oliver Rippel
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany
- Ignacio Sarasua
- Lab for Artificial Intelligence in Medical Imaging (AI-Med), Department of Child and Adolescent Psychiatry, University Hospital, LMU München, Germany
- Wei Shen
- MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
- Christian Wachinger
- Lab for Artificial Intelligence in Medical Imaging (AI-Med), Department of Child and Adolescent Psychiatry, University Hospital, LMU München, Germany
- Liansheng Wang
- Department of Computer Science, Xiamen University, Xiamen, China
- Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Yingda Xia
- Johns Hopkins University, Baltimore, MD, USA
- Zhanwei Xu
- Department of Automation, Tsinghua University, Beijing, China
- Amber L Simpson
- School of Computing/Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
- Lena Maier-Hein
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany; HI Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, University of Heidelberg, Heidelberg, Germany; Medical Faculty, University of Heidelberg, Heidelberg, Germany
- M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
27
Egger J, Gsaxner C, Pepe A, Pomykala KL, Jonske F, Kurz M, Li J, Kleesiek J. Medical deep learning-A systematic meta-review. Comput Methods Programs Biomed 2022; 221:106874. [PMID: 35588660 DOI: 10.1016/j.cmpb.2022.106874] [Received: 01/05/2021] [Revised: 04/22/2022] [Accepted: 05/10/2022] [Indexed: 05/22/2023]
Abstract
Deep learning has remarkably impacted several different scientific disciplines over the last few years. In image processing and analysis, for example, deep learning algorithms have been able to outperform other cutting-edge methods. Deep learning has also delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even instances where deep learning has outperformed humans, for example in object recognition and gaming. Deep learning is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data is not only collected in clinical centers, like hospitals and private practices, but also by mobile healthcare apps and online websites. The abundance of collected patient data and the recent growth of the deep learning field have resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed already returned over 11,000 results for the search term 'deep learning', and around 90% of these publications were from the preceding three years. However, even though PubMed is the largest search engine in the medical field, it does not cover all medical-related publications. Hence, a complete overview of the field of 'medical deep learning' is almost impossible to obtain, and acquiring a full overview of its sub-fields is becoming increasingly difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany.
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Kelsey L Pomykala
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Frederic Jonske
- Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Manuel Kurz
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
28
Fathi Kazerooni A, Saxena S, Toorens E, Tu D, Bashyam V, Akbari H, Mamourian E, Sako C, Koumenis C, Verginadis I, Verma R, Shinohara RT, Desai AS, Lustig RA, Brem S, Mohan S, Bagley SJ, Ganguly T, O'Rourke DM, Bakas S, Nasrallah MP, Davatzikos C. Clinical measures, radiomics, and genomics offer synergistic value in AI-based prediction of overall survival in patients with glioblastoma. Sci Rep 2022; 12:8784. [PMID: 35610333 DOI: 10.1038/s41598-022-12699-z] [Received: 09/15/2021] [Accepted: 05/06/2022] [Indexed: 02/05/2023]
Abstract
Multi-omic data, i.e., clinical measures, radiomic, and genetic data, capture multi-faceted tumor characteristics, contributing to a comprehensive patient risk assessment. Here, we investigate the additive value and independent reproducibility of integrated diagnostics in prediction of overall survival (OS) in isocitrate dehydrogenase (IDH)-wildtype GBM patients, by combining conventional and deep learning methods. Conventional radiomics and deep learning features were extracted from pre-operative multi-parametric MRI of 516 GBM patients. Support vector machine (SVM) classifiers were trained on the radiomic features in the discovery cohort (n = 404) to categorize patient groups of high-risk (OS < 6 months) vs all, and low-risk (OS ≥ 18 months) vs all. The trained radiomic model was independently tested in the replication cohort (n = 112) and a patient-wise survival prediction index was produced. Multivariate Cox-PH models were generated for the replication cohort, first based on clinical measures solely, and then by layering on radiomics and molecular information. Evaluation of the high-risk and low-risk classifiers in the discovery/replication cohorts revealed area under the ROC curves (AUCs) of 0.78 (95% CI 0.70-0.85)/0.75 (95% CI 0.64-0.79) and 0.75 (95% CI 0.65-0.84)/0.63 (95% CI 0.52-0.71), respectively. Cox-PH modeling showed a concordance index of 0.65 (95% CI 0.6-0.7) for clinical data improving to 0.75 (95% CI 0.72-0.79) for the combination of all omics. This study signifies the value of integrated diagnostics for improved prediction of OS in GBM.
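The concordance index used above measures how well predicted risk ranks patients by survival time. A simplified sketch of Harrell's C follows (toy data invented for illustration; production implementations, e.g. in the lifelines library, handle ties and censoring more carefully):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable patient pairs, the fraction in
    which the patient with the shorter survival time was assigned the
    higher risk. A pair (i, j) with times[i] < times[j] is comparable only
    when the earlier patient's event was observed (events[i] == 1)."""
    concordant, comparable = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5  # tied risk scores count half
    return concordant / comparable

# Toy cohort: survival in months, event indicator (0 = censored), risk score
times  = [5, 10, 12, 20]
events = [1, 1, 0, 1]
risks  = [0.9, 0.7, 0.8, 0.2]
print(concordance_index(times, events, risks))  # 4 of 5 comparable pairs concordant
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported 0.65 (clinical only) vs 0.75 (all omics) in context.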
29
Abstract
The potential to use quantitative image analysis and artificial intelligence is one of the driving forces behind digital pathology. However, despite novel image analysis methods for pathology being described across many publications, few become widely adopted and many are not applied in more than a single study. The explanation is often straightforward: software implementing the method is simply not available, or is too complex, incomplete, or dataset‐dependent for others to use. The result is a disconnect between what seems already possible in digital pathology based upon the literature, and what actually is possible for anyone wishing to apply it using currently available software. This review begins by introducing the main approaches and techniques involved in analysing pathology images. I then examine the practical challenges inherent in taking algorithms beyond proof‐of‐concept, from both a user and developer perspective. I describe the need for a collaborative and multidisciplinary approach to developing and validating meaningful new algorithms, and argue that openness, implementation, and usability deserve more attention among digital pathology researchers. The review ends with a discussion about how digital pathology could benefit from interacting with and learning from the wider bioimage analysis community, particularly with regard to sharing data, software, and ideas. © 2022 The Author. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Peter Bankhead
- Edinburgh Pathology, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK.,Centre for Genomic & Experimental Medicine, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK.,Cancer Research UK Edinburgh Centre, Institute of Genetics and Cancer, University of Edinburgh, Edinburgh, UK
30
Affiliation(s)
- John S H Baxter
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Universite de Rennes 1, Rennes, France
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image (LTSI - INSERM UMR 1099), Universite de Rennes 1, Rennes, France
31
Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S. Surgical data science - from concepts toward clinical translation. Med Image Anal 2022; 76:102306. [PMID: 34879287 PMCID: PMC9135051 DOI: 10.1016/j.media.2021.102306]
Abstract
Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany.
- Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Duygu Sarikaya
- Department of Computer Engineering, Faculty of Engineering, Gazi University, Ankara, Turkey; LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Anand Malpani
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Hubertus Feussner
- Department of Surgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Stamatia Giannarou
- The Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom
- Pietro Mascagni
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Adrian Park
- Department of Surgery, Anne Arundel Health System, Annapolis, Maryland, USA; Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Carla Pugh
- Department of Surgery, Stanford University School of Medicine, Stanford, California, USA
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Swaroop S Vedula
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA
- Kevin Cleary
- The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, D.C., USA
- Germain Forestier
- L'Institut de Recherche en Informatique, Mathématiques, Automatique et Signal (IRIMAS), University of Haute-Alsace, Mulhouse, France; Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Bernard Gibaud
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Teodor Grantcharov
- University of Toronto, Toronto, Ontario, Canada; The Li Ka Shing Knowledge Institute of St. Michael's Hospital, Toronto, Ontario, Canada
- Makoto Hashizume
- Kyushu University, Fukuoka, Japan; Kitakyushu Koga Hospital, Fukuoka, Japan
- Doreen Heckmann-Nötzel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tobias Roß
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Russell H Taylor
- Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Minu D Tizabi
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Gregory D Hager
- The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, The Johns Hopkins University, Baltimore, Maryland, USA
- Thomas Neumuth
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, Leipzig, Germany
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, France; IHU Strasbourg, Strasbourg, France
- Justin Collins
- Division of Surgery and Interventional Science, University College London, London, United Kingdom
- Ines Gockel
- Department of Visceral, Transplant, Thoracic and Vascular Surgery, Leipzig University Hospital, Leipzig, Germany
- Jan Goedeke
- Pediatric Surgery, Dr. von Hauner Children's Hospital, Ludwig-Maximilians-University, Munich, Germany
- Daniel A Hashimoto
- University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, Ohio, USA; Surgical AI and Innovation Laboratory, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Luc Joyeux
- My FetUZ Fetal Research Center, Department of Development and Regeneration, Biomedical Sciences, KU Leuven, Leuven, Belgium; Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium; Department of Obstetrics and Gynecology, Division Woman and Child, Fetal Medicine Unit, University Hospitals Leuven, Leuven, Belgium; Michael E. DeBakey Department of Surgery, Texas Children's Hospital and Baylor College of Medicine, Houston, Texas, USA
- Kyle Lam
- Department of Surgery and Cancer, Imperial College London, London, United Kingdom
- Daniel R Leff
- Department of BioSurgery and Surgical Technology, Imperial College London, London, United Kingdom; Hamlyn Centre for Robotic Surgery, Imperial College London, London, United Kingdom; Breast Unit, Imperial Healthcare NHS Trust, London, United Kingdom
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Ontario, Canada
- Hani J Marcus
- National Hospital for Neurology and Neurosurgery, and UCL Queen Square Institute of Neurology, London, United Kingdom
- Ozanan Meireles
- Massachusetts General Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Alexander Seitel
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dogu Teber
- Department of Urology, City Hospital Karlsruhe, Karlsruhe, Germany
- Frank Ückert
- Institute for Applied Medical Informatics, Hamburg University Hospital, Hamburg, Germany
- Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Pierre Jannin
- LTSI, Inserm UMR 1099, University of Rennes 1, Rennes, France
- Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT/UCC) Dresden, Dresden, Germany; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany
32
van den Oever LB, van Veldhuizen WA, Cornelissen LJ, Spoor DS, Willems TP, Kramer G, Stigter T, Rook M, Crijns APG, Oudkerk M, Veldhuis RNJ, de Bock GH, van Ooijen PMA. Qualitative Evaluation of Common Quantitative Metrics for Clinical Acceptance of Automatic Segmentation: a Case Study on Heart Contouring from CT Images by Deep Learning Algorithms. J Digit Imaging 2022; 35:240-247. [PMID: 35083620 PMCID: PMC8921356 DOI: 10.1007/s10278-021-00573-9]
Abstract
Organs-at-risk contouring is time consuming and labour intensive. Automation by deep learning algorithms would decrease the workload of radiotherapists and technicians considerably. However, the variety of metrics used for the evaluation of deep learning algorithms make the results of many papers difficult to interpret and compare. In this paper, a qualitative evaluation is done on five established metrics to assess whether their values correlate with clinical usability. A total of 377 CT volumes with heart delineations were randomly selected for training and evaluation. A deep learning algorithm was used to predict the contours of the heart. A total of 101 CT slices from the validation set with the predicted contours were shown to three experienced radiologists. They examined each slice independently whether they would accept or adjust the prediction and if there were (small) mistakes. For each slice, the scores of this qualitative evaluation were then compared with the Sørensen-Dice coefficient (DC), the Hausdorff distance (HD), pixel-wise accuracy, sensitivity and precision. The statistical analysis of the qualitative evaluation and metrics showed a significant correlation. Of the slices with a DC over 0.96 (N = 20) or a 95% HD under 5 voxels (N = 25), no slices were rejected by the readers. Contours with lower DC or higher HD were seen in both rejected and accepted contours. Qualitative evaluation shows that it is difficult to use common quantification metrics as indicator for use in clinic. We might need to change the reporting of quantitative metrics to better reflect clinical acceptance.
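The two thresholds the readers' acceptance was compared against above (Dice coefficient over 0.96, 95th-percentile Hausdorff distance under 5 voxels) can be sketched on sets of foreground pixel coordinates. This is an illustrative implementation, not the study's evaluation code; the helper names are mine and the percentile indexing is a simple approximation:

```python
import math

def dice(mask_a, mask_b):
    """Sørensen-Dice coefficient between two sets of foreground pixels."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

def hd95(mask_a, mask_b):
    """Approximate 95th-percentile symmetric Hausdorff distance between
    two point sets of pixel coordinates."""
    def directed(src, dst):
        # distance from each point in src to its nearest point in dst
        return [min(math.dist(p, q) for q in dst) for p in src]
    d = sorted(directed(list(mask_a), list(mask_b)) +
               directed(list(mask_b), list(mask_a)))
    return d[min(len(d) - 1, int(0.95 * (len(d) - 1)))]
```

In practice these metrics are computed on full 2D/3D label volumes with optimised libraries; the point-set form above only shows the definitions.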
Affiliation(s)
- L B van den Oever
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ, Groningen, The Netherlands
- W A van Veldhuizen
- Department of Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ, Groningen, The Netherlands
- L J Cornelissen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ, Groningen, The Netherlands
- D S Spoor
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ, Groningen, The Netherlands
- T P Willems
- Department of Radiology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ, Groningen, The Netherlands
- G Kramer
- Department of Radiology, Martini Hospital, Van Swietenplein 1, 9728 NT, Groningen, The Netherlands
- T Stigter
- Department of Radiology, Martini Hospital, Van Swietenplein 1, 9728 NT, Groningen, The Netherlands
- M Rook
- Department of Radiology, Martini Hospital, Van Swietenplein 1, 9728 NT, Groningen, The Netherlands
- A P G Crijns
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ, Groningen, The Netherlands
- M Oudkerk
- Faculty of Medical Sciences, University of Groningen, Groningen, The Netherlands
- R N J Veldhuis
- Department of Electrical Engineering, Computer Science and Mathematics, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- G H de Bock
- Department of Epidemiology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ, Groningen, The Netherlands
- P M A van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713GZ, Groningen, The Netherlands.
33
Oreiller V, Andrearczyk V, Jreige M, Boughdad S, Elhalawani H, Castelli J, Vallières M, Zhu S, Xie J, Peng Y, Iantsen A, Hatt M, Yuan Y, Ma J, Yang X, Rao C, Pai S, Ghimire K, Feng X, Naser MA, Fuller CD, Yousefirizi F, Rahmim A, Chen H, Wang L, Prior JO, Depeursinge A. Head and neck tumor segmentation in PET/CT: The HECKTOR challenge. Med Image Anal 2021; 77:102336. [PMID: 35016077 DOI: 10.1016/j.media.2021.102336]
Abstract
This paper relates the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. This challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task is the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Score Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered to the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, showing a large improvement over our proposed baseline method and the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods proved to successfully leverage the wealth of metabolic and structural properties of combined PET and CT modalities, significantly outperforming human inter-observer agreement level, semi-automatic thresholding based on PET images as well as other single modality-based methods. This promising performance is one step forward towards large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
Affiliation(s)
- Valentin Oreiller
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland.
- Vincent Andrearczyk
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Joel Castelli
- Radiotherapy Department, Cancer Institute Eugène Marquis, Rennes, France
- Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Simeng Zhu
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Juanying Xie
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
- Ying Peng
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, Jiangsu, China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, Jiangsu, China
- Chinmay Rao
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Suraj Pai
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Xue Feng
- Carina Medical, Lexington, KY, 40513, USA; Department of Biomedical Engineering, University of Virginia, Charlottesville VA 22903, USA
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver BC, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver BC, Canada
- Huai Chen
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- Lisheng Wang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Adrien Depeursinge
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
34
Linguraru MG, Maier-Hein L, Summers RM, Kahn CE. RSNA-MICCAI Panel Discussion: 2. Leveraging the Full Potential of AI-Radiologists and Data Scientists Working Together. Radiol Artif Intell 2021; 3:e210248. [PMID: 34870225 DOI: 10.1148/ryai.2021210248]
Abstract
In March 2021, the Radiological Society of North America hosted a virtual panel discussion with members of the Medical Image Computing and Computer Assisted Intervention Society. Both organizations share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel addressed how radiologists and data scientists can collaborate to advance the science of AI in radiology. Keywords: Adults and Pediatrics, Segmentation, Feature Detection, Quantification, Diagnosis/Classification, Prognosis/Classification © RSNA, 2021.
Affiliation(s)
- Marius George Linguraru, Lena Maier-Hein, Ronald M Summers, Charles E Kahn
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC (M.G.L.); Department of Computer Assisted Medical Interventions, German Cancer Research Centre, Heidelberg, Germany (L.M.H.); Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); and Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104 (C.E.K.)
35
Abstract
Computer-aided-diagnosis and stratification of COVID-19 based on chest X-ray suffers from weak bias assessment and limited quality-control. Undetected bias induced by inappropriate use of datasets and improper consideration of confounders prevents the translation of prediction models into clinical practice. By adopting established tools for model evaluation to the task of evaluating datasets, this study provides a systematic appraisal of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias. Only 9 out of more than a hundred identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably, most of the datasets utilised in 201 papers published in peer-reviewed journals are not among these 9 datasets, thus leading to models with high risk of bias. This raises concerns about the suitability of such models for clinical use. This systematic review highlights the limited description of datasets employed for modelling and aids researchers to select the most suitable datasets for their task.
Affiliation(s)
- Beatriz Garcia Santa Cruz
- Centre Hospitalier de Luxembourg, 4, Rue Ernest Barble, Luxembourg L-1210, Luxembourg; Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg.
- Matías Nicolás Bossa
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg; Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, Brussels B-1050, Belgium
- Jan Sölter
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
- Andreas Dominik Husch
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
36
Garcia Santa Cruz B, Bossa MN, Sölter J, Husch AD. Public Covid-19 X-ray datasets and their impact on model bias - A systematic review of a significant problem. Med Image Anal 2021; 74:102225. [PMID: 34597937 DOI: 10.1016/j.media.2021.102225]
37
Timmins KM, van der Schaaf IC, Bennink E, Ruigrok YM, An X, Baumgartner M, Bourdon P, De Feo R, Noto TD, Dubost F, Fava-Sanches A, Feng X, Giroud C, Group I, Hu M, Jaeger PF, Kaiponen J, Klimont M, Li Y, Li H, Lin Y, Loehr T, Ma J, Maier-Hein KH, Marie G, Menze B, Richiardi J, Rjiba S, Shah D, Shit S, Tohka J, Urruty T, Walińska U, Yang X, Yang Y, Yin Y, Velthuis BK, Kuijf HJ. Comparing methods of detecting and segmenting unruptured intracranial aneurysms on TOF-MRAS: The ADAM challenge. Neuroimage 2021; 238:118216. [PMID: 34052465 DOI: 10.1016/j.neuroimage.2021.118216]
Abstract
Accurate detection and quantification of unruptured intracranial aneurysms (UIAs) is important for rupture risk assessment and to allow an informed treatment decision to be made. Currently, 2D manual measures used to assess UIAs on Time-of-Flight magnetic resonance angiographies (TOF-MRAs) lack 3D information and there is substantial inter-observer variability for both aneurysm detection and assessment of aneurysm size and growth. 3D measures could be helpful to improve aneurysm detection and quantification but are time-consuming and would therefore benefit from a reliable automatic UIA detection and segmentation method. The Aneurysm Detection and segMentation (ADAM) challenge was organised in which methods for automatic UIA detection and segmentation were developed and submitted to be evaluated on a diverse clinical TOF-MRA dataset. A training set (113 cases with a total of 129 UIAs) was released, each case including a TOF-MRA, a structural MR image (T1, T2 or FLAIR), annotation of any present UIA(s) and the centre voxel of the UIA(s). A test set of 141 cases (with 153 UIAs) was used for evaluation. Two tasks were proposed: (1) detection and (2) segmentation of UIAs on TOF-MRAs. Teams developed and submitted containerised methods to be evaluated on the test set. Task 1 was evaluated using metrics of sensitivity and false positive count. Task 2 was evaluated using dice similarity coefficient, modified hausdorff distance (95th percentile) and volumetric similarity. For each task, a ranking was made based on the average of the metrics. In total, eleven teams participated in task 1 and nine of those teams participated in task 2. Task 1 was won by a method specifically designed for the detection task (i.e. not participating in task 2). Based on segmentation metrics, the top two methods for task 2 performed statistically significantly better than all other methods. 
The detection performance of the top-ranking methods was comparable to visual inspection for larger aneurysms. Segmentation performance of the top-ranking method, after selection of true UIAs, was similar to inter-observer performance. The ADAM challenge remains open for new and improved submissions, with a live leaderboard to provide benchmarking for method development at https://adam.isi.uu.nl/.
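As an illustration of the Task 2 overlap and volume metrics, here is a minimal Python sketch of the Dice similarity coefficient and volumetric similarity on flattened binary masks; the function names and toy masks are our own, not part of the challenge's evaluation code:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (flattened 0/1 sequences)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / total if total else 1.0


def volumetric_similarity(pred, truth):
    """1 - |Vp - Vt| / (Vp + Vt), where V is the foreground voxel count."""
    vp, vt = sum(pred), sum(truth)
    return 1.0 - abs(vp - vt) / (vp + vt) if (vp + vt) else 1.0
```

For `pred = [1, 1, 0, 0]` against `truth = [1, 0, 1, 0]`, one overlapping voxel out of two foreground voxels per mask gives a Dice of 0.5, while the equal foreground volumes give a volumetric similarity of 1.0, which illustrates why the ranking averaged several complementary metrics rather than relying on one.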
38
Leuschner J, Schmidt M, Baguer DO, Maass P. LoDoPaB-CT, a benchmark dataset for low-dose computed tomography reconstruction. Sci Data 2021; 8:109. [PMID: 33863917] [PMCID: PMC8052416] [DOI: 10.1038/s41597-021-00893-z]
Abstract
Deep learning approaches for tomographic image reconstruction have become very effective and have been demonstrated to be competitive in the field. Comparing these approaches is a challenging task as they rely to a great extent on the data and setup used for training. With the Low-Dose Parallel Beam (LoDoPaB)-CT dataset, we provide a comprehensive, open-access database of computed tomography images and simulated low photon count measurements. It is suitable for training and comparing deep learning methods as well as classical reconstruction approaches. The dataset contains over 40000 scan slices from around 800 patients selected from the LIDC/IDRI database. The data selection and simulation setup are described in detail, and the generating script is publicly accessible. In addition, we provide a Python library for simplified access to the dataset and an online reconstruction challenge. Furthermore, the dataset can also be used for transfer learning as well as sparse and limited-angle reconstruction scenarios.
Measurement(s): Low Dose Computed Tomography of the Chest • feature extraction objective
Technology Type(s): digital curation • image processing technique
Sample Characteristic - Organism: Homo sapiens
Machine-accessible metadata file describing the reported data: 10.6084/m9.figshare.13526360
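Low photon count measurements of the kind described above are commonly simulated by applying the Beer-Lambert law to clean line integrals and sampling Poisson counting noise; the sketch below shows that generic recipe, with the `n0` photon count and function name being illustrative assumptions rather than the dataset's actual simulation parameters:

```python
import numpy as np


def simulate_low_dose_sinogram(sinogram, n0=4096, rng=None):
    """Simulate noisy low-dose line integrals from a clean sinogram.

    sinogram: array of line integrals (attenuation path sums).
    n0: mean photon count per detector ray without attenuation (illustrative value).
    """
    rng = np.random.default_rng(rng)
    # Beer-Lambert law: expected photon count decays exponentially with attenuation.
    expected = n0 * np.exp(-np.asarray(sinogram, dtype=float))
    counts = rng.poisson(expected)
    # Clip zero counts so the log transform below is defined.
    counts = np.maximum(counts, 1)
    # Transform the noisy counts back to (now noisy) line integrals.
    return -np.log(counts / n0)
```

With a large `n0` the relative Poisson noise shrinks like 1/sqrt(n0), so lowering the dose (smaller `n0`) directly increases the noise level the reconstruction method must cope with.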
Affiliation(s)
- Johannes Leuschner, Maximilian Schmidt, Daniel Otero Baguer, Peter Maass: Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359, Bremen, Germany
39
Maier-Hein L, Wagner M, Ross T, Reinke A, Bodenstedt S, Full PM, Hempe H, Mindroc-Filimon D, Scholz P, Tran TN, Bruno P, Kisilenko A, Müller B, Davitashvili T, Capek M, Tizabi MD, Eisenmann M, Adler TJ, Gröhl J, Schellenberg M, Seidlitz S, Lai TYE, Pekdemir B, Roethlingshoefer V, Both F, Bittel S, Mengler M, Mündermann L, Apitz M, Kopp-Schneider A, Speidel S, Nickel F, Probst P, Kenngott HG, Müller-Stich BP. Heidelberg colorectal data set for surgical data science in the sensor operating room. Sci Data 2021; 8:101. [PMID: 33846356] [PMCID: PMC8042116] [DOI: 10.1038/s41597-021-00882-2]
Abstract
Image-based tracking of medical instruments is an integral part of surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the proposed methods still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set - the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on method robustness and generalization capabilities. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all video frames as well as information on instrument presence and corresponding instance-wise segmentation masks for surgical instruments (if any) in more than 10,000 individual frames. The data has successfully been used to organize international competitions within the Endoscopic Vision Challenges 2017 and 2019.
Affiliation(s)
- Lena Maier-Hein: Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Martin Wagner: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120, Heidelberg, Germany
- Tobias Ross: CAMI, DKFZ, Heidelberg; Heidelberg University, Seminarstraße 2, 69117, Heidelberg, Germany
- Annika Reinke: CAMI, DKFZ, Heidelberg; Heidelberg University, Heidelberg
- Sebastian Bodenstedt: Division of Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Peter M Full: Heidelberg University, Heidelberg; Division of Medical Image Computing (MIC), Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Hellena Hempe: CAMI, DKFZ, Heidelberg
- Diana Mindroc-Filimon: CAMI, DKFZ, Heidelberg
- Patrick Scholz: CAMI, DKFZ, Heidelberg; HIDSS4Health - Helmholtz Information and Data Science School for Health, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Thuy Nuong Tran: CAMI, DKFZ, Heidelberg
- Pierangela Bruno: CAMI, DKFZ, Heidelberg; Department of Mathematics and Computer Science, University of Calabria, Via Pietro Bucci, 87036 Arcavacata, Rende, CS, Italy
- Anna Kisilenko: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
- Benjamin Müller: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
- Tornike Davitashvili: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
- Manuela Capek: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
- Minu D Tizabi: CAMI, DKFZ, Heidelberg
- Matthias Eisenmann: CAMI, DKFZ, Heidelberg
- Tim J Adler: CAMI, DKFZ, Heidelberg
- Janek Gröhl: CAMI, DKFZ, Heidelberg
- Melanie Schellenberg: CAMI, DKFZ, Heidelberg
- Silvia Seidlitz: CAMI, DKFZ, Heidelberg; HIDSS4Health, Heidelberg
- T Y Emmy Lai: Division of Medical Image Computing (MIC), Heidelberg
- Bünyamin Pekdemir: CAMI, DKFZ, Heidelberg
- Fabian Both: understandAI GmbH, An der RaumFabrik 34, 76227, Karlsruhe, Germany; International Max Planck Research School for Intelligent Systems Tuebingen, University of Tuebingen, Geschwister-Scholl-Platz, 72074, Tübingen, Germany
- Sebastian Bittel: understandAI GmbH, Karlsruhe; BMW Group, Heidemannstraße 164, 80939, Munich, Germany
- Marc Mengler: understandAI GmbH, Karlsruhe
- Lars Mündermann: Corporate Research & Technology, Data-Assisted Solutions, KARL STORZ SE & Co. KG, Dr.-Karl-Storz-Straße 34, 78532, Tuttlingen, Germany
- Martin Apitz: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
- Annette Kopp-Schneider: Division of Biostatistics, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 581, 69120, Heidelberg, Germany
- Stefanie Speidel: Division of Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden; Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, 01307, Dresden, Germany
- Felix Nickel: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
- Pascal Probst: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
- Hannes G Kenngott: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
- Beat P Müller-Stich: Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg
40
Reinke A. Bring the model to the data: The Deep Learning Epilepsy Detection Challenge. EBioMedicine 2021; 66:103323. [PMID: 33857902] [PMCID: PMC8050851] [DOI: 10.1016/j.ebiom.2021.103323]
Affiliation(s)
- Annika Reinke: German Cancer Research Center DKFZ, Division of Computer Assisted Medical Interventions, Heidelberg, Germany

41
Grammatikopoulou M, Flouty E, Kadkhodamohammadi A, Quellec G, Chow A, Nehme J, Luengo I, Stoyanov D. CaDIS: Cataract dataset for surgical RGB-image segmentation. Med Image Anal 2021; 71:102053. [PMID: 33864969] [DOI: 10.1016/j.media.2021.102053]
Abstract
Video feedback provides a wealth of information about surgical procedures and is the main sensory cue for surgeons. Scene understanding is crucial to computer assisted interventions (CAI) and to post-operative analysis of the surgical procedure. A fundamental building block of such capabilities is the identification and localization of surgical instruments and anatomical structures through semantic segmentation. Deep learning has advanced semantic segmentation techniques in recent years but is inherently reliant on the availability of labelled datasets for model training. This paper introduces a dataset for semantic segmentation of cataract surgery videos complementing the publicly available CATARACTS challenge dataset. In addition, we benchmark the performance of several state-of-the-art deep learning models for semantic segmentation on the presented dataset. The dataset is publicly available at https://cataracts-semantic-segmentation2020.grand-challenge.org/.
Affiliation(s)
- Andre Chow: Digital Surgery LTD, 230 City Road, London, EC1V 2QY, UK
- Jean Nehme: Digital Surgery LTD, London
- Imanol Luengo: Digital Surgery LTD, London
- Danail Stoyanov: Digital Surgery LTD, London; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower Street, London, WC1E 6BT, UK

42
Vega C. From Hume to Wuhan: An Epistemological Journey on the Problem of Induction in COVID-19 Machine Learning Models and its Impact Upon Medical Research. IEEE Access 2021; 9:97243-97250. [PMID: 34812399] [PMCID: PMC8545192] [DOI: 10.1109/access.2021.3095222]
Abstract
Advances in computer science have transformed the way artificial intelligence is employed in academia, with Machine Learning (ML) methods easily available to researchers from diverse areas thanks to intuitive frameworks that yield extraordinary results. Notwithstanding, current trends in the mainstream ML community tend to emphasise wins over knowledge, putting the scientific method aside and focusing on maximising metrics of interest. Methodological flaws lead to poor justification of method choice, which in turn leads to disregarding the limitations of the methods employed, ultimately putting at risk the translation of solutions into real-world clinical settings. This work exemplifies the impact of the problem of induction in medical research by studying the methodological issues of recent solutions for computer-aided diagnosis of COVID-19 from chest X-ray images.
Affiliation(s)
- Carlos Vega: Luxembourg Centre for Systems Biomedicine, Bioinformatics Core Group, Université du Luxembourg, 4365 Esch-sur-Alzette, Luxembourg

43
Espinoza JL, Dong LT. Artificial Intelligence Tools for Refining Lung Cancer Screening. J Clin Med 2020; 9:E3860. [PMID: 33261057] [DOI: 10.3390/jcm9123860]
Abstract
Nearly one-quarter of all cancer deaths worldwide are due to lung cancer, making this disease the leading cause of cancer death among both men and women. The most important determinant of survival in lung cancer is the disease stage at diagnosis, so developing an effective screening method for early diagnosis has been a long-term goal in lung cancer care. In the last decade, and based on the results of large clinical trials, lung cancer screening programs using low-dose computed tomography (LDCT) in high-risk individuals have been implemented in some clinical settings; however, this method has various limitations, especially a high false-positive rate, which eventually results in a number of unnecessary diagnostic and therapeutic interventions among the screened subjects. By using complex algorithms and software, artificial intelligence (AI) is capable of emulating human cognition in the analysis, interpretation, and comprehension of complicated data, and it is currently being applied successfully in various healthcare settings. Taking advantage of the ability of AI to quantify information from images, and its superior capability in recognizing complex patterns in images compared to humans, AI has the potential to aid clinicians in the interpretation of LDCT images obtained in the setting of lung cancer screening. In the last decade, several AI models aimed at improving lung cancer detection have been reported. Some algorithms performed equally well as, or even outperformed, experienced radiologists in distinguishing benign from malignant lung nodules, and some of those models improved diagnostic accuracy and decreased the false-positive rate. Here, we discuss recent publications in which AI algorithms are utilized to assess chest computed tomography (CT) scans obtained in the setting of lung cancer screening.