1
Gut D, Trombini M, Kucybała I, Krupa K, Rozynek M, Dellepiane S, Tabor Z, Wojciechowski W. Use of superpixels for improvement of inter-rater and intra-rater reliability during annotation of medical images. Med Image Anal 2024; 94:103141. [PMID: 38489896 DOI: 10.1016/j.media.2024.103141]
Abstract
In automatic medical image segmentation based on statistical learning, rater variability in the ground truth segmentations of training datasets is a widely recognized issue. The reference information is provided by experts, but bias due to their knowledge may affect the quality of the ground truth data, hindering the creation of robust and reliable datasets for segmentation, classification or detection tasks. In this setting, the preparation of training data would benefit significantly from some form of presegmentation, which could lower the impact of expert knowledge and reduce time-consuming labeling efforts. The present manuscript proposes a superpixels-driven procedure for annotating medical images. Three superpixeling methods, each with two different numbers of superpixels, were evaluated on three medical segmentation tasks and compared with manual annotations. In the superpixels-based annotation procedure, medical experts interactively select superpixels of interest and apply manual corrections where necessary; the accuracy of the annotations, the time needed to prepare them, and the number of manual corrections are then assessed. The study shows that the proposed procedure reduces inter- and intra-rater variability, leading to more reliable annotation datasets which, in turn, may benefit the development of more robust classification or segmentation models. In addition, the proposed approach reduces the time needed to prepare the annotations.
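As an illustration of the selection step described above, a binary annotation mask is simply the union of the pixels of the chosen superpixels. The sketch below assumes a precomputed superpixel label map (in practice produced by an algorithm such as SLIC); the function name is ours:

```python
import numpy as np

def mask_from_superpixels(labels: np.ndarray, selected: set) -> np.ndarray:
    """Binary annotation mask: True wherever a pixel belongs to a selected superpixel."""
    return np.isin(labels, list(selected))

# Toy 4x4 label map containing four superpixels (IDs 0-3).
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
mask = mask_from_superpixels(labels, {1, 3})  # expert selects the right-hand superpixels
```

Manual corrections would then flip individual pixels of `mask`; the number of such corrections is one of the effort measures the study reports.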
Affiliation(s)
- Daniel Gut
- Department of Biocybernetics and Biomedical Engineering, AGH University of Krakow, al. Mickiewicza 30, 30-059 Krakow, Poland.
- Marco Trombini
- Department of Electric, Electronic, and Telecommunication Engineering and Naval Architecture - DITEN, Università degli Studi di Genova, Via all'Opera Pia 11, 16145 Genoa, Italy
- Iwona Kucybała
- Department of Radiology, Jagiellonian University Medical College, ul. Kopernika 19, 31-501 Krakow, Poland
- Kamil Krupa
- Department of Radiology, Jagiellonian University Medical College, ul. Kopernika 19, 31-501 Krakow, Poland
- Miłosz Rozynek
- Department of Radiology, Jagiellonian University Medical College, ul. Kopernika 19, 31-501 Krakow, Poland
- Silvana Dellepiane
- Department of Electric, Electronic, and Telecommunication Engineering and Naval Architecture - DITEN, Università degli Studi di Genova, Via all'Opera Pia 11, 16145 Genoa, Italy
- Zbisław Tabor
- Department of Biocybernetics and Biomedical Engineering, AGH University of Krakow, al. Mickiewicza 30, 30-059 Krakow, Poland
- Wadim Wojciechowski
- Department of Radiology, Jagiellonian University Medical College, ul. Kopernika 19, 31-501 Krakow, Poland
2
Ou Z, Bai J, Chen Z, Lu Y, Wang H, Long S, Chen G. RTSeg-net: A lightweight network for real-time segmentation of fetal head and pubic symphysis from intrapartum ultrasound images. Comput Biol Med 2024; 175:108501. [PMID: 38703545 DOI: 10.1016/j.compbiomed.2024.108501]
Abstract
The segmentation of the fetal head (FH) and pubic symphysis (PS) from intrapartum ultrasound images plays a pivotal role in monitoring labor progression and informing crucial clinical decisions. Achieving real-time segmentation with high accuracy on systems with limited hardware capabilities presents significant challenges. To address these challenges, we propose the real-time segmentation network (RTSeg-Net), a lightweight deep learning model that incorporates distribution shifting convolutional blocks, tokenized multilayer perceptron blocks, and efficient feature fusion blocks. Designed for computational efficiency, RTSeg-Net minimizes resource demand while significantly enhancing segmentation performance. Our evaluation on two distinct intrapartum ultrasound image datasets reveals that RTSeg-Net achieves segmentation accuracy on par with more complex state-of-the-art networks while using merely 1.86 M parameters (just 6% of the parameter count of those networks) and operating seven times faster, reaching 31.13 frames per second on a Jetson Nano, a device with limited computing capacity. These results underscore RTSeg-Net's potential to provide accurate, real-time segmentation on low-power devices, broadening the scope of its application across various stages of labor. By facilitating real-time, accurate ultrasound image analysis on portable, low-cost devices, RTSeg-Net promises to make sophisticated intrapartum monitoring accessible to a wider range of healthcare settings.
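Frames-per-second figures like the one quoted above are typically obtained by timing repeated inference calls after a warm-up phase; a minimal, model-agnostic sketch (the helper name and run counts are our own choices):

```python
import time

def measure_fps(infer, n_warmup=5, n_runs=50):
    """Average frames per second of `infer` over n_runs timed calls, after warm-up."""
    for _ in range(n_warmup):   # warm-up excludes one-time setup costs (caching, JIT, etc.)
        infer()
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    return n_runs / (time.perf_counter() - start)

# Stand-in for a real model's forward pass on a single frame.
fps = measure_fps(lambda: sum(i * i for i in range(10_000)))
```

On real hardware, `infer` would wrap the network's forward pass on one preprocessed ultrasound frame, and any device-side synchronization would need to happen inside the timed call.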
Affiliation(s)
- Zhanhong Ou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China; Auckland Bioengineering Institute, University of Auckland, Auckland, 1010, New Zealand
- Zhide Chen
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Huijin Wang
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Shun Long
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Gaowen Chen
- Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
3
Luecken MD, Gigante S, Burkhardt DB, Cannoodt R, Strobl DC, Markov NS, Zappia L, Palla G, Lewis W, Dimitrov D, Vinyard ME, Magruder DS, Andersson A, Dann E, Qin Q, Otto DJ, Klein M, Botvinnik OB, Deconinck L, Waldrant K, Bloom JM, Pisco AO, Saez-Rodriguez J, Wulsin D, Pinello L, Saeys Y, Theis FJ, Krishnaswamy S. Defining and benchmarking open problems in single-cell analysis. Res Sq 2024:rs.3.rs-4181617. [PMID: 38645152 PMCID: PMC11030530 DOI: 10.21203/rs.3.rs-4181617/v1]
Abstract
With the growing number of single-cell analysis tools, benchmarks are increasingly important to guide analysis and method development. However, a lack of standardisation and extensibility in current benchmarks limits their usability, longevity, and relevance to the community. We present Open Problems, a living, extensible, community-guided benchmarking platform including 10 current single-cell tasks that we envision will raise standards for the selection, evaluation, and development of methods in single-cell analysis.
Affiliation(s)
- Malte D Luecken
- Institute of Computational Biology, Helmholtz Munich, Neuherberg, Germany
- Institute of Lung Health & Immunity, Helmholtz Munich; Member of the German Center for Lung Research (DZL), Munich, Germany
- Robrecht Cannoodt
- Data Intuitive, Lebbeke, Belgium
- Data Mining and Modelling for Biomedicine group, VIB Center for Inflammation Research, Ghent, Belgium
- Department of Applied Mathematics, Computer Science, and Statistics, Ghent University, Ghent, Belgium
- Daniel C Strobl
- Institute of Computational Biology, Helmholtz Munich, Neuherberg, Germany
- Institute of Clinical Chemistry and Pathobiochemistry, School of Medicine, Technical University of Munich, Munich, Germany
- TUM School of Life Sciences Weihenstephan, Technical University of Munich, Germany
- Nikolay S Markov
- Division of Pulmonary and Critical Care Medicine, Feinberg School of Medicine, Northwestern University
- Luke Zappia
- Institute of Computational Biology, Helmholtz Munich, Neuherberg, Germany
- Department of Mathematics, School of Computing, Information and Technology, Technical University of Munich, Munich, Germany
- Giovanni Palla
- Institute of Computational Biology, Helmholtz Munich, Neuherberg, Germany
- TUM School of Life Sciences Weihenstephan, Technical University of Munich, Germany
- Wesley Lewis
- Interdepartmental Program in Computational Biology and Bioinformatics, Yale University, New Haven, CT 06511, USA
- Daniel Dimitrov
- Heidelberg University, Faculty of Medicine, and Heidelberg University Hospital, Institute for Computational Biomedicine, Heidelberg, Germany
- Michael E Vinyard
- Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Molecular Pathology Unit, Center for Cancer Research, Massachusetts General Hospital, Boston, MA, USA
- D S Magruder
- Department of Computer Science, Yale University, New Haven CT, USA
- Alma Andersson
- Genentech Inc
- Royal Institute of Technology (KTH), Gene Technology
- Science for Life Laboratory (SciLifeLab)
- Emma Dann
- Wellcome Sanger Institute, Wellcome Genome Campus, Cambridge, UK
- Qian Qin
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Dominik J Otto
- Basic Sciences Division, Fred Hutchinson Cancer Center, Seattle WA
- Computational Biology Program, Public Health Sciences Division, Seattle WA
- Translational Data Science IRC, Fred Hutchinson Cancer Center, Seattle WA
- Olga Borisovna Botvinnik
- Data Sciences Platform, Chan Zuckerberg Biohub, 499 Illinois St, San Francisco, CA 94158
- Bridge Bio Pharma, 3160 Porter Drive, Suite 250, Palo Alto, CA, 94304
- Louise Deconinck
- Data Mining and Modelling for Biomedicine group, VIB Center for Inflammation Research, Ghent, Belgium
- Department of Applied Mathematics, Computer Science, and Statistics, Ghent University, Ghent, Belgium
- Angela Oliveira Pisco
- Data Sciences Platform, Chan Zuckerberg Biohub, 499 Illinois St, San Francisco, CA 94158
- Insitro, South San Francisco
- Julio Saez-Rodriguez
- Heidelberg University, Faculty of Medicine, and Heidelberg University Hospital, Institute for Computational Biomedicine, Heidelberg, Germany
- Luca Pinello
- Molecular Pathology Unit, Center for Cancer Research, Massachusetts General Hospital, Boston, MA, USA
- Yvan Saeys
- Data Mining and Modelling for Biomedicine group, VIB Center for Inflammation Research, Ghent, Belgium
- Department of Applied Mathematics, Computer Science, and Statistics, Ghent University, Ghent, Belgium
- VIB Center for AI & Computational Biology (VIB.AI), Gent, Belgium
- Fabian J Theis
- Institute of Computational Biology, Helmholtz Munich, Neuherberg, Germany
- Department of Mathematics, School of Computing, Information and Technology, Technical University of Munich, Munich, Germany
- Cellular Genetics Programme, Wellcome Sanger Institute, Hinxton, UK (associated faculty)
- Smita Krishnaswamy
- Interdepartmental Program in Computational Biology and Bioinformatics, Yale University, New Haven, CT 06511, USA
- Department of Computer Science, Yale University, New Haven CT, USA
- Department of Genetics, Yale University, New Haven CT, USA
4
Hashemi Gheinani A, Kim J, You S, Adam RM. Bioinformatics in urology - molecular characterization of pathophysiology and response to treatment. Nat Rev Urol 2024; 21:214-242. [PMID: 37604982 DOI: 10.1038/s41585-023-00805-3]
Abstract
The application of bioinformatics has revolutionized the practice of medicine in the past 20 years. From early studies that uncovered subtypes of cancer to broad efforts spearheaded by the Cancer Genome Atlas initiative, the use of bioinformatics strategies to analyse high-dimensional data has provided unprecedented insights into the molecular basis of disease. In addition to the identification of disease subtypes - which enables risk stratification - informatics analysis has facilitated the identification of novel risk factors and drivers of disease, biomarkers of progression and treatment response, as well as possibilities for drug repurposing or repositioning; moreover, bioinformatics has guided research towards precision and personalized medicine. Implementation of specific computational approaches such as artificial intelligence, machine learning and molecular subtyping has yet to become widespread in urology clinical practice for reasons of cost, disruption of clinical workflow and need for prospective validation of informatics approaches in independent patient cohorts. Solving these challenges might accelerate routine integration of bioinformatics into clinical settings.
Affiliation(s)
- Ali Hashemi Gheinani
- Department of Urology, Boston Children's Hospital, Boston, MA, USA
- Department of Surgery, Harvard Medical School, Boston, MA, USA
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Department of Urology, Inselspital, Bern, Switzerland
- Department for BioMedical Research, University of Bern, Bern, Switzerland
- Jina Kim
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Samuel Oschin Comprehensive Cancer Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Sungyong You
- Department of Urology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Samuel Oschin Comprehensive Cancer Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Rosalyn M Adam
- Department of Urology, Boston Children's Hospital, Boston, MA, USA
- Department of Surgery, Harvard Medical School, Boston, MA, USA
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
5
Ma J, Xie R, Ayyadhury S, Ge C, Gupta A, Gupta R, Gu S, Zhang Y, Lee G, Kim J, Lou W, Li H, Upschulte E, Dickscheid T, de Almeida JG, Wang Y, Han L, Yang X, Labagnara M, Gligorovski V, Scheder M, Rahi SJ, Kempster C, Pollitt A, Espinosa L, Mignot T, Middeke JM, Eckardt JN, Li W, Li Z, Cai X, Bai B, Greenwald NF, Van Valen D, Weisbart E, Cimini BA, Cheung T, Brück O, Bader GD, Wang B. The multimodality cell segmentation challenge: toward universal solutions. Nat Methods 2024. [PMID: 38532015 DOI: 10.1038/s41592-024-02233-6]
Abstract
Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual intervention to specify hyperparameters in different experimental settings. Here, we present a multimodality cell segmentation benchmark comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep learning algorithm that not only outperforms existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustment. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
Affiliation(s)
- Jun Ma
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Ronald Xie
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
- Shamini Ayyadhury
- Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Cheng Ge
- School of Medicine and Pharmacy, Ocean University of China, Qingdao, China
- Anubha Gupta
- Department of Electronics and Communications Engineering, Indraprastha Institute of Information Technology Delhi (IIITD), New Delhi, India
- Ritu Gupta
- Laboratory Oncology Unit, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India
- Song Gu
- Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Nanjing, China
- Yao Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Gihun Lee
- Graduate School of AI, KAIST, Seoul, South Korea
- Joonkee Kim
- Graduate School of AI, KAIST, Seoul, South Korea
- Wei Lou
- Shenzhen Research Institute of Big Data, Shenzhen, China
- Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- Haofeng Li
- Shenzhen Research Institute of Big Data, Shenzhen, China
- Eric Upschulte
- Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
- Timo Dickscheid
- Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
- Faculty of Mathematics and Natural Sciences - Institute of Computer Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- José Guilherme de Almeida
- European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK
- Champalimaud Foundation - Centre for the Unknown, Lisbon, Portugal
- Yixin Wang
- Department of Bioengineering, Stanford University, Palo Alto, CA, USA
- Lin Han
- Tandon School of Engineering, New York University, New York, NY, USA
- Xin Yang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Marco Labagnara
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Vojislav Gligorovski
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Maxime Scheder
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Sahand Jamal Rahi
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Carly Kempster
- School of Biological Sciences, University of Reading, Reading, UK
- Alice Pollitt
- School of Biological Sciences, University of Reading, Reading, UK
- Leon Espinosa
- Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France
- Tâm Mignot
- Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France
- Jan Moritz Middeke
- Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Jan-Niklas Eckardt
- Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Wangkai Li
- Department of Automation, University of Science and Technology of China, Hefei, China
- Zhaoyang Li
- Institute of Advanced Technology, University of Science and Technology of China, Hefei, China
- Xiaochen Cai
- Department of Computer Science and Technology, Nanjing University, Nanjing, China
- Bizhe Bai
- School of EECS, The University of Queensland, Brisbane, Queensland, Australia
- David Van Valen
- Division of Computing and Mathematical Science, Caltech, Pasadena, CA, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA
- Erin Weisbart
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Trevor Cheung
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada
- Oscar Brück
- Hematoscope Laboratory, Comprehensive Cancer Center & Center of Diagnostics, Helsinki University Hospital, Helsinki, Finland
- Department of Oncology, University of Helsinki, Helsinki, Finland
- Gary D Bader
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
- Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, Ontario, Canada
- CIFAR Multiscale Human Program, CIFAR, Toronto, Ontario, Canada
- Bo Wang
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- UHN AI Hub, University Health Network, Toronto, Ontario, Canada
6
Siami M, Barszcz T, Wodecki J, Zimroz R. Semantic segmentation of thermal defects in belt conveyor idlers using thermal image augmentation and U-Net-based convolutional neural networks. Sci Rep 2024; 14:5748. [PMID: 38459162 PMCID: PMC10923815 DOI: 10.1038/s41598-024-55864-2]
Abstract
The belt conveyor (BC) is the main means of horizontal transportation of bulk materials at mining sites. A sudden fault in BC modules may cause unexpected stops in production lines. With the increasing use of inspection mobile robots for condition monitoring (CM) of industrial infrastructure in hazardous environments, this article introduces an image processing pipeline for automatic segmentation of thermal defects in thermal images of BC idlers captured by a mobile robot. The study is motivated by the fact that monitoring idler temperature is an important task for preventing sudden breakdowns in BC system networks. We compared the performance of three different U-Net-based convolutional neural network architectures for identifying thermal anomalies using a small number of hand-labeled thermal images. Experiments on the test data set showed that the attention residual U-Net with binary cross-entropy as the loss function handled the semantic segmentation problem better than the approaches from our previous research and the other U-Net variations studied.
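The binary cross-entropy loss named above penalizes the predicted per-pixel defect probability against the 0/1 ground-truth mask. A minimal NumPy version for illustration (deep learning frameworks provide equivalent built-ins):

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean pixel-wise binary cross-entropy between probabilities and a 0/1 mask."""
    pred = np.clip(pred, eps, 1.0 - eps)  # guard against log(0)
    return float(-np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred)))

# A confidently correct prediction incurs a much smaller loss than a wrong one.
confident = binary_cross_entropy(np.array([0.99, 0.01]), np.array([1.0, 0.0]))
wrong = binary_cross_entropy(np.array([0.10, 0.90]), np.array([1.0, 0.0]))
```

In training, `pred` would be the sigmoid output of the U-Net for every pixel of a thermal image and `target` the hand-labeled defect mask.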
Affiliation(s)
- Mohammad Siami
- AMC Vibro Sp. z o.o., Pilotow 2e, 31-462, Kraków, Poland.
- Tomasz Barszcz
- Faculty of Mechanical Engineering and Robotics, AGH University, Al. Mickiewicza 30, 30-059, Kraków, Poland
- Jacek Wodecki
- Faculty of Geoengineering, Mining and Geology, Wroclaw University of Science and Technology, Na Grobli 15, 50-421, Wroclaw, Poland
- Radoslaw Zimroz
- Faculty of Geoengineering, Mining and Geology, Wroclaw University of Science and Technology, Na Grobli 15, 50-421, Wroclaw, Poland
7
8
Reinke A, Tizabi MD, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Kavur AE, Rädsch T, Sudre CH, Acion L, Antonelli M, Arbel T, Bakas S, Benis A, Buettner F, Cardoso MJ, Cheplygina V, Chen J, Christodoulou E, Cimini BA, Farahani K, Ferrer L, Galdran A, van Ginneken B, Glocker B, Godau P, Hashimoto DA, Hoffman MM, Huisman M, Isensee F, Jannin P, Kahn CE, Kainmueller D, Kainz B, Karargyris A, Kleesiek J, Kofler F, Kooi T, Kopp-Schneider A, Kozubek M, Kreshuk A, Kurc T, Landman BA, Litjens G, Madani A, Maier-Hein K, Martel AL, Meijering E, Menze B, Moons KGM, Müller H, Nichyporuk B, Nickel F, Petersen J, Rafelski SM, Rajpoot N, Reyes M, Riegler MA, Rieke N, Saez-Rodriguez J, Sánchez CI, Shetty S, Summers RM, Taha AA, Tiulpin A, Tsaftaris SA, Van Calster B, Varoquaux G, Yaniv ZR, Jäger PF, Maier-Hein L. Understanding metric-related pitfalls in image analysis validation. Nat Methods 2024; 21:182-194. [PMID: 38347140 DOI: 10.1038/s41592-023-02150-0]
Abstract
Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
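One classic pitfall catalogued in this line of work concerns edge cases of overlap metrics such as the Dice similarity coefficient, whose value is undefined (0/0) when both masks are empty and whose handling differs between tools. A minimal sketch with one explicit convention (our choice, not prescribed by the paper):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient for binary segmentation masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: 0/0 case; other tools return 0 or NaN here
    return 2.0 * np.logical_and(pred, target).sum() / denom

full = dice([1, 1, 0, 0], [1, 1, 0, 0])     # identical masks
partial = dice([1, 1, 0, 0], [1, 0, 0, 0])  # one pixel of overlap
```

Averaging such per-image scores over cases that include empty references makes the chosen convention part of the reported result, which is exactly the kind of metric-related pitfall the paper seeks to make explicit.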
Affiliation(s)
- Annika Reinke
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
| | - Minu D Tizabi
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany.
| | - Michael Baumgartner
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
| | - Matthias Eisenmann
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
| | - Doreen Heckmann-Nötzel
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
| | - A Emre Kavur
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
| | - Tim Rädsch
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
| | - Carole H Sudre
- MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
| | - Laura Acion
- Instituto de Cálculo, CONICET - Universidad de Buenos Aires, Buenos Aires, Argentina
| | - Michela Antonelli
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Tal Arbel
  - Centre for Intelligent Machines and MILA (Quebec Artificial Intelligence Institute), McGill University, Montréal, Quebec, Canada
- Spyridon Bakas
  - Division of Computational Pathology, Dept of Pathology & Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Arriel Benis
  - Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel
  - European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
- Florian Buettner
  - German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Frankfurt am Main, Germany
  - German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
  - Goethe University Frankfurt, Department of Medicine, Frankfurt am Main, Germany
  - Goethe University Frankfurt, Department of Informatics, Frankfurt am Main, Germany
  - Frankfurt Cancer Institute, Frankfurt am Main, Germany
- M Jorge Cardoso
  - School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Veronika Cheplygina
  - Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Jianxu Chen
  - Leibniz-Institut für Analytische Wissenschaften - ISAS - e.V., Dortmund, Germany
- Evangelia Christodoulou
  - German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Beth A Cimini
  - Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Keyvan Farahani
  - Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Luciana Ferrer
  - Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina
- Adrian Galdran
  - Universitat Pompeu Fabra, Barcelona, Spain
  - University of Adelaide, Adelaide, South Australia, Australia
- Bram van Ginneken
  - Fraunhofer MEVIS, Bremen, Germany
  - Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, the Netherlands
- Ben Glocker
  - Department of Computing, Imperial College London, South Kensington Campus, London, UK
- Patrick Godau
  - German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
  - Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Daniel A Hashimoto
  - Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA
  - General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Michael M Hoffman
  - Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
  - Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
  - Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Merel Huisman
  - Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Fabian Isensee
  - German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
  - German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Pierre Jannin
  - Laboratoire Traitement du Signal et de l'Image - UMR_S 1099, Université de Rennes 1, Rennes, France
  - INSERM, Paris, France
- Charles E Kahn
  - Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Dagmar Kainmueller
  - Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany
  - University of Potsdam, Digital Engineering Faculty, Potsdam, Germany
- Bernhard Kainz
  - Department of Computing, Faculty of Engineering, Imperial College London, London, UK
  - Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
- Jens Kleesiek
  - Translational Image-guided Oncology (TIO), Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Annette Kopp-Schneider
  - German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
- Michal Kozubek
  - Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Anna Kreshuk
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Tahsin Kurc
  - Department of Biomedical Informatics, Stony Brook University, Health Science Center, Stony Brook, NY, USA
- Geert Litjens
  - Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Amin Madani
  - Department of Surgery, University Health Network, Toronto, Ontario, Canada
- Klaus Maier-Hein
  - German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
  - Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Anne L Martel
  - Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
  - Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Erik Meijering
  - School of Computer Science and Engineering, University of New South Wales, UNSW Sydney, Kensington, New South Wales, Australia
- Bjoern Menze
  - Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Karel G M Moons
  - Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- Henning Müller
  - Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
  - Medical Faculty, University of Geneva, Geneva, Switzerland
- Brennan Nichyporuk
  - MILA (Quebec Artificial Intelligence Institute), Montréal, Quebec, Canada
- Felix Nickel
  - Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jens Petersen
  - German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Nasir Rajpoot
  - Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
- Mauricio Reyes
  - ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
  - Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Michael A Riegler
  - Simula Metropolitan Center for Digital Engineering, Oslo, Norway
  - UiT The Arctic University of Norway, Tromsø, Norway
- Julio Saez-Rodriguez
  - Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany
  - Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Clara I Sánchez
  - Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, the Netherlands
- Ronald M Summers
  - National Institutes of Health Clinical Center, Bethesda, MD, USA
- Abdel A Taha
  - Institute of Information Systems Engineering, TU Wien, Vienna, Austria
- Aleksei Tiulpin
  - Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
  - Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
- Ben Van Calster
  - Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium
  - Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, the Netherlands
- Gaël Varoquaux
  - Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
- Ziv R Yaniv
  - National Institute of Allergy and Infectious Diseases, Bethesda, MD, USA
- Paul F Jäger
  - German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
  - German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany
- Lena Maier-Hein
  - German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
  - German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
  - Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
  - Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
|
9
|
Dhaliwal A, Ma J, Zheng M, Lyu Q, Rajora MA, Ma S, Oliva L, Ku A, Valic M, Wang B, Zheng G. Deep learning for automatic organ and tumor segmentation in nanomedicine pharmacokinetics. Theranostics 2024; 14:973-987. [PMID: 38250039 PMCID: PMC10797295 DOI: 10.7150/thno.90246] [Received: 09/17/2023] [Accepted: 11/17/2023] [Indexed: 01/23/2024] Open Access
Abstract
Rationale: Multimodal imaging provides important pharmacokinetic and dosimetry information during nanomedicine development and optimization. However, accurate quantitation is time-consuming, resource intensive, and requires anatomical expertise. Methods: We present NanoMASK: a 3D U-Net adapted deep learning tool capable of rapid, automatic organ segmentation of multimodal imaging data that can output key clinical dosimetry metrics without manual intervention. This model was trained on 355 manually-contoured PET/CT data volumes of mice injected with a variety of nanomaterials and imaged over 48 hours. Results: NanoMASK produced 3-dimensional contours of the heart, lungs, liver, spleen, kidneys, and tumor with high volumetric accuracy (pan-organ average %DSC of 92.5). Pharmacokinetic metrics including %ID/cc, %ID, and SUVmax achieved correlation coefficients exceeding R = 0.987 and relative mean errors below 0.2%. NanoMASK was applied to novel datasets of lipid nanoparticles and antibody-drug conjugates with a minimal drop in accuracy, illustrating its generalizability to different classes of nanomedicines. Furthermore, 20 additional auto-segmentation models were developed using training data subsets based on image modality, experimental imaging timepoint, and tumor status. These were used to explore the fundamental biases and dependencies of auto-segmentation models built on a 3D U-Net architecture, revealing significant differential impacts on organ segmentation accuracy. Conclusions: NanoMASK is an easy-to-use, adaptable tool for improving accuracy and throughput in imaging-based pharmacokinetic studies of nanomedicine. It has been made publicly available to all readers for automatic segmentation and pharmacokinetic analysis across a diverse array of nanoparticles, expediting agent development.
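The volumetric accuracy reported above (pan-organ average %DSC of 92.5) refers to the Dice similarity coefficient between predicted and manually contoured organ masks. As an illustrative sketch only (this is not the NanoMASK code; the function and array names are our own), the metric reduces to a few lines of numpy on binary 3D volumes:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary segmentation masks.

    DSC = 2 |A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally treated as agreement
    return 2.0 * intersection / denom

# Toy 3D example: two partially overlapping 6x6x6 cubes in a 10x10x10 volume
a = np.zeros((10, 10, 10), dtype=bool)
b = np.zeros((10, 10, 10), dtype=bool)
a[2:8, 2:8, 2:8] = True
b[3:9, 3:9, 3:9] = True
print(round(dice_coefficient(a, b), 3))  # → 0.579 (overlap 5^3 voxels)
```

The reported "%DSC" is simply this value scaled by 100 and averaged over cases and organs.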
Affiliation(s)
- Alex Dhaliwal
  - Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
  - Department of Medical Biophysics, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Jun Ma
  - Department of Laboratory Medicine and Pathobiology, University of Toronto, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada
  - Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada
  - Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
- Mark Zheng
  - Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Qing Lyu
  - Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Maneesha A. Rajora
  - Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
  - Institute of Biomedical Engineering, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Shihao Ma
  - Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
  - Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
- Laura Oliva
  - Techna Institute, University Health Network, 190 Elizabeth Street, Toronto, M5G 2C4, Ontario, Canada
- Anthony Ku
  - Department of Radiology, Stanford University, 1201 Welch Road, Stanford, 94305-5484, California, United States of America
- Michael Valic
  - Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
  - Institute of Biomedical Engineering, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
- Bo Wang
  - Department of Laboratory Medicine and Pathobiology, University of Toronto, 1 King's College Circle, Toronto, M5S 1A8, Ontario, Canada
  - Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada
  - Department of Computer Science, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
  - Vector Institute for Artificial Intelligence, 661 University Avenue, Toronto, M4G 1M1, Ontario, Canada
- Gang Zheng
  - Princess Margaret Cancer Centre, University Health Network, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
  - Department of Medical Biophysics, University of Toronto, 101 College Street, Toronto, M5G 1L7, Ontario, Canada
  - Peter Munk Cardiac Centre, University Health Network, 190 Elizabeth St, Toronto, M5G 2C4, Ontario, Canada
|
10
|
Li W, Partridge SC, Newitt DC, Steingrimsson J, Marques HS, Bolan PJ, Hirano M, Bearce BA, Kalpathy-Cramer J, Boss MA, Teng X, Zhang J, Cai J, Kontos D, Cohen EA, Mankowski WC, Liu M, Ha R, Pellicer-Valero OJ, Maier-Hein K, Rabinovici-Cohen S, Tlusty T, Ozery-Flato M, Parekh VS, Jacobs MA, Yan R, Sung K, Kazerouni AS, DiCarlo JC, Yankeelov TE, Chenevert TL, Hylton NM. Breast Multiparametric MRI for Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer: The BMMR2 Challenge. Radiol Imaging Cancer 2024; 6:e230033. [PMID: 38180338 PMCID: PMC10825718 DOI: 10.1148/rycan.230033] [Received: 04/05/2023] [Revised: 09/13/2023] [Accepted: 11/02/2023] [Indexed: 01/06/2024]
Abstract
Purpose To describe the design, conduct, and results of the Breast Multiparametric MRI for prediction of neoadjuvant chemotherapy Response (BMMR2) challenge. Materials and Methods The BMMR2 computational challenge opened on May 28, 2021, and closed on December 21, 2021. The goal of the challenge was to identify image-based markers derived from multiparametric breast MRI, including diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) MRI, along with clinical data for predicting pathologic complete response (pCR) following neoadjuvant treatment. Data included 573 breast MRI studies from 191 women (mean age [±SD], 48.9 years ± 10.56) in the I-SPY 2/American College of Radiology Imaging Network (ACRIN) 6698 trial (ClinicalTrials.gov: NCT01042379). The challenge cohort was split into training (60%) and test (40%) sets, with teams blinded to test set pCR outcomes. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC) and compared with the benchmark established from the ACRIN 6698 primary analysis. Results Eight teams submitted final predictions. Entries from three teams had point estimators of AUC that were higher than the benchmark performance (AUC, 0.782 [95% CI: 0.670, 0.893], with AUCs of 0.803 [95% CI: 0.702, 0.904], 0.838 [95% CI: 0.748, 0.928], and 0.840 [95% CI: 0.748, 0.932]). A variety of approaches were used, ranging from extraction of individual features to deep learning and artificial intelligence methods, incorporating DCE and DWI alone or in combination. Conclusion The BMMR2 challenge identified several models with high predictive performance, which may further expand the value of multiparametric breast MRI as an early marker of treatment response. Clinical trial registration no. NCT01042379 Keywords: MRI, Breast, Tumor Response Supplemental material is available for this article. © RSNA, 2024.
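The challenge compares each entry's AUC point estimate and 95% CI against the ACRIN 6698 benchmark. As a minimal numpy-only sketch of that kind of evaluation (not the challenge's actual scoring code; function names are illustrative), the AUC can be computed from the Mann-Whitney statistic and its CI from a percentile bootstrap:

```python
import numpy as np

def auc_score(y_true, scores) -> float:
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case is scored above a random negative case (ties count half).
    Requires at least one positive and one negative label."""
    y_true = np.asarray(y_true, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true], scores[~y_true]
    # Pairwise comparison is O(n_pos * n_neg): fine for cohort-sized data.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].all() or not y_true[idx].any():
            continue  # a resample must contain both classes
        stats.append(auc_score(y_true[idx], scores[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

An entry would then be reported as, e.g., `auc_score(y, s)` with the interval from `bootstrap_ci(y, s)`, and judged against the benchmark AUC of 0.782.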
Affiliation(s)
- Wen Li
  - From the Department of Radiology & Biomedical Imaging, University of California San Francisco, San Francisco, Calif (W.L., D.C.N., N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.); Center for Statistical Sciences, Brown University, Providence, RI (J.S., H.S.M.); Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging, Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.); Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.); University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md (V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y., K.S.); Department of Bioengineering, Henry Samueli School of Engineering, University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, Tex (J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology, University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Savannah C. Partridge
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - David C. Newitt
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Jon Steingrimsson
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Helga S. Marques
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Patrick J. Bolan
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Michael Hirano
| | - Benjamin Aaron Bearce
| | - Jayashree Kalpathy-Cramer
| | - Michael A. Boss
| | - Xinzhi Teng
| | - Jiang Zhang
| | - Jing Cai
| | - Despina Kontos
| | - Eric A. Cohen
| | - Walter C. Mankowski
| | - Michael Liu
| | - Richard Ha
| | - Oscar J. Pellicer-Valero
| | - Klaus Maier-Hein
| | - Simona Rabinovici-Cohen
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Tal Tlusty
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Michal Ozery-Flato
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Vishwa S. Parekh
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Michael A. Jacobs
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Ran Yan
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Kyunghyun Sung
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Anum S. Kazerouni
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Julie C. DiCarlo
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Thomas E. Yankeelov
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Thomas L. Chenevert
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| | - Nola M. Hylton
- From the Department of Radiology & Biomedical Imaging,
University of California San Francisco, San Francisco, Calif (W.L., D.C.N.,
N.M.H.); Department of Radiology, University of Washington, Fred Hutchinson
Cancer Center, 1100 Fairview Ave N, Seattle, WA 98109 (S.C.P., M.H., A.S.K.);
Center for Statistical Sciences, Brown University, Providence, RI (J.S.,
H.S.M.); Center for Magnetic Resonance Research, University of Minnesota,
Minneapolis, Minn (P.J.B.); Athinoula A. Martinos Center for Biomedical Imaging,
Harvard University, Charlestown, Mass (B.A.B., J.K.C.); Center for Research and
Innovation, American College of Radiology, Philadelphia, Pa (M.A.B.); Department
of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung
Hom, Kowloon, Hong Kong SAR (X.T., J.Z., J.C.); Department of Radiology,
University of Pennsylvania, Philadelphia, Pa (D.K., E.A.C., W.C.M.); Department
of Radiology, Columbia University Medical Center, New York, NY (M.L., R.H.);
Division of Medical Image Computing, German Cancer Research Center, Heidelberg,
Germany (O.J.P.V., K.M.H.); Department of Radiation Oncology, Heidelberg
University Hospital, Heidelberg, Germany (K.M.H.); IBM Research-Israel, Haifa
University Campus, Mount Carmel, Haifa, Israel (S.R.C., T.T., M.O.F.);
University of Maryland Medical Intelligent Imaging (UM2ii) Center and Department
of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of
Medicine, Baltimore, Md (V.S.P.); The Russell H. Morgan Department of Radiology
and Radiological Science, The Johns Hopkins School of Medicine, Sidney Kimmel
Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md
(V.S.P., M.A.J.); Department of Diagnostic and Interventional Imaging, UT Health
at Houston, Houston, Tex (M.A.J.); Department of Radiological Sciences, David
Geffen School of Medicine, University of California, Los Angeles, Calif (R.Y.,
K.S.); Department of Bioengineering, Henry Samueli School of Engineering,
University of California, Los Angeles, Calif (R.Y., K.S.); Livestrong Cancer
Institutes (J.C.D., T.E.Y.), Departments of Biomedical Engineering, Diagnostic
Medicine, and Oncology (T.E.Y.), and The Oden Institute for Computational
Engineering and Sciences, The University of Texas at Austin, Austin, Tex
(J.C.D., T.E.Y.); Department of Imaging Physics, The University of Texas MD
Anderson Cancer Center, Houston, Tex (T.E.Y.); and Department of Radiology,
University of Michigan, Ann Arbor, Mich (T.L.C.)
| |
|
11
|
Schacherer DP, Herrmann MD, Clunie DA, Höfener H, Clifford W, Longabaugh WJR, Pieper S, Kikinis R, Fedorov A, Homeyer A. The NCI Imaging Data Commons as a platform for reproducible research in computational pathology. Comput Methods Programs Biomed 2023; 242:107839. [PMID: 37832430] [PMCID: PMC10841477] [DOI: 10.1016/j.cmpb.2023.107839]
Abstract
BACKGROUND AND OBJECTIVES: Reproducibility is a major challenge in developing machine learning (ML)-based solutions in computational pathology (CompPath). The NCI Imaging Data Commons (IDC) provides >120 cancer image collections according to the FAIR principles and is designed to be used with cloud ML services. Here, we explore its potential to facilitate reproducibility in CompPath research. METHODS: Using the IDC, we implemented two experiments in which a representative ML-based method for classifying lung tumor tissue was trained and/or evaluated on different datasets. To assess reproducibility, the experiments were run multiple times with separate but identically configured instances of common ML services. RESULTS: The results of different runs of the same experiment were reproducible to a large extent. However, we observed occasional, small variations in AUC values, indicating a practical limit to reproducibility. CONCLUSIONS: We conclude that the IDC facilitates approaching the reproducibility limit of CompPath research (i) by enabling researchers to reuse exactly the same datasets and (ii) by integrating with cloud ML services so that experiments can be run in identically configured computing environments.
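The run-to-run comparison described in this abstract reduces to checking whether AUC values from identically configured runs agree within a small tolerance. A minimal sketch, with made-up scores standing in for two runs' classifier outputs (the rank-based AUC formula is standard; nothing here is the authors' code):

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise comparisons (Mann-Whitney U)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two "runs" of the same experiment whose scores differ slightly, e.g. from
# non-deterministic GPU reductions (illustrative numbers, not the paper's).
labels = [1, 1, 1, 0, 0, 0]
run_a = [0.91, 0.78, 0.55, 0.60, 0.32, 0.11]
run_b = [0.90, 0.79, 0.54, 0.61, 0.33, 0.10]

auc_a, auc_b = auc(run_a, labels), auc(run_b, labels)
print(abs(auc_a - auc_b) < 0.01)  # small AUC variation: reproducible in practice
```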
Affiliation(s)
- Daniela P Schacherer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Markus D Herrmann
- Department of Pathology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Henning Höfener
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Andrey Fedorov
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- André Homeyer
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany
12
Andrearczyk V, Oreiller V, Boughdad S, Le Rest CC, Tankyevych O, Elhalawani H, Jreige M, Prior JO, Vallières M, Visvikis D, Hatt M, Depeursinge A. Automatic Head and Neck Tumor segmentation and outcome prediction relying on FDG-PET/CT images: Findings from the second edition of the HECKTOR challenge. Med Image Anal 2023; 90:102972. [PMID: 37742374 DOI: 10.1016/j.media.2023.102972]
Abstract
By focusing on metabolic and morphological tissue properties respectively, FluoroDeoxyGlucose (FDG)-Positron Emission Tomography (PET) and Computed Tomography (CT) provide complementary and synergistic information for cancerous lesion delineation and characterization (e.g. for outcome prediction), in addition to the usual clinical variables. This is especially true in Head and Neck Cancer (HNC). The goal of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge was to develop and compare modern image analysis methods to best extract and leverage this information automatically. We present here the post-analysis of the second edition of HECKTOR, held at the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021. The scope of the challenge was substantially expanded compared to the first edition by providing a larger population (adding patients from a new clinical center) and proposing an additional task to the challengers, namely the prediction of Progression-Free Survival (PFS). To this end, participants were given access to a training set of 224 cases from 5 different centers, each with a pre-treatment FDG-PET/CT scan and clinical variables. Their methods were subsequently evaluated on a held-out test set of 101 cases from two centers. For the segmentation task (Task 1), the ranking was based on a Borda count of the teams' ranks according to two metrics: mean Dice Similarity Coefficient (DSC) and median Hausdorff Distance at the 95th percentile (HD95). For the PFS prediction task, challengers could use the tumor contours provided by experts (Task 3) or rely on their own (Task 2). The ranking was obtained according to the Concordance index (C-index) calculated on the predicted risk scores. A total of 103 teams registered for the challenge, for a total of 448 submissions and 29 papers. The best method in the segmentation task obtained an average DSC of 0.759, and the best predictions of PFS obtained a C-index of 0.717 (without relying on the provided contours) and 0.698 (using the expert contours). An interesting finding was that the best PFS predictions were reached with deep learning approaches (with or without explicit tumor segmentation; 4 of the 5 best-ranked) rather than with standard radiomics methods using handcrafted features extracted from delineated tumors, and by exploiting alternative tumor contours (automated and/or larger volumes encompassing surrounding tissues) rather than the expert contours. This second edition of the challenge confirmed the promising performance of fully automated primary tumor delineation in PET/CT images of HNC patients, although there is still margin for improvement in some difficult cases. For the first time, outcome prediction was also addressed, and the best methods reached relatively good performance (C-index above 0.7). Both results constitute another step toward large-scale outcome prediction studies in HNC.
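The Task 1 ranking scheme described above (a Borda count over per-metric ranks) can be sketched as follows; the team names and scores are invented for illustration. Note that a higher DSC is better while a lower HD95 is better:

```python
# Borda-style ranking: rank teams on each metric, then sum the ranks
# (lowest total wins). Scores below are illustrative, not from the challenge.
teams = {
    "team_a": {"dsc": 0.759, "hd95": 2.8},
    "team_b": {"dsc": 0.741, "hd95": 3.1},
    "team_c": {"dsc": 0.720, "hd95": 4.5},
}

def ranks(values, higher_is_better):
    """Map each team to its 1-based rank on one metric."""
    order = sorted(values, key=values.get, reverse=higher_is_better)
    return {team: i + 1 for i, team in enumerate(order)}

dsc_rank = ranks({t: m["dsc"] for t, m in teams.items()}, higher_is_better=True)
hd95_rank = ranks({t: m["hd95"] for t, m in teams.items()}, higher_is_better=False)

borda = {t: dsc_rank[t] + hd95_rank[t] for t in teams}
leaderboard = sorted(teams, key=borda.get)
print(leaderboard)  # ['team_a', 'team_b', 'team_c']
```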
Affiliation(s)
- Vincent Andrearczyk
- Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland.
- Valentin Oreiller
- Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Catherine Cheze Le Rest
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France; Poitiers University Hospital, nuclear medicine, Poitiers, France
- Olena Tankyevych
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France; Poitiers University Hospital, nuclear medicine, Poitiers, France
- Hesham Elhalawani
- Cleveland Clinic Foundation, Department of Radiation Oncology, Cleveland, OH, United States of America
- Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Adrien Depeursinge
- Institute of Informatics, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
13
Wolf D, Payer T, Lisson CS, Lisson CG, Beer M, Götz M, Ropinski T. Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging. Sci Rep 2023; 13:20260. [PMID: 37985685 PMCID: PMC10662445 DOI: 10.1038/s41598-023-46433-0]
Abstract
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis. Training such deep learning models requires large and accurate datasets, with annotations for all training samples. However, in the medical imaging domain, annotated datasets for specific tasks are often small due to the high complexity of annotations, limited access, or the rarity of diseases. To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning. After pre-training, small annotated datasets are sufficient to fine-tune the models for a specific task. The most popular self-supervised pre-training approaches in medical imaging are based on contrastive learning. However, recent studies in natural image processing indicate a strong potential for masked autoencoder approaches. Our work compares state-of-the-art contrastive learning methods with the recently introduced masked autoencoder approach "SparK" for convolutional neural networks (CNNs) on medical images. To this end, we pre-train on a large unannotated CT image dataset and fine-tune on several CT classification tasks. Given the challenge of obtaining sufficient annotated training data in medical imaging, it is of particular interest to evaluate how the self-supervised pre-training methods perform when fine-tuning on small datasets. By gradually reducing the training dataset size for fine-tuning, we find that the reduction has different effects depending on the type of pre-training chosen. The SparK pre-training method is more robust to the training dataset size than the contrastive methods. Based on our results, we recommend SparK pre-training for medical imaging tasks with only small annotated datasets.
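The masked-autoencoder idea compared in this study can be sketched at a conceptual level: hide a fraction of the input patches, reconstruct them with a model, and compute the loss on the masked patches only. This is a toy illustration of the general principle, not the SparK implementation; the patch values and the "model" output are stand-ins:

```python
import random

def mask_patches(patches, mask_ratio=0.6, rng=random.Random(0)):
    """Split patches into visible ones and the indices of masked ones."""
    n_masked = int(len(patches) * mask_ratio)
    masked = set(rng.sample(range(len(patches)), n_masked))
    visible = [p for i, p in enumerate(patches) if i not in masked]
    return visible, sorted(masked)

def reconstruction_loss(pred, target, masked_idx):
    """Mean squared error, evaluated on the masked patches only."""
    errs = [(pred[i] - target[i]) ** 2 for i in masked_idx]
    return sum(errs) / len(errs)

patches = [float(i) for i in range(10)]   # a "flattened" toy image
visible, masked_idx = mask_patches(patches)
pred = [p + 0.1 for p in patches]         # stand-in for a model's reconstruction
loss = reconstruction_loss(pred, patches, masked_idx)
print(len(visible), len(masked_idx), round(loss, 3))  # 4 6 0.01
```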
Affiliation(s)
- Daniel Wolf
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany.
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany.
- Tristan Payer
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
- Catharina Silvia Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Christoph Gerhard Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Meinrad Beer
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Michael Götz
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Timo Ropinski
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
14
Burkert N, Roy S, Häusler M, Wuttke D, Müller S, Wiemer J, Hollmann H, Oldrati M, Ramirez-Franco J, Benkert J, Fauler M, Duda J, Goaillard JM, Pötschke C, Münchmeyer M, Parlato R, Liss B. Deep learning-based image analysis identifies a DAT-negative subpopulation of dopaminergic neurons in the lateral Substantia nigra. Commun Biol 2023; 6:1146. [PMID: 37950046 PMCID: PMC10638391 DOI: 10.1038/s42003-023-05441-6]
Abstract
Here we present a deep learning-based image analysis platform (DLAP), tailored to autonomously quantify cell numbers and fluorescence signals within cellular compartments, derived from RNAscope or immunohistochemistry. We utilised DLAP to analyse subtypes of tyrosine hydroxylase (TH)-positive dopaminergic midbrain neurons in mouse and human brain sections. These neurons modulate complex behaviour and are differentially affected in Parkinson's and other diseases. DLAP allows the analysis of large cell numbers and facilitates the identification of small cellular subpopulations. Using DLAP, we identified a small subpopulation of TH-positive neurons (~5%), mainly located in the very lateral Substantia nigra (SN), that was immunofluorescence-negative for the plasmalemmal dopamine transporter (DAT), with ~40% smaller cell bodies. These neurons were negative for aldehyde dehydrogenase 1A1, with a lower co-expression rate for dopamine-D2-autoreceptors, but a ~7-fold higher likelihood of calbindin-d28k co-expression (~70%). These results have important implications, as DAT is crucial for dopamine signalling and is commonly used as a marker for dopaminergic SN neurons.
Affiliation(s)
- Nicole Burkert
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Shoumik Roy
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Max Häusler
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Sonja Müller
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Johanna Wiemer
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Helene Hollmann
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Marvin Oldrati
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Jorge Ramirez-Franco
- UMR_S 1072, Aix Marseille Université, INSERM, Faculté de Médecine Secteur Nord, Marseille, France
- INT, Aix Marseille Université, CNRS, Campus Santé Timone, Marseille, France
- Julia Benkert
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Michael Fauler
- Institute of General Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Johanna Duda
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Jean-Marc Goaillard
- UMR_S 1072, Aix Marseille Université, INSERM, Faculté de Médecine Secteur Nord, Marseille, France
- INT, Aix Marseille Université, CNRS, Campus Santé Timone, Marseille, France
- Christina Pötschke
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Moritz Münchmeyer
- Wolution GmbH & Co. KG, 82152, Munich, Germany
- Department of Physics, University of Wisconsin-Madison, Madison, WI, USA
- Rosanna Parlato
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Division of Neurodegenerative Disorders, Department of Neurology, Medical Faculty Mannheim, Mannheim Center for Translational Neurosciences, Heidelberg University, 68167, Mannheim, Germany
- Birgit Liss
- Institute of Applied Physiology, Medical Faculty, Ulm University, 89081, Ulm, Germany
- Linacre College & New College, Oxford University, OX1 2JD, Oxford, UK
15
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. [PMID: 37802058 DOI: 10.1055/a-2157-6670]
Abstract
BACKGROUND: Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS: The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION: In this review, we describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges associated with it and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS: ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
- Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
- Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
16
Lösel PD, Monchanin C, Lebrun R, Jayme A, Relle JJ, Devaud JM, Heuveline V, Lihoreau M. Natural variability in bee brain size and symmetry revealed by micro-CT imaging and deep learning. PLoS Comput Biol 2023; 19:e1011529. [PMID: 37782674 PMCID: PMC10569549 DOI: 10.1371/journal.pcbi.1011529]
Abstract
Analysing large numbers of brain samples can reveal minor, but statistically and biologically relevant variations in brain morphology that provide critical insights into animal behaviour, ecology and evolution. So far, however, such analyses have required extensive manual effort, which considerably limits the scope for comparative research. Here we used micro-CT imaging and deep learning to perform automated analyses of 3D image data from 187 honey bee and bumblebee brains. We revealed strong inter-individual variations in total brain size that are consistent across colonies and species, and may underpin behavioural variability central to complex social organisations. In addition, the bumblebee dataset showed a significant level of lateralization in optic and antennal lobes, providing a potential explanation for reported variations in visual and olfactory learning. Our fast, robust and user-friendly approach holds considerable promise for carrying out large-scale quantitative neuroanatomical comparisons across a wider range of animals. Ultimately, this will help address fundamental unresolved questions related to the evolution of animal brains and cognition.
Affiliation(s)
- Philipp D. Lösel
- Engineering Mathematics and Computing Lab (EMCL), Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification (DMQ), Heidelberg Institute for Theoretical Studies (HITS), Heidelberg, Germany
- Department of Materials Physics, Research School of Physics, The Australian National University, Canberra, Australia
- Coline Monchanin
- Research Center on Animal Cognition (CRCA), Center for Integrative Biology (CBI); CNRS, University Paul Sabatier – Toulouse III, Toulouse, France
- Department of Biological Sciences, Macquarie University, Sydney, Australia
- Renaud Lebrun
- Institut des Sciences de l'Evolution de Montpellier, CC64, Université de Montpellier, Montpellier, France
- BioCampus, Montpellier Ressources Imagerie, CNRS, INSERM, Université de Montpellier, Montpellier, France
- Alejandra Jayme
- Engineering Mathematics and Computing Lab (EMCL), Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification (DMQ), Heidelberg Institute for Theoretical Studies (HITS), Heidelberg, Germany
- Jacob J. Relle
- Engineering Mathematics and Computing Lab (EMCL), Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification (DMQ), Heidelberg Institute for Theoretical Studies (HITS), Heidelberg, Germany
- Jean-Marc Devaud
- Research Center on Animal Cognition (CRCA), Center for Integrative Biology (CBI); CNRS, University Paul Sabatier – Toulouse III, Toulouse, France
- Vincent Heuveline
- Engineering Mathematics and Computing Lab (EMCL), Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification (DMQ), Heidelberg Institute for Theoretical Studies (HITS), Heidelberg, Germany
- Heidelberg University Computing Centre (URZ), Heidelberg, Germany
- Mathieu Lihoreau
- Research Center on Animal Cognition (CRCA), Center for Integrative Biology (CBI); CNRS, University Paul Sabatier – Toulouse III, Toulouse, France
17
Kulik SD, Douw L, van Dellen E, Steenwijk MD, Geurts JJG, Stam CJ, Hillebrand A, Schoonheim MM, Tewarie P. Comparing individual and group-level simulated neurophysiological brain connectivity using the Jansen and Rit neural mass model. Netw Neurosci 2023; 7:950-965. [PMID: 37781149 PMCID: PMC10473283 DOI: 10.1162/netn_a_00303]
Abstract
Computational models are often used to assess how functional connectivity (FC) patterns emerge from neuronal population dynamics and anatomical brain connections. It remains unclear whether the commonly used group-averaged data can predict individual FC patterns. The Jansen and Rit neural mass model was employed, where masses were coupled using individual structural connectivity (SC). Simulated FC was correlated to individual magnetoencephalography-derived empirical FC. FC was estimated using phase-based (phase lag index (PLI), phase locking value (PLV)), and amplitude-based (amplitude envelope correlation (AEC)) metrics to analyze their goodness of fit for individual predictions. Individual FC predictions were compared against group-averaged FC predictions, and we tested whether SC of a different participant could equally well predict participants' FC patterns. The AEC provided a better match between individually simulated and empirical FC than phase-based metrics. Correlations between simulated and empirical FC were higher using individual SC compared to group-averaged SC. Using SC from other participants resulted in similar correlations between simulated and empirical FC compared to using participants' own SC. This work underlines the added value of FC simulations using individual instead of group-averaged SC for this particular computational model and could aid in a better understanding of mechanisms underlying individual functional network trajectories.
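Of the metrics named above, the phase lag index illustrates the phase-based family well: it is the absolute mean sign of the (wrapped) phase difference between two signals over time. A toy sketch with hand-picked phase series, not MEG data:

```python
import math

def pli(phase_a, phase_b):
    """Phase lag index: |mean over samples of sign(phase difference)|."""
    signs = []
    for pa, pb in zip(phase_a, phase_b):
        d = math.sin(pa - pb)  # sin() gives the sign of the wrapped difference
        signs.append(0 if d == 0 else math.copysign(1, d))
    return abs(sum(signs) / len(signs))

# A consistent (always positive) phase lag gives PLI = 1 ...
lagged = [0.1, 0.2, 0.3, 0.4]
base = [0.0, 0.1, 0.2, 0.3]
p_consistent = pli(lagged, base)

# ... while a lag that flips sign from sample to sample gives PLI = 0.
jitter = [0.1, -0.1, 0.1, -0.1]
zeros = [0.0, 0.0, 0.0, 0.0]
p_flipping = pli(jitter, zeros)

print(p_consistent, p_flipping)  # 1.0 0.0
```

This insensitivity to zero-lag phase differences is what distinguishes the PLI from the PLV, which also counts consistent zero-lag coupling.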
Affiliation(s)
- S. D. Kulik
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy & Neuroscience, Amsterdam Neuroscience, Amsterdam The Netherlands
- Amsterdam UMC, Vrije Universiteit Amsterdam, Brain Tumour Center Amsterdam, Amsterdam, The Netherlands
- Amsterdam UMC, Vrije Universiteit Amsterdam, MS Center Amsterdam, Amsterdam, The Netherlands
- L. Douw
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy & Neuroscience, Amsterdam Neuroscience, Amsterdam The Netherlands
- Amsterdam UMC, Vrije Universiteit Amsterdam, Brain Tumour Center Amsterdam, Amsterdam, The Netherlands
- E. van Dellen
- University Medical Center Utrecht, Department of Psychiatry, Brain Center, Utrecht, The Netherlands
- M. D. Steenwijk
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy & Neuroscience, Amsterdam Neuroscience, Amsterdam The Netherlands
- Amsterdam UMC, Vrije Universiteit Amsterdam, MS Center Amsterdam, Amsterdam, The Netherlands
- J. J. G. Geurts
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy & Neuroscience, Amsterdam Neuroscience, Amsterdam The Netherlands
- Amsterdam UMC, Vrije Universiteit Amsterdam, MS Center Amsterdam, Amsterdam, The Netherlands
- C. J. Stam
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Neurology and Department of Clinical Neurophysiology and MEG Center, Amsterdam Neuroscience, Amsterdam The Netherlands
- A. Hillebrand
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Neurology and Department of Clinical Neurophysiology and MEG Center, Amsterdam Neuroscience, Amsterdam The Netherlands
- M. M. Schoonheim
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy & Neuroscience, Amsterdam Neuroscience, Amsterdam The Netherlands
- Amsterdam UMC, Vrije Universiteit Amsterdam, MS Center Amsterdam, Amsterdam, The Netherlands
- P. Tewarie
- Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Neurology and Department of Clinical Neurophysiology and MEG Center, Amsterdam Neuroscience, Amsterdam The Netherlands
18
Armato SG, Drukker K, Hadjiiski L. AI in medical imaging grand challenges: translation from competition to research benefit and patient care. Br J Radiol 2023; 96:20221152. [PMID: 37698542 PMCID: PMC10546459 DOI: 10.1259/bjr.20221152]
Abstract
Artificial intelligence (AI), in one form or another, has been a part of medical imaging for decades. The recent evolution of AI into approaches such as deep learning has dramatically accelerated the application of AI across a wide range of radiologic settings. Despite the promises of AI, developers and users of AI technology must be fully aware of its potential biases and pitfalls, and this knowledge must be incorporated throughout the AI system development pipeline that involves training, validation, and testing. Grand challenges offer an opportunity to advance the development of AI methods for targeted applications and provide a mechanism for both directing and facilitating the development of AI systems. In the process, a grand challenge centralizes (with the challenge organizers) the burden of providing a valid benchmark test set to assess performance and generalizability of participants' models and the collection and curation of image metadata, clinical/demographic information, and the required reference standard. The most relevant grand challenges are those designed to maximize the open-science nature of the competition, with code and trained models deposited for future public access. The ultimate goal of AI grand challenges is to foster the translation of AI systems from competition to research benefit and patient care. Rather than reference the many medical imaging grand challenges that have been organized by groups such as MICCAI, RSNA, AAPM, and grand-challenge.org, this review assesses the role of grand challenges in promoting AI technologies for research advancement and for eventual clinical implementation, including their promises and limitations.
Affiliation(s)
- Samuel G Armato
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Karen Drukker
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
19
Petkidis A, Andriasyan V, Greber UF. Machine learning for cross-scale microscopy of viruses. Cell Rep Methods 2023; 3:100557. [PMID: 37751685 PMCID: PMC10545915 DOI: 10.1016/j.crmeth.2023.100557]
Abstract
Despite advances in virological sciences and antiviral research, viruses continue to emerge, circulate, and threaten public health. We still lack a comprehensive understanding of how cells and individuals remain susceptible to infectious agents. This deficiency is in part due to the complexity of viruses, including the cell states controlling virus-host interactions. Microscopy samples distinct cellular infection stages in a multi-parametric, time-resolved manner at molecular resolution and is increasingly enhanced by machine learning and deep learning. Here we discuss how state-of-the-art artificial intelligence (AI) augments light and electron microscopy and advances virological research of cells. We describe current procedures for image denoising, object segmentation, tracking, classification, and super-resolution and showcase examples of how AI has improved the acquisition and analyses of microscopy data. The power of AI-enhanced microscopy will continue to help unravel virus infection mechanisms, develop antiviral agents, and improve viral vectors.
Affiliation(s)
- Anthony Petkidis
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland.
- Vardan Andriasyan
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Urs F Greber
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
20
Kok J, Shcherbakova YM, Schlösser TPC, Seevinck PR, van der Velden TA, Castelein RM, Ito K, van Rietbergen B. Automatic generation of subject-specific finite element models of the spine from magnetic resonance images. Front Bioeng Biotechnol 2023; 11:1244291. [PMID: 37731762 PMCID: PMC10508183 DOI: 10.3389/fbioe.2023.1244291]
Abstract
The generation of subject-specific finite element models of the spine is generally a time-consuming process based on computed tomography (CT) images, where scanning exposes subjects to harmful radiation. In this study, a method is presented for the automatic generation of spine finite element models using images from a single magnetic resonance (MR) sequence. The thoracic and lumbar spine of eight adult volunteers was imaged using a 3D multi-echo-gradient-echo sagittal MR sequence. A deep-learning method was used to generate synthetic CT images from the MR images. A pre-trained deep-learning network was used for the automatic segmentation of vertebrae from the synthetic CT images. Another deep-learning network was trained for the automatic segmentation of intervertebral discs from the MR images. The automatic segmentations were validated against manual segmentations for two subjects, one with scoliosis, and another with a spine implant. A template mesh of the spine was registered to the segmentations in three steps using a Bayesian coherent point drift algorithm. First, rigid registration was applied to the complete spine. Second, non-rigid registration was used for the individual discs and vertebrae. Third, the complete spine was non-rigidly registered to the individually registered discs and vertebrae. Comparison of the automatic and manual segmentations led to Dice scores of 0.93-0.96 for all vertebrae and discs. The lowest Dice score was for the disc at the level of the implant, where artifacts led to under-segmentation. The mean distance between the morphed meshes and the segmentations was below 1 mm. In conclusion, the presented method can be used to automatically generate accurate subject-specific spine models.
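The Dice score used above to validate the automatic segmentations is the standard overlap measure 2|A∩B| / (|A| + |B|). A toy sketch over voxel index sets (not real spine masks):

```python
# Dice similarity coefficient between two segmentation masks represented as
# sets of voxel coordinates. The masks below are illustrative toy sets.
def dice(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

manual = {(0, 0), (0, 1), (1, 0), (1, 1)}      # manually labeled voxels
automatic = {(0, 0), (0, 1), (1, 0), (2, 0)}   # automatic segmentation
d = dice(automatic, manual)
print(d)  # 0.75
```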
Affiliation(s)
- Joeri Kok
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Tom P. C. Schlösser
- Department of Orthopaedic Surgery, University Medical Center Utrecht, Utrecht, Netherlands
- Peter R. Seevinck
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, Netherlands
- MRIguidance BV, Utrecht, Netherlands
- Tijl A. van der Velden
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, Netherlands
- MRIguidance BV, Utrecht, Netherlands
- René M. Castelein
- Department of Orthopaedic Surgery, University Medical Center Utrecht, Utrecht, Netherlands
- Keita Ito
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Department of Orthopaedic Surgery, University Medical Center Utrecht, Utrecht, Netherlands
- Bert van Rietbergen
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
21
Boone L, Biparva M, Mojiri Forooshani P, Ramirez J, Masellis M, Bartha R, Symons S, Strother S, Black SE, Heyn C, Martel AL, Swartz RH, Goubran M. ROOD-MRI: Benchmarking the robustness of deep learning segmentation models to out-of-distribution and corrupted data in MRI. Neuroimage 2023; 278:120289. [PMID: 37495197 DOI: 10.1016/j.neuroimage.2023.120289] [Received: 02/16/2023] [Revised: 04/26/2023] [Accepted: 07/20/2023] [Indexed: 07/28/2023]
Abstract
Deep artificial neural networks (DNNs) have moved to the forefront of medical image analysis due to their success in classification, segmentation, and detection challenges. A principal challenge in large-scale deployment of DNNs in neuroimage analysis is the potential for site-to-site shifts in signal-to-noise ratio, contrast, resolution, and presence of artifacts due to variations in scanners and acquisition protocols. DNNs are notoriously susceptible to such distribution shifts in computer vision. Currently, there are no benchmarking platforms or frameworks to assess the robustness of new and existing models to specific distribution shifts in MRI, and accessible multi-site benchmarking datasets are still scarce or task-specific. To address these limitations, we propose ROOD-MRI: a novel platform for benchmarking the Robustness of DNNs to Out-Of-Distribution (OOD) data, corruptions, and artifacts in MRI. This flexible platform provides modules for generating benchmarking datasets using transforms that model distribution shifts in MRI, implementations of newly derived benchmarking metrics for image segmentation, and examples for using the methodology with new models and tasks. We apply our methodology to hippocampus, ventricle, and white matter hyperintensity segmentation in several large studies, providing the hippocampus dataset as a publicly available benchmark. By evaluating modern DNNs on these datasets, we demonstrate that they are highly susceptible to distribution shifts and corruptions in MRI. We show that while data augmentation strategies can substantially improve robustness to OOD data for anatomical segmentation tasks, modern DNNs using augmentation still lack robustness in more challenging lesion-based segmentation tasks. We finally benchmark U-Nets and vision transformers, finding architecture-specific susceptibility to particular classes of transforms. The presented open-source platform enables generating new benchmarking datasets and comparing models to study design choices that improve robustness to OOD data and corruptions in MRI.
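The robustness sweep described above can be sketched as: apply a corruption transform at increasing severity and track the degradation of a segmentation metric. The transform, severity scale, and thresholding "model" below are illustrative stand-ins, not the ROOD-MRI API:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, severity):
    """Corruption transform: additive Gaussian noise scaled by severity level."""
    return img + rng.normal(0.0, 0.05 * severity, img.shape)

def segment(img, threshold=0.5):
    """Stand-in for a trained segmentation DNN: naive intensity thresholding."""
    return img > threshold

def dice(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Synthetic "clean" image with a bright square as the target structure
clean = np.zeros((64, 64)); clean[20:40, 20:40] = 1.0
truth = clean > 0.5

scores = []
for severity in range(6):               # severity 0 = in-distribution baseline
    corrupted = add_gaussian_noise(clean, severity)
    scores.append(dice(segment(corrupted), truth))
    print(f"severity {severity}: Dice = {scores[-1]:.3f}")
```

A benchmarking platform of this kind wraps the same loop around a library of MRI-specific transforms (motion, ghosting, bias field, downsampling) and real trained models.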
Affiliation(s)
- Lyndon Boone
- Department of Medical Biophysics, University of Toronto, Toronto, Canada; Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada.
- Mahdi Biparva
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Canadian Partnership for Stroke Recovery, Heart and Stroke Foundation, Toronto, Canada
- Parisa Mojiri Forooshani
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Canadian Partnership for Stroke Recovery, Heart and Stroke Foundation, Toronto, Canada
- Joel Ramirez
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Canadian Partnership for Stroke Recovery, Heart and Stroke Foundation, Toronto, Canada
- Mario Masellis
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Canadian Partnership for Stroke Recovery, Heart and Stroke Foundation, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada
- Robert Bartha
- Department of Medical Biophysics, Western University, London, Canada; Robarts Research Institute, Western University, London, Canada
- Sean Symons
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Imaging, University of Toronto, Toronto, Canada
- Stephen Strother
- Department of Medical Biophysics, University of Toronto, Toronto, Canada; Rotman Research Institute, Baycrest, Toronto, Canada
- Sandra E Black
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Canadian Partnership for Stroke Recovery, Heart and Stroke Foundation, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada
- Chris Heyn
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Imaging, University of Toronto, Toronto, Canada
- Anne L Martel
- Department of Medical Biophysics, University of Toronto, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada
- Richard H Swartz
- Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Canadian Partnership for Stroke Recovery, Heart and Stroke Foundation, Toronto, Canada; Department of Medicine, University of Toronto, Toronto, Canada
- Maged Goubran
- Department of Medical Biophysics, University of Toronto, Toronto, Canada; Hurvitz Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Canadian Partnership for Stroke Recovery, Heart and Stroke Foundation, Toronto, Canada.
22
Jain Y, Godwin LL, Joshi S, Mandarapu S, Le T, Lindskog C, Lundberg E, Börner K. Segmenting functional tissue units across human organs using community-driven development of generalizable machine learning algorithms. Nat Commun 2023; 14:4656. [PMID: 37537179 PMCID: PMC10400613 DOI: 10.1038/s41467-023-40291-0] [Received: 01/03/2023] [Accepted: 07/21/2023] [Indexed: 08/05/2023]
Abstract
The development of a reference atlas of the healthy human body requires automated image segmentation of major anatomical structures across multiple organs based on spatial bioimages generated from various sources with differences in sample preparation. We present the setup and results of the Hacking the Human Body machine learning algorithm development competition hosted by the Human Biomolecular Atlas (HuBMAP) and the Human Protein Atlas (HPA) teams on the Kaggle platform. We create a dataset containing 880 histology images with 12,901 segmented structures, engaging 1175 teams from 78 countries in community-driven, open-science development of machine learning models. Tissue variations in the dataset pose a major challenge to the teams, which they overcome by using color normalization techniques and combining vision transformers with convolutional models. The best model will be productized in the HuBMAP portal to process tissue image datasets at scale in support of Human Reference Atlas construction.
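Color normalization of the kind the teams used can be approximated by matching per-channel image statistics to a reference tile. The sketch below is a deliberately simplified Reinhard-style mean/std matching in RGB (the established method works in LAB space); all arrays are synthetic:

```python
import numpy as np

def match_channel_stats(img: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Shift/scale each channel of `img` to match `ref`'s mean and std."""
    img_f = img.astype(float)
    ref_f = ref.astype(float)
    out = np.empty_like(img_f)
    for c in range(img_f.shape[-1]):
        mu_i, sd_i = img_f[..., c].mean(), img_f[..., c].std()
        mu_r, sd_r = ref_f[..., c].mean(), ref_f[..., c].std()
        # Standardize the source channel, then rescale to the reference stats
        out[..., c] = (img_f[..., c] - mu_i) / (sd_i + 1e-8) * sd_r + mu_r
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
tile = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)       # stained tile
reference = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)  # target palette
normalized = match_channel_stats(tile, reference)
```

In practice, normalizing all training tiles toward one reference palette reduces the stain variation a model must absorb.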
Affiliation(s)
- Yashvardhan Jain
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA.
- Leah L Godwin
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Sripad Joshi
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Shriya Mandarapu
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA
- Trang Le
- Science for Life Laboratory, School of Engineering Sciences in Chemistry, Biotechnology and Health, KTH - Royal Institute of Technology, Stockholm, Sweden
- Department of Bioengineering, Stanford University, Stanford, CA, 94305, USA
- Cecilia Lindskog
- Department of Immunology, Genetics and Pathology, Division of Cancer Precision Medicine, Uppsala University, Uppsala, Sweden
- Emma Lundberg
- Science for Life Laboratory, School of Engineering Sciences in Chemistry, Biotechnology and Health, KTH - Royal Institute of Technology, Stockholm, Sweden
- Department of Bioengineering, Stanford University, Stanford, CA, 94305, USA
- Department of Pathology, Stanford University, Stanford, CA, 94305, USA
- Chan Zuckerberg Biohub, San Francisco, CA, 94305, USA
- Katy Börner
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, 47408, USA.
23
Bhandary S, Kuhn D, Babaiee Z, Fechter T, Benndorf M, Zamboglou C, Grosu AL, Grosu R. Investigation and benchmarking of U-Nets on prostate segmentation tasks. Comput Med Imaging Graph 2023; 107:102241. [PMID: 37201475 DOI: 10.1016/j.compmedimag.2023.102241] [Received: 11/30/2022] [Revised: 05/03/2023] [Accepted: 05/03/2023] [Indexed: 05/20/2023]
Abstract
In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer, because individual patient biology is unique and a one-size-fits-all approach is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has significantly increased in the field of medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. These models not only reduce workload but can also offer an unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed data sources and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable source for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
Affiliation(s)
- Shrajan Bhandary
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria.
- Dejan Kuhn
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Zahra Babaiee
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany
- Constantinos Zamboglou
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; German Oncology Center, European University, Limassol, 4108, Cyprus
- Anca-Ligia Grosu
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany
- Radu Grosu
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria; Department of Computer Science, State University of New York at Stony Brook, NY, 11794, USA
24
Manubens-Gil L, Zhou Z, Chen H, Ramanathan A, Liu X, Liu Y, Bria A, Gillette T, Ruan Z, Yang J, Radojević M, Zhao T, Cheng L, Qu L, Liu S, Bouchard KE, Gu L, Cai W, Ji S, Roysam B, Wang CW, Yu H, Sironi A, Iascone DM, Zhou J, Bas E, Conde-Sousa E, Aguiar P, Li X, Li Y, Nanda S, Wang Y, Muresan L, Fua P, Ye B, He HY, Staiger JF, Peter M, Cox DN, Simonneau M, Oberlaender M, Jefferis G, Ito K, Gonzalez-Bellido P, Kim J, Rubel E, Cline HT, Zeng H, Nern A, Chiang AS, Yao J, Roskams J, Livesey R, Stevens J, Liu T, Dang C, Guo Y, Zhong N, Tourassi G, Hill S, Hawrylycz M, Koch C, Meijering E, Ascoli GA, Peng H. BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets. Nat Methods 2023; 20:824-835. [PMID: 37069271 DOI: 10.1038/s41592-023-01848-5] [Received: 05/10/2022] [Accepted: 03/14/2023] [Indexed: 04/19/2023]
Abstract
BigNeuron is an open community bench-testing platform with the goal of setting open standards for accurate and fast automatic neuron tracing. We gathered a diverse set of image volumes across several species that is representative of the data obtained in many neuroscience laboratories interested in neuron tracing. Here, we report gold-standard manual annotations generated for a subset of the available imaging datasets and quantify tracing quality for 35 automatic tracing algorithms. The goal of generating such a hand-curated diverse dataset is to advance the development of tracing algorithms and enable generalizable benchmarking. Together with image quality features, we pooled the data in an interactive web application that enables users and developers to perform principal component analysis, t-distributed stochastic neighbor embedding, correlation and clustering, visualization of imaging and tracing data, and benchmarking of automatic tracing algorithms in user-defined data subsets. The image quality metrics explain most of the variance in the data, followed by neuromorphological features related to neuron size. We observed that diverse algorithms can provide complementary information to obtain accurate results and developed a method to iteratively combine methods and generate consensus reconstructions. The consensus trees obtained provide estimates of the ground-truth neuron structure that typically outperform single algorithms in noisy datasets. However, specific algorithms may outperform the consensus tree strategy in specific imaging conditions. Finally, to aid users in predicting the most accurate automatic tracing results without manual annotations for comparison, we used support vector machine regression to predict reconstruction quality given an image volume and a set of automatic tracings.
Affiliation(s)
- Linus Manubens-Gil
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zhi Zhou
- Microsoft Corporation, Redmond, WA, USA
- Arvind Ramanathan
- Computing, Environment and Life Sciences Directorate, Argonne National Laboratory, Lemont, IL, USA
- Yufeng Liu
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Todd Gillette
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Zongcai Ruan
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jian Yang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Ting Zhao
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Li Cheng
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada
- Lei Qu
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Anhui University, Hefei, China
- Kristofer E Bouchard
- Scientific Data Division and Biological Systems and Engineering Division, Lawrence Berkeley National Lab, Berkeley, CA, USA
- Helen Wills Neuroscience Institute and Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, CA, USA
- Lin Gu
- RIKEN AIP, Tokyo, Japan
- Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
- Weidong Cai
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Shuiwang Ji
- Texas A&M University, College Station, TX, USA
- Badrinath Roysam
- Cullen College of Engineering, University of Houston, Houston, TX, USA
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hongchuan Yu
- National Centre for Computer Animation, Bournemouth University, Poole, UK
- Daniel Maxim Iascone
- Department of Neuroscience, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Jie Zhou
- Department of Computer Science, Northern Illinois University, DeKalb, IL, USA
- Eduardo Conde-Sousa
- i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- INEB, Instituto de Engenharia Biomédica, Universidade Do Porto, Porto, Portugal
- Paulo Aguiar
- i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- Xiang Li
- Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yujie Li
- Allen Institute for Brain Science, Seattle, WA, USA
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Sumit Nanda
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Yuan Wang
- Program in Neuroscience, Department of Biomedical Sciences, Florida State University College of Medicine, Tallahassee, FL, USA
- Leila Muresan
- Cambridge Advanced Imaging Centre, University of Cambridge, Cambridge, UK
- Pascal Fua
- Computer Vision Laboratory, EPFL, Lausanne, Switzerland
- Bing Ye
- Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Hai-Yan He
- Department of Biology, Georgetown University, Washington, DC, USA
- Jochen F Staiger
- Institute for Neuroanatomy, University Medical Center Göttingen, Georg-August-University Göttingen, Goettingen, Germany
- Manuel Peter
- Department of Stem Cell and Regenerative Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Daniel N Cox
- Neuroscience Institute, Georgia State University, Atlanta, GA, USA
- Michel Simonneau
- ENS Paris-Saclay, CNRS, CentraleSupélec, LuMIn, Université Paris-Saclay, Gif-sur-Yvette, France
- Marcel Oberlaender
- Max Planck Group: In Silico Brain Sciences, Max Planck Institute for Neurobiology of Behavior - caesar, Bonn, Germany
- Gregory Jefferis
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Department of Zoology, University of Cambridge, Cambridge, UK
- Kei Ito
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Institute for Quantitative Biosciences, University of Tokyo, Tokyo, Japan
- Institute of Zoology, Biocenter Cologne, University of Cologne, Cologne, Germany
- Jinhyun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Edwin Rubel
- Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Hongkui Zeng
- Allen Institute for Brain Science, Seattle, WA, USA
- Aljoscha Nern
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Ann-Shyn Chiang
- Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Jane Roskams
- Allen Institute for Brain Science, Seattle, WA, USA
- Department of Zoology, Life Sciences Institute, University of British Columbia, Vancouver, British Columbia, Canada
- Rick Livesey
- Zayed Centre for Rare Disease Research, UCL Great Ormond Street Institute of Child Health, London, UK
- Janine Stevens
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Tianming Liu
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Chinh Dang
- Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Yike Guo
- Data Science Institute, Imperial College London, London, UK
- Ning Zhong
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Department of Life Science and Informatics, Maebashi Institute of Technology, Maebashi, Japan
- Sean Hill
- Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia.
- Giorgio A Ascoli
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA.
- Hanchuan Peng
- Institute for Brain and Intelligence, Southeast University, Nanjing, China.
25
Huaulmé A, Harada K, Nguyen QM, Park B, Hong S, Choi MK, Peven M, Li Y, Long Y, Dou Q, Kumar S, Lalithkumar S, Hongliang R, Matsuzaki H, Ishikawa Y, Harai Y, Kondo S, Mitsuishi M, Jannin P. PEg TRAnsfer Workflow recognition challenge report: Do multimodal data improve recognition? Comput Methods Programs Biomed 2023; 236:107561. [PMID: 37119774 DOI: 10.1016/j.cmpb.2023.107561] [Received: 04/19/2022] [Revised: 04/06/2023] [Accepted: 04/18/2023] [Indexed: 05/21/2023]
Abstract
BACKGROUND AND OBJECTIVE In order to be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In the past several years, surgical video has been the most commonly used modality for surgical workflow recognition. But with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value. METHODS The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations, which described the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three were related to recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric because it takes class balance into account and is more clinically relevant than a frame-by-frame score. RESULTS Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks). CONCLUSION The improvement of surgical workflow recognition methods using multiple modalities compared with unimodal methods was significant for all teams. However, the longer execution time required for video/kinematic-based methods (compared with kinematic-only methods) must be considered. Indeed, one must ask whether it is wise to increase computing time by 2000 to 20,000% only to increase accuracy by 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
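Balanced accuracy, the core of the AD-Accuracy metric above, averages per-class recall so that frequent workflow phases cannot swamp rare ones; a minimal sketch (the phase labels are illustrative, not from the challenge):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall over the classes present in y_true."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

# Imbalanced toy sequence: 8 'transfer' frames, 2 'idle' frames
y_true = ["transfer"] * 8 + ["idle"] * 2
y_pred = ["transfer"] * 8 + ["idle", "transfer"]
print(balanced_accuracy(y_true, y_pred))  # (8/8 + 1/2) / 2 = 0.75
```

Plain frame-by-frame accuracy on the same toy sequence would be 9/10 = 0.9, which illustrates why a class-balanced score is the more honest measure when some phases are rare.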
Affiliation(s)
- Arnaud Huaulmé
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
- Kanako Harada
- Department of Mechanical Engineering, the University of Tokyo, Tokyo 113-8656, Japan
- Bogyu Park
- VisionAI hutom, Seoul, Republic of Korea
- Yonghao Long
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong
- Ren Hongliang
- National University of Singapore, Singapore, Singapore; The Chinese University of Hong Kong, Hong Kong, Hong Kong
- Hiroki Matsuzaki
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuto Ishikawa
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Yuriko Harai
- National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan
- Mamoru Mitsuishi
- Department of Mechanical Engineering, the University of Tokyo, Tokyo 113-8656, Japan
- Pierre Jannin
- Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
26
Watt A, Lee J, Toews M, Gilardino MS. Smartphone Integration of Artificial Intelligence for Automated Plagiocephaly Diagnosis. Plast Reconstr Surg Glob Open 2023; 11:e4985. [PMID: 37197011 PMCID: PMC10184988 DOI: 10.1097/gox.0000000000004985] [Received: 01/24/2023] [Accepted: 03/17/2023] [Indexed: 05/19/2023]
Abstract
Positional plagiocephaly is a pediatric condition with important cosmetic implications affecting ∼40% of infants under 12 months of age. Early diagnosis and treatment initiation is imperative in achieving satisfactory outcomes; improved diagnostic modalities are needed to support this goal. This study aimed to determine whether a smartphone-based artificial intelligence tool could diagnose positional plagiocephaly. Methods A prospective validation study was conducted at a large tertiary care center with two recruitment sites: (1) newborn nursery, (2) pediatric craniofacial surgery clinic. Eligible children were aged 0-12 months with no history of hydrocephalus, intracranial tumors, intracranial hemorrhage, intracranial hardware, or prior craniofacial surgery. Successful artificial intelligence diagnosis required identification of the presence and severity of positional plagiocephaly. Results A total of 89 infants were prospectively enrolled from the craniofacial surgery clinic (n = 25, 17 male infants [68%], eight female infants [32%], mean age 8.44 months) and newborn nursery (n = 64, 29 male infants [45%], 25 female infants [39%], mean age 0 months). The model obtained a diagnostic accuracy of 85.39% compared with a standard clinical examination with a disease prevalence of 48%. Sensitivity was 87.50% [95% CI, 75.94-98.42] with a specificity of 83.67% [95% CI, 72.35-94.99]. Precision was 81.40%, while likelihood ratios (positive and negative) were 5.36 and 0.15, respectively. The F1-score was 84.34%. Conclusions The smartphone-based artificial intelligence algorithm accurately diagnosed positional plagiocephaly in a clinical environment. This technology may provide value by helping guide specialist consultation and enabling longitudinal quantitative monitoring of cranial shape.
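All of the reported statistics follow from a single 2x2 confusion matrix. The counts below are hypothetical values chosen only to be consistent with the figures quoted in the abstract (40 cases, 49 controls); they are not taken from the study itself:

```python
# Hypothetical confusion-matrix counts, consistent with the reported metrics
tp, fn, fp, tn = 35, 5, 8, 41   # 40 plagiocephaly cases, 49 controls

sensitivity = tp / (tp + fn)                 # true-positive rate
specificity = tn / (tn + fp)                 # true-negative rate
accuracy = (tp + tn) / (tp + fn + fp + tn)   # overall agreement
precision = tp / (tp + fp)                   # positive predictive value
f1 = 2 * precision * sensitivity / (precision + sensitivity)
lr_pos = sensitivity / (1 - specificity)     # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity     # negative likelihood ratio

for name, val in [("sensitivity", sensitivity), ("specificity", specificity),
                  ("accuracy", accuracy), ("precision", precision),
                  ("F1", f1), ("LR+", lr_pos), ("LR-", lr_neg)]:
    print(f"{name}: {val:.4f}")
```

With these counts the formulas reproduce the abstract's figures (sensitivity 87.50%, specificity 83.67%, accuracy 85.39%, precision 81.40%, F1 84.34%, LR+ 5.36, LR- 0.15), which illustrates how all seven statistics are determined by four counts.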
Affiliation(s)
- Ayden Watt
- Department of Experimental Surgery, McGill University, Montreal, Canada
- James Lee
- Division of Plastic and Reconstructive Surgery, McGill University Health Center, Montreal, Canada
- Matthew Toews
- École de Technologie Supérieure, Department of Systems Engineering, Montréal, Canada
- Mirko S. Gilardino
- Division of Plastic and Reconstructive Surgery, McGill University Health Center, Montreal, Canada
27
Clunie DA, Flanders A, Taylor A, Erickson B, Bialecki B, Brundage D, Gutman D, Prior F, Seibert JA, Perry J, Gichoya JW, Kirby J, Andriole K, Geneslaw L, Moore S, Fitzgerald TJ, Tellis W, Xiao Y, Farahani K, Luo J, Rosenthal A, Kandarpa K, Rosen R, Goetz K, Babcock D, Xu B, Hsiao J. Report of the Medical Image De-Identification (MIDI) Task Group - Best Practices and Recommendations. ArXiv 2023:arXiv:2303.10473v2. [PMID: 37033463 PMCID: PMC10081345]
Affiliation(s)
- Fred Prior
- University of Arkansas for Medical Sciences
- Justin Kirby
- Frederick National Laboratory for Cancer Research
- Ying Xiao
- University of Pennsylvania Health System
- James Luo
- National Heart, Lung, and Blood Institute (NHLBI)
- Alex Rosenthal
- National Institute of Allergy and Infectious Diseases (NIAID)
- Kris Kandarpa
- National Institute of Biomedical Imaging and Bioengineering (NIBIB)
- Rebecca Rosen
- Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD)
- Debra Babcock
- National Institute of Neurological Disorders and Stroke (NINDS)
- Ben Xu
- National Institute on Alcohol Abuse and Alcoholism (NIAAA)
28
Rong R, Wang S, Zhang X, Wen Z, Cheng X, Jia L, Yang DM, Xie Y, Zhan X, Xiao G. Enhanced Pathology Image Quality with Restore-Generative Adversarial Network. Am J Pathol 2023; 193:404-416. [PMID: 36669682 PMCID: PMC10123520 DOI: 10.1016/j.ajpath.2022.12.011]
Abstract
Whole slide imaging is becoming a routine procedure in clinical diagnosis. Advanced image analysis techniques have been developed to assist pathologists in disease diagnosis, staging, subtype classification, and risk stratification. Recently, deep learning algorithms have achieved state-of-the-art performance in various image analysis tasks, including tumor region segmentation, nuclei detection, and disease classification. However, widespread clinical use of these algorithms is hampered by performance degradation caused by image quality issues common in real-world pathology imaging data, such as low resolution, blurred regions, and staining variation. Restore-Generative Adversarial Network (Restore-GAN), a deep learning model, was developed to improve image quality by restoring blurred regions, enhancing low resolution, and normalizing staining colors. The results demonstrate that Restore-GAN can significantly improve image quality, which in turn improves the robustness and performance of existing deep learning algorithms in pathology image analysis. Restore-GAN has the potential to facilitate the application of deep learning models in digital pathology analyses.
Affiliation(s)
- Ruichen Rong
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Shidan Wang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Xinyi Zhang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Zhuoyu Wen
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Xian Cheng
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Liwei Jia
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, Texas
- Donghan M Yang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Yang Xie
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, Texas; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, Texas
- Xiaowei Zhan
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Center for the Genetics of Host Defense, University of Texas Southwestern Medical Center, Dallas, Texas
- Guanghua Xiao
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, Texas; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, Texas
29
Guzene L, Beddok A, Nioche C, Modzelewski R, Loiseau C, Salleron J, Thariat J. Assessing Interobserver Variability in the Delineation of Structures in Radiation Oncology: A Systematic Review. Int J Radiat Oncol Biol Phys 2023; 115:1047-1060. [PMID: 36423741 DOI: 10.1016/j.ijrobp.2022.11.021]
Abstract
PURPOSE The delineation of target volumes and organs at risk is the main source of uncertainty in radiation therapy. Numerous interobserver variability (IOV) studies have been conducted, often with unclear methodology and nonstandardized reporting. We aimed to identify the parameters chosen in conducting delineation IOV studies and to assess their performance and limits. METHODS AND MATERIALS We conducted a systematic literature review to highlight major points of heterogeneity and missing data in IOV studies published between 2018 and 2021. For the main metrics used, we performed in silico analyses to assess their limits in specific clinical situations. RESULTS All disease sites were represented in the 66 studies examined. Organs at risk were studied independently of tumor site in 29% of the reviewed IOV studies. In 65% of studies, statistical analyses were performed. No gold standard (GS; i.e., reference) was defined in 36% of studies. A single expert was considered the GS in 21% of studies, without testing intraobserver variability. All studies reported both absolute and relative indices, including the Dice similarity coefficient (DSC) in 68% and the Hausdorff distance (HD) in 42%. Limitations were shown in silico for small structures when using the DSC, and a dependence on irregular shapes when using the HD. Variations in DSC values were large between studies, and their thresholds were inconsistent. Most studies (51%) included 1 to 10 cases. The median number of observers or experts was 7 (range, 2-35). Investigating the feasibility of studying IOV in delineation, a minimum of 8 observers with 3 cases, or 11 observers with 2 cases, was required to demonstrate moderate reproducibility. The intraclass correlation coefficient was reported in only 9% of cases.
CONCLUSIONS Implementation of future IOV studies would benefit from a more standardized methodology: clear definitions of the gold standard and metrics and a justification of the tradeoffs made in the choice of the number of observers and number of delineated cases should be provided.
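The small-structure limitation of the DSC noted above can be illustrated with a toy example: the same one-pixel delineation offset yields a much lower Dice score for a small structure than for a large one. A minimal sketch with synthetic masks (not data from the review):

```python
import numpy as np

# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Square binary mask of given side length, optionally shifted right by `offset`
# pixels to mimic a small delineation disagreement between two observers.
def square_mask(size, side, offset=0):
    m = np.zeros((size, size), dtype=bool)
    m[0:side, offset:offset + side] = True
    return m

d_large = dice(square_mask(40, 20), square_mask(40, 20, offset=1))  # 20x20 structure
d_small = dice(square_mask(40, 3), square_mask(40, 3, offset=1))    # 3x3 structure
```

The identical one-pixel offset drops the Dice score from 0.95 for the large structure to about 0.67 for the small one, which is why DSC thresholds are hard to compare across structures of different sizes.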
Affiliation(s)
- Leslie Guzene
- Department of Radiation Oncology, University Hospital of Amiens, Amiens, France
- Arnaud Beddok
- Department of Radiation Oncology, Institut Curie, Paris/Saint-Cloud/Orsay, France; Laboratory of Translational Imaging in Oncology (LITO), Inserm UMR, Institut Curie, Orsay, France
- Christophe Nioche
- Laboratory of Translational Imaging in Oncology (LITO), Inserm UMR, Institut Curie, Orsay, France
- Romain Modzelewski
- LITIS - EA4108-Quantif, Normastic, University of Rouen, and Nuclear Medicine Department, Henri Becquerel Center, Rouen, France
- Cedric Loiseau
- Department of Radiation Oncology, Centre François Baclesse; ARCHADE Research Community, Caen, France; Département de Biostatistiques, Institut de Cancérologie de Lorraine, Vandœuvre-lès-Nancy, France
- Julia Salleron
- Département de Biostatistiques, Institut de Cancérologie de Lorraine, Vandœuvre-lès-Nancy, France
- Juliette Thariat
- Department of Radiation Oncology, Centre François Baclesse; ARCHADE Research Community, Caen, France; Laboratoire de Physique Corpusculaire, Caen, France; Unicaen-Université de Normandie, Caen, France
30
Vega C, Schneider R, Satagopam V. Analysis: Flawed Datasets of Monkeypox Skin Images. J Med Syst 2023; 47:37. [PMID: 36933065 PMCID: PMC10024024 DOI: 10.1007/s10916-023-01928-1]
Abstract
The self-proclaimed first publicly available dataset of Monkeypox skin images consists of medically irrelevant images extracted from Google and photography repositories through a process known as web scraping. Yet, this did not stop other researchers from employing it to build machine learning (ML) solutions aimed at computer-aided diagnosis of Monkeypox and other viral infections presenting skin lesions. Neither did it stop reviewers or editors from publishing these subsequent works in peer-reviewed journals. Several of these works claimed extraordinary performance in the classification of Monkeypox, Chickenpox and Measles, employing ML and the aforementioned dataset. In this work, we analyse the initiator work that has catalysed the development of several ML solutions, and whose popularity is continuing to grow. Further, we provide a rebuttal experiment that showcases the risks of such methodologies, proving that the ML solutions do not necessarily obtain their performance from the features relevant to the diseases at issue.
Affiliation(s)
- Carlos Vega
- Bioinformatics Core, University of Luxembourg, Luxembourg Centre for Systems Biomedicine, Av. du Swing 6, Belvaux, 4367, Luxembourg
- Reinhard Schneider
- Bioinformatics Core, University of Luxembourg, Luxembourg Centre for Systems Biomedicine, Av. du Swing 6, Belvaux, 4367, Luxembourg
- Venkata Satagopam
- Bioinformatics Core, University of Luxembourg, Luxembourg Centre for Systems Biomedicine, Av. du Swing 6, Belvaux, 4367, Luxembourg
31
Rädsch T, Reinke A, Weru V, Tizabi MD, Schreck N, Kavur AE, Pekdemir B, Roß T, Kopp-Schneider A, Maier-Hein L. Labelling instructions matter in biomedical image analysis. Nat Mach Intell 2023. [DOI: 10.1038/s42256-023-00625-5]
Abstract
Biomedical image analysis algorithm validation depends on high-quality annotation of reference datasets, for which labelling instructions are key. Despite their importance, their optimization remains largely unexplored. Here we present a systematic study of labelling instructions and their impact on annotation quality in the field. Through comprehensive examination of professional practice and international competitions registered at the Medical Image Computing and Computer Assisted Intervention Society, the largest international society in the biomedical imaging field, we uncovered a discrepancy between annotators' needs for labelling instructions and their current quality and availability. On the basis of an analysis of 14,040 images annotated by 156 annotators from four professional annotation companies and 708 Amazon Mechanical Turk crowdworkers using instructions with different information density levels, we further found that including exemplary images substantially boosts annotation performance compared with text-only descriptions, while solely extending text descriptions does not. Finally, professional annotators consistently outperform Amazon Mechanical Turk crowdworkers. Our study raises awareness of the need for quality standards in biomedical image analysis labelling instructions.
32
Roß T, Bruno P, Reinke A, Wiesenfarth M, Koeppel L, Full PM, Pekdemir B, Godau P, Trofimova D, Isensee F, Adler TJ, Tran TN, Moccia S, Calimeri F, Müller-Stich BP, Kopp-Schneider A, Maier-Hein L. Beyond rankings: Learning (more) from algorithm validation. Med Image Anal 2023; 86:102765. [PMID: 36965252 DOI: 10.1016/j.media.2023.102765]
Abstract
Challenges have become the state-of-the-art approach to benchmark image analysis algorithms in a comparative manner. While validation on identical data sets was a great step forward, results analysis is often restricted to pure ranking tables, leaving relevant questions unanswered. Specifically, little effort has been put into the systematic investigation of what characterizes images on which state-of-the-art algorithms fail. To address this gap in the literature, we (1) present a statistical framework for learning from challenges and (2) instantiate it for the specific task of instrument instance segmentation in laparoscopic videos. Our framework relies on the semantic metadata annotation of images, which serves as the foundation for a generalized linear mixed model (GLMM) analysis. Based on 51,542 metadata annotations performed on 2,728 images, we applied our approach to the results of the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge 2019 and revealed underexposure, motion and occlusion of instruments, as well as the presence of smoke or other objects in the background, as major sources of algorithm failure. Our subsequent method development, tailored to the specific remaining issues, yielded a deep learning model with state-of-the-art overall performance and specific strengths in the processing of images on which previous methods tended to fail. Due to the objectivity and generic applicability of our approach, it could become a valuable tool for validation in the field of medical image analysis and beyond.
Affiliation(s)
- Tobias Roß
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pierangela Bruno
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Annika Reinke
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany
- Manuel Wiesenfarth
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Lisa Koeppel
- Section Clinical Tropical Medicine, Heidelberg University, Heidelberg, Germany
- Peter M Full
- Medical Faculty, Heidelberg University, Heidelberg, Germany; Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Bünyamin Pekdemir
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Patrick Godau
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany
- Darya Trofimova
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; HIP Applied Computer Vision Lab, MIC, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), Heidelberg, Germany; HIP Applied Computer Vision Lab, MIC, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tim J Adler
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Thuy N Tran
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Italy
- Francesco Calimeri
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Lena Maier-Hein
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany; National Center for Tumor Diseases (NCT), Heidelberg, Germany
33
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset allows video capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs of the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop computer-assisted interventions that enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
34
Wagner M, Müller-Stich BP, Kisilenko A, Tran D, Heger P, Mündermann L, Lubotsky DM, Müller B, Davitashvili T, Capek M, Reinke A, Reid C, Yu T, Vardazaryan A, Nwoye CI, Padoy N, Liu X, Lee EJ, Disch C, Meine H, Xia T, Jia F, Kondo S, Reiter W, Jin Y, Long Y, Jiang M, Dou Q, Heng PA, Twick I, Kirtac K, Hosgor E, Bolmgren JL, Stenzel M, von Siemens B, Zhao L, Ge Z, Sun H, Xie D, Guo M, Liu D, Kenngott HG, Nickel F, Frankenberg MV, Mathis-Ullrich F, Kopp-Schneider A, Maier-Hein L, Speidel S, Bodenstedt S. Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark. Med Image Anal 2023; 86:102770. [PMID: 36889206 DOI: 10.1016/j.media.2023.102770]
Abstract
PURPOSE Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. METHODS To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. RESULTS F1-scores were achieved for phase recognition between 23.9% and 67.7% (n = 9 teams), for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). CONCLUSION Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
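F1-scores of the kind quoted above are typically computed framewise per phase and then averaged over phases. A minimal sketch with hypothetical labels (the phase names are illustrative, not the HeiChole annotation scheme):

```python
# Per-phase F1 from framewise reference and predicted labels; mean over phases.
# Toy example only -- labels and phase names are assumptions for illustration.
def f1_per_class(ref, pred, cls):
    tp = sum(r == cls and p == cls for r, p in zip(ref, pred))  # true positives
    fp = sum(r != cls and p == cls for r, p in zip(ref, pred))  # false positives
    fn = sum(r == cls and p != cls for r, p in zip(ref, pred))  # false negatives
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

ref  = ["prep", "prep", "dissect", "dissect", "dissect", "close"]
pred = ["prep", "dissect", "dissect", "dissect", "close", "close"]
phases = ["prep", "dissect", "close"]
mean_f1 = sum(f1_per_class(ref, pred, c) for c in phases) / len(phases)
```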
Affiliation(s)
- Martin Wagner
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Beat-Peter Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Anna Kisilenko
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Duc Tran
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Patrick Heger
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Lars Mündermann
- Data Assisted Solutions, Corporate Research & Technology, KARL STORZ SE & Co. KG, Dr. Karl-Storz-Str. 34, 78332 Tuttlingen
- David M Lubotsky
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Benjamin Müller
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Tornike Davitashvili
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Manuela Capek
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Annika Reinke
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; HIP Helmholtz Imaging Platform, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Im Neuenheimer Feld 205, 69120 Heidelberg
- Carissa Reid
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 280, Heidelberg, Germany
- Tong Yu
- ICube, University of Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, 1 Place de l'hôpital, 67000 Strasbourg, France
- Armine Vardazaryan
- ICube, University of Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, 1 Place de l'hôpital, 67000 Strasbourg, France
- Chinedu Innocent Nwoye
- ICube, University of Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, 1 Place de l'hôpital, 67000 Strasbourg, France
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413, F-67412 Illkirch Cedex, France; IHU Strasbourg, 1 Place de l'hôpital, 67000 Strasbourg, France
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, 111 Michigan Ave NW, Washington, DC 20010, USA
- Eung-Joo Lee
- University of Maryland, College Park, 2405 A V Williams Building, College Park, MD 20742, USA
- Constantin Disch
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, 28359 Bremen, Germany
- Hans Meine
- Fraunhofer Institute for Digital Medicine MEVIS, Max-von-Laue-Str. 2, 28359 Bremen, Germany; University of Bremen, FB3, Medical Image Computing Group, c/o Fraunhofer MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Tong Xia
- Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Fucang Jia
- Lab for Medical Imaging and Digital Surgery, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Satoshi Kondo
- Konica Minolta, Inc., 1-2, Sakura-machi, Takatsuki, Osaka 569-8503, Japan
- Wolfgang Reiter
- Wintegral GmbH, Ehrenbreitsteiner Str. 36, 80993 München, Germany
- Yueming Jin
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Yonghao Long
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Meirui Jiang
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Qi Dou
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Pheng Ann Heng
- Department of Computer Science and Engineering, Ho Sin-Hang Engineering Building, The Chinese University of Hong Kong, Sha Tin, NT, Hong Kong
- Isabell Twick
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Kadir Kirtac
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Enes Hosgor
- Caresyntax GmbH, Komturstr. 18A, 12099 Berlin, Germany
- Long Zhao
- Hikvision Research Institute, Hangzhou, China
- Zhenxiao Ge
- Hikvision Research Institute, Hangzhou, China
- Haiming Sun
- Hikvision Research Institute, Hangzhou, China
- Di Xie
- Hikvision Research Institute, Hangzhou, China
- Mengqi Guo
- School of Computing, National University of Singapore, Computing 1, No.13 Computing Drive, 117417, Singapore
- Daochang Liu
- National Engineering Research Center of Visual Technology, School of Computer Science, Peking University, Beijing, China
- Hannes G Kenngott
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Felix Nickel
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany
- Moritz von Frankenberg
- Department of Surgery, Salem Hospital of the Evangelische Stadtmission Heidelberg, Zeppelinstrasse 11-33, 69121 Heidelberg, Germany
- Franziska Mathis-Ullrich
- Health Robotics and Automation Laboratory, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Geb. 40.28, KIT Campus Süd, Engler-Bunte-Ring 8, 76131 Karlsruhe, Germany
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 280, Heidelberg, Germany
- Lena Maier-Hein
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; HIP Helmholtz Imaging Platform, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Im Neuenheimer Feld 205, 69120 Heidelberg; Medical Faculty, Heidelberg University, Im Neuenheimer Feld 672, 69120 Heidelberg
- Stefanie Speidel
- Div. Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden, 01062 Dresden, Germany
- Sebastian Bodenstedt
- Div. Translational Surgical Oncology, National Center for Tumor Diseases Dresden, Fetscherstraße 74, 01307 Dresden, Germany; Cluster of Excellence "Centre for Tactile Internet with Human-in-the-Loop" (CeTI) of Technische Universität Dresden, 01062 Dresden, Germany
35
Ibragimov B, Arzamasov K, Maksudov B, Kiselev S, Mongolin A, Mustafaev T, Ibragimova D, Evteeva K, Andreychenko A, Morozov S. A 178-clinical-center experiment of integrating AI solutions for lung pathology diagnosis. Sci Rep 2023; 13:1135. [PMID: 36670118 PMCID: PMC9859802 DOI: 10.1038/s41598-023-27397-7]
Abstract
In 2020, an experiment testing AI solutions for lung X-ray analysis on a multi-hospital network was conducted. The network linked 178 Moscow state healthcare centers; all chest X-rays from the network were redirected to a research facility, analyzed with AI, and returned to the centers. The experiment was formulated as a public competition with monetary awards for participating industrial and research teams. The task was binary detection of abnormalities in chest X-rays. For an objective real-life evaluation, no training X-rays were provided to the participants. This paper presents one of the top-performing AI frameworks from the experiment. First, the framework used two EfficientNets, histograms of gradients, Haar feature ensembles, and local binary patterns to recognize whether an input image is an acceptable lung X-ray sample, meaning the X-ray is not grayscale-inverted, is a frontal chest X-ray, and completely captures both lung fields. Second, the framework extracted the region containing the lung fields and passed it to a multi-head DenseNet, whose heads recognized the patient's gender, age, and the potential presence of abnormalities, and generated a heatmap with the abnormality regions highlighted. During one month of the experiment, from 11/23/2020 to 12/25/2020, 17,888 cases were analyzed by the framework; 11,902 of these had radiological reports with reference diagnoses that could be unequivocally parsed by the experiment organizers. Performance measured as the area under the receiver operating characteristic curve (AUC) was 0.77. The AUC for individual diseases ranged from 0.55 for herniation to 0.90 for pneumothorax.
Affiliation(s)
- Bulat Ibragimov
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark.
- Kirill Arzamasov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia
- Bulat Maksudov
- School of Electronic Engineering, Dublin City University, Dublin, Ireland
- Alexander Mongolin
- Innopolis University, Innopolis, Russia
- Nova Information Management School, Universidade Nova de Lisboa, Lisbon, Portugal
- Tamerlan Mustafaev
- Innopolis University, Innopolis, Russia
- University Clinic Kazan State University, Kazan, Russia
- Ksenia Evteeva
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia
- Anna Andreychenko
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia
- Sergey Morozov
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies of the Moscow Healthcare Department, Moscow, Russia
- Osimis SA, Liege, Belgium
36
Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023; 103:102155. [PMID: 36525770 DOI: 10.1016/j.compmedimag.2022.102155]
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and in the relevance of the chosen evaluation processes. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make the evaluation code, test set annotations, and participants' predictions publicly available. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
Affiliation(s)
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium.
- Olivier Debeir
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium.
37
Hirvasniemi J, Runhaar J, van der Heijden RA, Zokaeinikoo M, Yang M, Li X, Tan J, Rajamohan HR, Zhou Y, Deniz CM, Caliva F, Iriondo C, Lee JJ, Liu F, Martinez AM, Namiri N, Pedoia V, Panfilov E, Bayramoglu N, Nguyen HH, Nieminen MT, Saarakkala S, Tiulpin A, Lin E, Li A, Li V, Dam EB, Chaudhari AS, Kijowski R, Bierma-Zeinstra S, Oei EHG, Klein S. The KNee OsteoArthritis Prediction (KNOAP2020) challenge: An image analysis challenge to predict incident symptomatic radiographic knee osteoarthritis from MRI and X-ray images. Osteoarthritis Cartilage 2023; 31:115-125. [PMID: 36243308 DOI: 10.1016/j.joca.2022.10.001]
Abstract
OBJECTIVES The KNee OsteoArthritis Prediction (KNOAP2020) challenge was organized to objectively compare methods for the prediction of incident symptomatic radiographic knee osteoarthritis within 78 months on a test set with blinded ground truth. DESIGN The challenge participants were free to use any available data sources to train their models. A test set of 423 knees from the Prevention of Knee Osteoarthritis in Overweight Females (PROOF) study consisting of magnetic resonance imaging (MRI) and X-ray image data along with clinical risk factors at baseline was made available to all challenge participants. The ground truth outcomes, i.e., which knees developed incident symptomatic radiographic knee osteoarthritis (according to the combined ACR criteria) within 78 months, were not provided to the participants. To assess the performance of the submitted models, we used the area under the receiver operating characteristic curve (ROCAUC) and balanced accuracy (BACC). RESULTS Seven teams submitted 23 entries in total. A majority of the algorithms were trained on data from the Osteoarthritis Initiative. The model with the highest ROCAUC (0.64 (95% confidence interval (CI): 0.57-0.70)) used deep learning to extract information from X-ray images combined with clinical variables. The model with the highest BACC (0.59 (95% CI: 0.52-0.65)) ensembled three different models that used automatically extracted X-ray and MRI features along with clinical variables. CONCLUSION The KNOAP2020 challenge established a benchmark for predicting incident symptomatic radiographic knee osteoarthritis. Accurate prediction of incident symptomatic radiographic knee osteoarthritis is a complex and still unsolved problem requiring additional investigation.
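The two ranking metrics described above, ROC AUC and balanced accuracy (BACC), can be sketched in a few lines of plain Python. This is generic metric code for illustration only, not the challenge's evaluation pipeline; the toy labels and risk scores are invented.

```python
# Illustrative sketch of the two KNOAP2020 ranking metrics, ROC AUC and
# balanced accuracy (BACC), for a binary outcome such as incident knee OA.

def roc_auc(y_true, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation:
    the probability that a random positive outscores a random negative."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity (robust to class imbalance)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    sensitivity = tp / sum(y_true)
    specificity = tn / (len(y_true) - sum(y_true))
    return 0.5 * (sensitivity + specificity)

# Toy example (labels and risk scores are invented):
y_true = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.2, 0.6, 0.4, 0.3, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
print(round(roc_auc(y_true, scores), 3))            # 0.889
print(round(balanced_accuracy(y_true, y_pred), 3))  # 0.833
```

Note that AUC scores a continuous risk score while BACC scores hard predictions, which is why the two metrics can favour different models, as in the challenge results above.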
Affiliation(s)
- J Hirvasniemi
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands.
- J Runhaar
- Department of General Practice, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- R A van der Heijden
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- M Zokaeinikoo
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, USA
- M Yang
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, USA
- X Li
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, USA
- J Tan
- Department of Radiology, New York University Langone Health, New York, USA
- H R Rajamohan
- Department of Radiology, New York University Langone Health, New York, USA
- Y Zhou
- Department of Radiology, New York University Langone Health, New York, USA
- C M Deniz
- Department of Radiology, New York University Langone Health, New York, USA
- F Caliva
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- C Iriondo
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- J J Lee
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- F Liu
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- A M Martinez
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- N Namiri
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- V Pedoia
- Department of Radiology, University of California, San Francisco, San Francisco, USA
- E Panfilov
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- N Bayramoglu
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- H H Nguyen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- M T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- S Saarakkala
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- A Tiulpin
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- E Lin
- Akousist Co., Ltd., Taoyuan City, Taiwan
- A Li
- Akousist Co., Ltd., Taoyuan City, Taiwan
- V Li
- Akousist Co., Ltd., Taoyuan City, Taiwan
- E B Dam
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- A S Chaudhari
- Department of Radiology, Stanford University, Stanford, USA
- R Kijowski
- Department of Radiology, New York University Langone Health, New York, USA
- S Bierma-Zeinstra
- Department of General Practice, Erasmus MC University Medical Center, Rotterdam, the Netherlands; Department of Orthopedics & Sport Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- E H G Oei
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- S Klein
- Department of Radiology & Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, the Netherlands
38
Dorent R, Kujawa A, Ivory M, Bakas S, Rieke N, Joutard S, Glocker B, Cardoso J, Modat M, Batmanghelich K, Belkov A, Calisto MB, Choi JW, Dawant BM, Dong H, Escalera S, Fan Y, Hansen L, Heinrich MP, Joshi S, Kashtanova V, Kim HG, Kondo S, Kruse CN, Lai-Yuen SK, Li H, Liu H, Ly B, Oguz I, Shin H, Shirokikh B, Su Z, Wang G, Wu J, Xu Y, Yao K, Zhang L, Ourselin S, Shapey J, Vercauteren T. CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation. Med Image Anal 2023; 83:102628. [PMID: 36283200 DOI: 10.1016/j.media.2022.102628]
Abstract
Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
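The median Dice scores quoted above can be illustrated with a minimal sketch of the Dice overlap metric. This is a generic implementation over binary masks stored as sets of voxel indices, not the organisers' evaluation code; the toy masks are invented.

```python
# Minimal sketch of the Dice overlap score used to rank crossMoDA submissions.

def dice(mask_a, mask_b):
    """Dice score: twice the overlap divided by the total mask sizes."""
    if not mask_a and not mask_b:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

# Toy 2D masks (voxel coordinates are invented):
pred = {(0, 0), (0, 1), (1, 1)}
ref = {(0, 1), (1, 1), (1, 0)}
print(round(dice(pred, ref), 3))  # 0.667
```

Because Dice normalises by the total size of both masks, it is especially sensitive for small structures like the cochleas, which is part of why the challenge reports per-structure medians.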
Affiliation(s)
- Reuben Dorent
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom.
- Aaron Kujawa
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Marina Ivory
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Samuel Joutard
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Ben Glocker
- Department of Computing, Imperial College London, London, United Kingdom
- Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Arseniy Belkov
- Moscow Institute of Physics and Technology, Moscow, Russia
- Jae Won Choi
- Department of Radiology, Armed Forces Yangju Hospital, Yangju, Republic of Korea
- Hexin Dong
- Center for Data Science, Peking University, Beijing, China
- Sergio Escalera
- Artificial Intelligence in Medicine Lab (BCN-AIM) and Human Behavior Analysis Lab (HuPBA), Universitat de Barcelona, Barcelona, Spain
- Yubo Fan
- Vanderbilt University, Nashville, USA
- Lasse Hansen
- Institute of Medical Informatics, Universität zu Lübeck, Germany
- Smriti Joshi
- Artificial Intelligence in Medicine Lab (BCN-AIM) and Human Behavior Analysis Lab (HuPBA), Universitat de Barcelona, Barcelona, Spain
- Hyeon Gyu Kim
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Hao Li
- Vanderbilt University, Nashville, USA
- Han Liu
- Vanderbilt University, Nashville, USA
- Buntheng Ly
- Inria, Université Côte d'Azur, Sophia Antipolis, France
- Ipek Oguz
- Vanderbilt University, Nashville, USA
- Hyungseob Shin
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Boris Shirokikh
- Skolkovo Institute of Science and Technology, Moscow, Russia; Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Zixian Su
- University of Liverpool, Liverpool, United Kingdom; School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jianghao Wu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yanwu Xu
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
- Kai Yao
- University of Liverpool, Liverpool, United Kingdom; School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
- Li Zhang
- Center for Data Science, Peking University, Beijing, China
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Jonathan Shapey
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom; Department of Neurosurgery, King's College Hospital, London, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
39
Sharan L, Kelm H, Romano G, Karck M, De Simone R, Engelhardt S. mvHOTA: A multi-view higher order tracking accuracy metric to measure temporal and spatial associations in multi-point tracking. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022. [DOI: 10.1080/21681163.2022.2159535]
Affiliation(s)
- Lalith Sharan
- DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Germany
- Halvar Kelm
- Department of Cardiac Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Gabriele Romano
- DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Germany
- Matthias Karck
- Department of Cardiac Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Raffaele De Simone
- Department of Cardiac Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Sandy Engelhardt
- DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Germany
40
Dinsdale NK, Bluemke E, Sundaresan V, Jenkinson M, Smith SM, Namburete AIL. Challenges for machine learning in clinical translation of big data imaging studies. Neuron 2022; 110:3866-3881. [PMID: 36220099 DOI: 10.1016/j.neuron.2022.09.012]
Abstract
Combining deep learning image analysis methods and large-scale imaging datasets offers many opportunities for neuroscience imaging and epidemiology. However, despite these opportunities and the success of deep learning when applied to a range of neuroimaging tasks and domains, significant barriers continue to limit the impact of large-scale datasets and analysis tools. Here, we examine the main challenges and the approaches that have been explored to overcome them. We focus on issues relating to data availability, interpretability, evaluation, and logistical challenges, and discuss the problems that still need to be tackled to enable the success of "big data" deep learning approaches beyond research.
Affiliation(s)
- Nicola K Dinsdale
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Oxford Machine Learning in NeuroImaging Lab, OMNI, Department of Computer Science, University of Oxford, Oxford, UK.
- Emma Bluemke
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Vaanathi Sundaresan
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
- Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Australian Institute for Machine Learning (AIML), School of Computer Science, University of Adelaide, Adelaide, SA, Australia; South Australian Health and Medical Research Institute (SAHMRI), North Terrace, Adelaide, SA, Australia
- Stephen M Smith
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Ana I L Namburete
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Oxford Machine Learning in NeuroImaging Lab, OMNI, Department of Computer Science, University of Oxford, Oxford, UK
41
Fell C, Mohammadi M, Morrison D, Arandjelovic O, Caie P, Harris-Birtill D. Reproducibility of deep learning in digital pathology whole slide image analysis. PLOS Digit Health 2022; 1:e0000145. [PMID: 36812609 PMCID: PMC9931349 DOI: 10.1371/journal.pdig.0000145]
Abstract
For a method to be widely adopted in medical research or clinical practice, it needs to be reproducible so that clinicians and regulators can have confidence in its use. Machine learning and deep learning have a particular set of challenges around reproducibility. Small differences in the settings or the data used for training a model can lead to large differences in the outcomes of experiments. In this work, three top-performing algorithms from the Camelyon grand challenges are reproduced using only information presented in the associated papers, and the results are then compared to those reported. Seemingly minor details were found to be critical to performance, and yet their importance is difficult to appreciate until the actual reproduction is attempted. We observed that authors generally describe the key technical aspects of their models well but fail to maintain the same reporting standards when it comes to data preprocessing, which is essential to reproducibility. As an important contribution of this study, we introduce a reproducibility checklist that tabulates the information that needs to be reported in histopathology ML-based work to make it reproducible.
Affiliation(s)
- Christina Fell
- School of Computer Science, University of St Andrews, St Andrews, United Kingdom
- Mahnaz Mohammadi
- School of Computer Science, University of St Andrews, St Andrews, United Kingdom
- David Morrison
- School of Computer Science, University of St Andrews, St Andrews, United Kingdom
- Ognjen Arandjelovic
- School of Computer Science, University of St Andrews, St Andrews, United Kingdom
- Peter Caie
- Indica Labs, Albuquerque, New Mexico, United States of America
- David Harris-Birtill
- School of Computer Science, University of St Andrews, St Andrews, United Kingdom
42
De Backer P, Eckhoff JA, Simoens J, Müller DT, Allaeys C, Creemers H, Hallemeesch A, Mestdagh K, Van Praet C, Debbaut C, Decaestecker K, Bruns CJ, Meireles O, Mottrie A, Fuchs HF. Multicentric exploration of tool annotation in robotic surgery: lessons learned when starting a surgical artificial intelligence project. Surg Endosc 2022; 36:8533-8548. [PMID: 35941310 DOI: 10.1007/s00464-022-09487-1]
Abstract
BACKGROUND Artificial intelligence (AI) holds tremendous potential to reduce surgical risks and improve surgical assessment. Machine learning, a subfield of AI, can be used to analyze surgical video and imaging data. Manual annotations provide veracity about the desired target features. Yet, methodological annotation explorations are limited to date. Here, we provide an exploratory analysis of the requirements and methods of instrument annotation in a multi-institutional team from two specialized AI centers and compile our lessons learned. METHODS We developed a bottom-up approach for team annotation of robotic instruments in robot-assisted partial nephrectomy (RAPN), which was subsequently validated in robot-assisted minimally invasive esophagectomy (RAMIE). Furthermore, instrument annotation methods were evaluated for their use in machine learning algorithms. Overall, we evaluated the efficiency and transferability of the proposed team approach and quantified performance metrics (e.g., time per frame required for each annotation modality) between RAPN and RAMIE. RESULTS We found an image sampling frequency of 0.05 Hz to be adequate for instrument annotation. The bottom-up approach to annotation training and management resulted in accurate annotations and proved efficient for annotating large datasets. The proposed annotation methodology was transferable between RAPN and RAMIE. The average time for pixel annotation in RAPN ranged from 4.49 to 12.6 min per image; vector annotation took 2.92 min per image. Similar annotation times were found for RAMIE. Lastly, we elaborate on common pitfalls encountered throughout the annotation process. CONCLUSIONS We propose a successful bottom-up approach to annotator team composition, applicable to any surgical annotation project. Our results set the foundation for starting AI projects on instrument detection, segmentation, and pose estimation. Given the immense annotation burden of spatial instrument annotation, further analysis of sampling frequency and annotation detail is needed.
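The 0.05 Hz sampling frequency reported above corresponds to extracting one frame for annotation every 1/0.05 = 20 seconds of video. A minimal sketch of such sampling follows; the function name and parameters are hypothetical, not from the paper.

```python
# Sketch of frame sampling at the 0.05 Hz rate the study found adequate:
# one annotated frame every 20 seconds of surgical video.

def sample_frame_indices(duration_s, fps, sample_hz=0.05):
    """Return the video frame indices to extract for annotation."""
    step_s = int(round(1.0 / sample_hz))  # 20 s between annotated frames
    return [t * fps for t in range(0, duration_s, step_s)]

# A one-hour procedure recorded at 25 fps yields 180 frames to annotate:
print(len(sample_frame_indices(3600, 25)))  # 180
```

At the study's pixel-annotation times of 4.49 to 12.6 min per image, those 180 frames already represent roughly 13 to 38 hours of annotation work, which illustrates the "immense annotation burden" the authors describe.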
Affiliation(s)
- Pieter De Backer
- ORSI Academy, Proefhoevestraat 12, 9090, Melle, Belgium.
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium.
- IBiTech-Biommeda, Faculty of Engineering and Architecture, and CRIG, Ghent University, Ghent, Belgium.
- Department of Urology, Ghent University Hospital, Ghent, Belgium.
- Jennifer A Eckhoff
- Robotic Innovation Laboratory, Department of General, Visceral, Tumor and Transplantsurgery, University Hospital Cologne, Cologne, Germany
- Jente Simoens
- ORSI Academy, Proefhoevestraat 12, 9090, Melle, Belgium
- Dolores T Müller
- Robotic Innovation Laboratory, Department of General, Visceral, Tumor and Transplantsurgery, University Hospital Cologne, Cologne, Germany
- Charlotte Allaeys
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Heleen Creemers
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Amélie Hallemeesch
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Kenzo Mestdagh
- Department of Human Structure and Repair, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Charlotte Debbaut
- IBiTech-Biommeda, Faculty of Engineering and Architecture, and CRIG, Ghent University, Ghent, Belgium
- Christiane J Bruns
- Robotic Innovation Laboratory, Department of General, Visceral, Tumor and Transplantsurgery, University Hospital Cologne, Cologne, Germany
- Ozanan Meireles
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, Boston, USA
- Alexandre Mottrie
- ORSI Academy, Proefhoevestraat 12, 9090, Melle, Belgium
- Department of Urology, OLV Hospital Aalst-Asse-Ninove, Aalst, Belgium
- Hans F Fuchs
- Robotic Innovation Laboratory, Department of General, Visceral, Tumor and Transplantsurgery, University Hospital Cologne, Cologne, Germany
43
Boutillon A, Borotikar B, Burdin V, Conze PH. Multi-structure bone segmentation in pediatric MR images with combined regularization from shape priors and adversarial network. Artif Intell Med 2022; 132:102364. [DOI: 10.1016/j.artmed.2022.102364]
44
Tampu IE, Eklund A, Haj-Hosseini N. Inflation of test accuracy due to data leakage in deep learning-based classification of OCT images. Sci Data 2022; 9:580. [PMID: 36138025 PMCID: PMC9500039 DOI: 10.1038/s41597-022-01618-6]
Abstract
In the application of deep learning to optical coherence tomography (OCT) data, it is common to train classification networks on 2D images originating from volumetric data. Given the micrometer resolution of OCT systems, consecutive images are often very similar in both visible structures and noise. Thus, an inappropriate data split can result in overlap between the training and testing sets, an aspect that a large portion of the literature overlooks. In this study, the effect of improper dataset splitting on model evaluation is demonstrated for three classification tasks using three widely used open-access OCT datasets: Kermany's and Srinivasan's ophthalmology datasets and the AIIMS breast tissue dataset. Results show that classification performance is inflated by 0.07 up to 0.43 in terms of Matthews Correlation Coefficient (accuracy: 5% to 30%) for models tested on datasets with improper splitting, highlighting the considerable effect of dataset handling on model evaluation. This study intends to raise awareness of the importance of dataset splitting, given the increased research interest in implementing deep learning on OCT data.
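The leakage described above is avoided by assigning whole OCT volumes (or patients) to one side of the split, so that near-identical neighbouring frames never straddle the train/test boundary. A minimal sketch follows; the sample and group identifiers are invented for illustration.

```python
# Sketch of a group-wise split: no volume contributes frames to both the
# training and the testing set, preventing the leakage described above.

import random

def split_by_group(samples, group_of, test_frac=0.2, seed=0):
    """Split samples so that no group appears on both sides."""
    groups = sorted({group_of(s) for s in samples})
    rng = random.Random(seed)
    rng.shuffle(groups)
    n_test = max(1, int(len(groups) * test_frac))
    test_groups = set(groups[:n_test])
    train = [s for s in samples if group_of(s) not in test_groups]
    test = [s for s in samples if group_of(s) in test_groups]
    return train, test

# 10 volumes x 5 consecutive frames each:
samples = [(vol, frame) for vol in range(10) for frame in range(5)]
train, test = split_by_group(samples, group_of=lambda s: s[0])
# No volume appears on both sides of the split:
assert {v for v, _ in train}.isdisjoint({v for v, _ in test})
print(len(train), len(test))  # 40 10
```

A naive shuffle of individual frames would, by contrast, almost certainly place adjacent frames of the same volume in both sets, producing the inflated test scores the study reports.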
Affiliation(s)
- Iulian Emil Tampu
- Department of Biomedical Engineering, Linköping University, 581 85, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, 581 85, Linköping, Sweden.
- Anders Eklund
- Department of Biomedical Engineering, Linköping University, 581 85, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, 581 85, Linköping, Sweden; Division of Statistics & Machine Learning, Department of Computer and Information Science, Linköping University, 581 83, Linköping, Sweden.
- Neda Haj-Hosseini
- Department of Biomedical Engineering, Linköping University, 581 85, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, 581 85, Linköping, Sweden.
45
Roth HR, Xu Z, Tor-Díez C, Sanchez Jacob R, Zember J, Molto J, Li W, Xu S, Turkbey B, Turkbey E, Yang D, Harouni A, Rieke N, Hu S, Isensee F, Tang C, Yu Q, Sölter J, Zheng T, Liauchuk V, Zhou Z, Moltz JH, Oliveira B, Xia Y, Maier-Hein KH, Li Q, Husch A, Zhang L, Kovalev V, Kang L, Hering A, Vilaça JL, Flores M, Xu D, Wood B, Linguraru MG. Rapid artificial intelligence solutions in a pandemic-The COVID-19-20 Lung CT Lesion Segmentation Challenge. Med Image Anal 2022; 82:102605. [PMID: 36156419] [PMCID: PMC9444848] [DOI: 10.1016/j.media.2022.102605]
Abstract
Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A) and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.
Affiliation(s)
- Holger R Roth
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
- Ziyue Xu
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
- Carlos Tor-Díez
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA.
- Ramon Sanchez Jacob
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA.
- Jonathan Zember
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA.
- Jose Molto
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA.
- Wenqi Li
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
- Sheng Xu
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
- Baris Turkbey
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
- Evrim Turkbey
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
- Dong Yang
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
- Ahmed Harouni
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
- Nicola Rieke
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
- Shishuai Hu
- School of Computer Science and Engineering, Northwestern Polytechnical University, China.
- Fabian Isensee
- Applied Computer Vision Lab, Helmholtz Imaging, Heidelberg, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Qinji Yu
- Shanghai Jiao Tong University, China.
- Jan Sölter
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Luxembourg.
- Tong Zheng
- School of Informatics, Nagoya University, Japan.
- Vitali Liauchuk
- Biomedical Image Analysis Department, United Institute of Informatics Problems, Belarus.
- Ziqi Zhou
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China.
- Bruno Oliveira
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal.
- Yong Xia
- School of Computer Science and Engineering, Northwestern Polytechnical University, China.
- Klaus H Maier-Hein
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany.
- Qikai Li
- Shanghai Jiao Tong University, China.
- Andreas Husch
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany.
- Vassili Kovalev
- Biomedical Image Analysis Department, United Institute of Informatics Problems, Belarus.
- Li Kang
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China.
- Alessa Hering
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany.
- João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal.
- Mona Flores
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
- Daguang Xu
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany.
- Bradford Wood
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; School of Medicine and Health Sciences, George Washington University, Washington, DC, USA.
46
Arnold TC, Muthukrishnan R, Pattnaik AR, Sinha N, Gibson A, Gonzalez H, Das SR, Litt B, Englot DJ, Morgan VL, Davis KA, Stein JM. Deep learning-based automated segmentation of resection cavities on postsurgical epilepsy MRI. Neuroimage Clin 2022; 36:103154. [PMID: 35988342] [PMCID: PMC9402390] [DOI: 10.1016/j.nicl.2022.103154]
Abstract
Accurate segmentation of surgical resection sites is critical for clinical assessments and neuroimaging research applications, including resection extent determination, predictive modeling of surgery outcome, and masking image processing near resection sites. In this study, an automated resection cavity segmentation algorithm is developed for analyzing postoperative MRI of epilepsy patients and deployed in an easy-to-use graphical user interface (GUI) that estimates remnant brain volumes, including postsurgical hippocampal remnant tissue. This retrospective study included postoperative T1-weighted MRI from 62 temporal lobe epilepsy (TLE) patients who underwent resective surgery. The resection site was manually segmented and reviewed by a neuroradiologist (JMS). A majority-vote ensemble algorithm was used to segment surgical resections, using three U-Net convolutional neural networks trained on axial, coronal, and sagittal slices, respectively. The algorithm was trained using 5-fold cross-validation, with data partitioned into training (N = 27), testing (N = 9), and validation (N = 9) sets, and evaluated on a separate held-out test set (N = 17). Algorithm performance was assessed using the Dice-Sørensen coefficient (DSC), Hausdorff distance, and volume estimates. Additionally, we deploy a fully automated, GUI-based pipeline that compares resection segmentations with preoperative imaging and reports estimates of resected brain structures. The cross-validation and held-out test median DSCs were 0.84 ± 0.08 and 0.74 ± 0.22 (median ± interquartile range), respectively, which approach inter-rater reliability between radiologists (0.84-0.86) as reported in the literature. Median 95% Hausdorff distances were 3.6 mm and 4.0 mm, respectively, indicating high segmentation boundary confidence. Automated and manual resection volume estimates were highly correlated for both cross-validation (r = 0.94, p < 0.0001) and held-out test subjects (r = 0.87, p < 0.0001). Automated and manual segmentations overlapped in all 62 subjects, indicating a low false negative rate. In control subjects (N = 40), the classifier segmented no voxels (N = 33), fewer than 50 voxels (N = 5), or small volumes below 0.5 cm³ (N = 2), indicating a low false positive rate that can be controlled via thresholding. There was strong agreement between postoperative hippocampal remnant volumes determined using automated and manual resection segmentations (r = 0.90, p < 0.0001, mean absolute error = 6.3%), indicating that automated resection segmentations can permit quantification of postoperative brain volumes after epilepsy surgery. Applications include quantification of postoperative remnant brain volumes, correction of deformable registration, and localization of removed brain regions for network modeling.
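The two mechanisms this abstract relies on, a per-voxel majority vote over three orientation-specific predictions and the Dice-Sørensen coefficient used to score them, can be sketched with illustrative toy masks (not the authors' code):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice-Sørensen coefficient (DSC) of two boolean masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Hypothetical binary predictions from networks trained on axial, coronal,
# and sagittal slices, flattened to 1D for brevity:
axial    = np.array([1, 1, 0, 0, 1], dtype=bool)
coronal  = np.array([1, 0, 0, 1, 1], dtype=bool)
sagittal = np.array([1, 1, 0, 0, 0], dtype=bool)

# Majority vote: a voxel is labeled resection cavity when at least
# 2 of the 3 networks agree.
ensemble = (axial.astype(int) + coronal.astype(int) + sagittal.astype(int)) >= 2

truth = np.array([1, 1, 0, 0, 1], dtype=bool)
print(ensemble.tolist())      # [True, True, False, False, True]
print(dice(ensemble, truth))  # 1.0 (the vote matches the truth exactly)
```

The vote suppresses the disagreements of the individual networks at positions 1, 3 and 4, which is why ensembling over orientations can outperform any single per-orientation model.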
Affiliation(s)
- T Campbell Arnold
- Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Ramya Muthukrishnan
- Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Computer Science, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Akash R Pattnaik
- Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Nishant Sinha
- Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Adam Gibson
- Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Hannah Gonzalez
- Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Sandhitsu R Das
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Brian Litt
- Department of Bioengineering, School of Engineering & Applied Science, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Dario J Englot
- Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN 37232, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA; Department of Biomedical Engineering, Vanderbilt University Medical Center, Nashville, TN 37232, USA; Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN 37232, USA.
- Victoria L Morgan
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA; Department of Biomedical Engineering, Vanderbilt University Medical Center, Nashville, TN 37232, USA; Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN 37232, USA.
- Kathryn A Davis
- Center for Neuroengineering and Therapeutics, University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA.
- Joel M Stein
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA.
47
Antonelli M, Reinke A, Bakas S, Farahani K, Kopp-Schneider A, Landman BA, Litjens G, Menze B, Ronneberger O, Summers RM, van Ginneken B, Bilello M, Bilic P, Christ PF, Do RKG, Gollub MJ, Heckers SH, Huisman H, Jarnagin WR, McHugo MK, Napel S, Pernicka JSG, Rhode K, Tobon-Gomez C, Vorontsov E, Meakin JA, Ourselin S, Wiesenfarth M, Arbeláez P, Bae B, Chen S, Daza L, Feng J, He B, Isensee F, Ji Y, Jia F, Kim I, Maier-Hein K, Merhof D, Pai A, Park B, Perslev M, Rezaiifar R, Rippel O, Sarasua I, Shen W, Son J, Wachinger C, Wang L, Wang Y, Xia Y, Xu D, Xu Z, Zheng Y, Simpson AL, Maier-Hein L, Cardoso MJ. The Medical Segmentation Decathlon. Nat Commun 2022; 13:4128. [PMID: 35840566] [PMCID: PMC9287542] [DOI: 10.1038/s41467-022-30695-9]
Abstract
International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD), a biomedical image analysis challenge in which algorithms compete in a multitude of both tasks and modalities, to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems for the next two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate of algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized for scientists who are not versed in AI model training.
Affiliation(s)
- Michela Antonelli
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
- Annika Reinke
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany; HI Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, University of Heidelberg, Heidelberg, Germany.
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute (NIH), Bethesda, MD, USA.
- Bennett A Landman
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA.
- Geert Litjens
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands.
- Bjoern Menze
- Quantitative Biomedicine, University of Zurich, Zurich, Switzerland.
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center (NIH), Bethesda, MD, USA.
- Bram van Ginneken
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands.
- Michel Bilello
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA.
- Patrick Bilic
- Department of Informatics, Technische Universität München, München, Germany.
- Patrick F Christ
- Department of Informatics, Technische Universität München, München, Germany.
- Richard K G Do
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
- Marc J Gollub
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
- Stephan H Heckers
- Department of Psychiatry & Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Henkjan Huisman
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands.
- William R Jarnagin
- Department of Surgery, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
- Maureen K McHugo
- Department of Psychiatry & Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Sandy Napel
- Department of Radiology, Stanford University, Stanford, CA, USA.
- Kawal Rhode
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
- Catalina Tobon-Gomez
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
- Eugene Vorontsov
- Department of Computer Science and Software Engineering, École Polytechnique de Montréal, Montréal, QC, Canada.
- James A Meakin
- Radboud University Medical Center, Radboud Institute for Health Sciences, Nijmegen, The Netherlands.
- Sebastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
- Manuel Wiesenfarth
- Div. Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Laura Daza
- Universidad de los Andes, Bogota, Colombia.
- Jianjiang Feng
- Department of Automation, Tsinghua University, Beijing, China.
- Baochun He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Fabian Isensee
- HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Yuanfeng Ji
- Department of Computer Science, Xiamen University, Xiamen, China.
- Fucang Jia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China.
- Ildoo Kim
- Kakao Brain, Seongnam-si, Republic of Korea.
- Klaus Maier-Hein
- Cerebriu A/S, Copenhagen, Denmark; Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany.
- Dorit Merhof
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany.
- Akshay Pai
- Cerebriu A/S, Copenhagen, Denmark; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark.
- Mathias Perslev
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark.
- Oliver Rippel
- Institute of Imaging & Computer Vision, RWTH Aachen University, Aachen, Germany.
- Ignacio Sarasua
- Lab for Artificial Intelligence in Medical Imaging (AI-Med), Department of Child and Adolescent Psychiatry, University Hospital, LMU München, Germany.
- Wei Shen
- MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China.
- Christian Wachinger
- Lab for Artificial Intelligence in Medical Imaging (AI-Med), Department of Child and Adolescent Psychiatry, University Hospital, LMU München, Germany.
- Liansheng Wang
- Department of Computer Science, Xiamen University, Xiamen, China.
- Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China.
- Yingda Xia
- Johns Hopkins University, Baltimore, MD, USA.
- Zhanwei Xu
- Department of Automation, Tsinghua University, Beijing, China.
- Amber L Simpson
- School of Computing/Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada.
- Lena Maier-Hein
- Div. Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany; HI Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, University of Heidelberg, Heidelberg, Germany; Medical Faculty, University of Heidelberg, Heidelberg, Germany.
- M Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK.
48
Wardlaw JM, Mair G, von Kummer R, Williams MC, Li W, Storkey AJ, Trucco E, Liebeskind DS, Farrall A, Bath PM, White P. Accuracy of Automated Computer-Aided Diagnosis for Stroke Imaging: A Critical Evaluation of Current Evidence. Stroke 2022; 53:2393-2403. [PMID: 35440170] [DOI: 10.1161/strokeaha.121.036204]
Abstract
There is increasing interest in computer applications that use artificial intelligence methodologies to perform health care tasks previously performed by humans, particularly in medical imaging for diagnosis. In stroke, there is now commercial artificial intelligence software for use with computed tomography or MR imaging to identify acute ischemic brain tissue pathology, arterial obstruction on computed tomography angiography or as hyperattenuated arteries on computed tomography, brain hemorrhage, or the size of perfusion defects. A rapid, accurate diagnosis may aid treatment decisions for individual patients and could improve outcome if it leads to effective and safe treatment, or conversely, to disaster if a delayed or incorrect diagnosis results in inappropriate treatment. Despite this potential clinical impact, diagnostic tools, including artificial intelligence methods, are not subjected to the same clinical evaluation standards as are mandatory for drugs. Here, we provide an evidence-based review of the pros and cons of commercially available automated methods for medical imaging diagnosis, including those based on artificial intelligence, to diagnose acute brain pathology on computed tomography or magnetic resonance imaging in patients with stroke.
Affiliation(s)
- Joanna M Wardlaw
- Centre for Clinical Brain Sciences, UK Dementia Research Institute Centre at the University of Edinburgh, Little France, United Kingdom (J.M.W., G.M., W.L., A.F.).
- Grant Mair
- Centre for Clinical Brain Sciences, UK Dementia Research Institute Centre at the University of Edinburgh, Little France, United Kingdom (J.M.W., G.M., W.L., A.F.).
- Rüdiger von Kummer
- Institute of Diagnostic and Interventional Neuroradiology, Universitätsklinikum Carl Gustav Carus, Dresden, Germany (R.v.K.).
- Michelle C Williams
- Centre for Cardiovascular Science, University of Edinburgh, Little France, United Kingdom (M.C.W.).
- Wenwen Li
- Centre for Clinical Brain Sciences, UK Dementia Research Institute Centre at the University of Edinburgh, Little France, United Kingdom (J.M.W., G.M., W.L., A.F.).
- Emanuel Trucco
- VAMPIRE project, Computing, School of Science and Engineering, University of Dundee (E.T.).
- Andrew Farrall
- Centre for Clinical Brain Sciences, UK Dementia Research Institute Centre at the University of Edinburgh, Little France, United Kingdom (J.M.W., G.M., W.L., A.F.).
- Philip M Bath
- Stroke Trials Unit, Mental Health & Clinical Neuroscience, University of Nottingham, Queen's Medical Centre campus, United Kingdom (P.M.B.).
- Philip White
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne and Newcastle upon Tyne Hospitals NHS Trust, United Kingdom (P.W.).
49
Saw SN, Ng KH. Current challenges of implementing artificial intelligence in medical imaging. Phys Med 2022; 100:12-17. [PMID: 35714523] [DOI: 10.1016/j.ejmp.2022.06.003]
Abstract
The idea of using artificial intelligence (AI) in medical practice has gained vast interest due to its potential to revolutionise healthcare systems. However, only some AI algorithms are utilised in practice, owing to uncertainties in these systems and a never-ending list of ethical and legal concerns. This paper provides an overview of current AI challenges in medical imaging, with the ultimate aim of fostering better and more effective communication among various stakeholders to encourage the development of AI technology. We identify four main challenges in implementing AI in medical imaging, supported by the consequences and past events that arise when these problems are not mitigated. The first is the creation of robust AI algorithms that are fair, trustable and transparent. The second is data governance, in which best practices in data sharing must be established to promote trust and protect patients' privacy. The third and fourth are for stakeholders, such as the government, technology companies and hospital management, to reach a consensus on trustworthy AI policies and on regulatory frameworks that support, encourage and spur innovation in digital AI healthcare technology. Lastly, we discuss the efforts of organizations such as the World Health Organisation (WHO), the American College of Radiology (ACR), the European Society of Radiology (ESR) and the Radiological Society of North America (RSNA), which are already actively pursuing ethical developments in AI. These efforts will eventually overcome the hurdles, and the deployment of AI-driven healthcare applications in clinical practice will become a reality, leading to better healthcare services and outcomes.
Affiliation(s)
- Shier Nee Saw
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia.
- Kwan Hoong Ng
- Department of Biomedical Imaging, Universiti Malaya, 50603 Kuala Lumpur, Malaysia; Department of Medical Imaging and Radiological Sciences, College of Health Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan.
50
Altini N, Prencipe B, Cascarano GD, Brunetti A, Brunetti G, Triggiani V, Carnimeo L, Marino F, Guerriero A, Villani L, Scardapane A, Bevilacqua V. Liver, kidney and spleen segmentation from CT scans and MRI with deep learning: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.157]