1
Haubold J, Zeng K, Farhand S, Stalke S, Steinberg H, Bos D, Meetschen M, Kureishi A, Zensen S, Goeser T, Maier S, Forsting M, Nensa F. AI co-pilot: content-based image retrieval for the reading of rare diseases in chest CT. Sci Rep 2023; 13:4336. PMID: 36928759; PMCID: PMC10020154; DOI: 10.1038/s41598-023-29949-3.
Abstract
The aim of the study was to evaluate the impact of the newly developed Similar Patient Search (SPS) Web Service, which supports the reading of complex lung diseases in computed tomography (CT), on the diagnostic accuracy of residents. SPS is an image-based search engine for pre-diagnosed cases along with related clinical reference content (https://eref.thieme.de). The reference database was constructed using 13,658 annotated regions of interest (ROIs) from 621 patients, comprising 69 lung diseases. For validation, 50 CT scans were evaluated by five radiology residents without SPS, and three months later with SPS. The residents could give a maximum of three diagnoses per case. A maximum of 3 points was awarded if the correct diagnosis was provided without any additional diagnoses. The residents achieved an average score of 17.6 ± 5.0 points without SPS. By using SPS, the residents increased their score by 81.8% to 32.0 ± 9.5 points. The improvement of the score per case was highly significant (p = 0.0001). The residents required an average of 205.9 ± 350.6 s per case (a 21.9% increase) when SPS was used. However, in the second half of the cases, after the residents had become more familiar with SPS, this increase dropped to 7%. Residents' average score in reading complex chest CT scans improved by 81.8% when the AI-driven SPS with integrated clinical reference content was used. The increase in time per case due to the use of SPS was minimal.
Affiliation(s)
- Johannes Haubold
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.
- Ke Zeng
- Siemens Medical Solutions Inc., Malvern, PA, USA
- Hannah Steinberg
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Denise Bos
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Mathias Meetschen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Anisa Kureishi
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Sebastian Zensen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Tim Goeser
- Department of Radiology and Neuroradiology, Kliniken Maria Hilf, Viersener Str. 450, 41063, Mönchengladbach, NRW, Germany
- Sandra Maier
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Michael Forsting
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Felix Nensa
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
2
Garcea F, Serra A, Lamberti F, Morra L. Data augmentation for medical imaging: A systematic literature review. Comput Biol Med 2023; 152:106391. PMID: 36549032; DOI: 10.1016/j.compbiomed.2022.106391.
Abstract
Recent advances in Deep Learning have largely benefited from larger and more diverse training sets. However, collecting large datasets for medical imaging is still a challenge due to privacy concerns and labeling costs. Data augmentation makes it possible to greatly expand the amount and variety of data available for training without actually collecting new samples. Data augmentation techniques range from simple yet surprisingly effective transformations such as cropping, padding, and flipping, to complex generative models. Depending on the nature of the input and the visual task, different data augmentation strategies are likely to perform differently. For this reason, it is conceivable that medical imaging requires specific augmentation strategies that generate plausible data samples and enable effective regularization of deep neural networks. Data augmentation can also be used to augment specific classes that are underrepresented in the training set, e.g., to generate artificial lesions. The goal of this systematic literature review is to investigate which data augmentation strategies are used in the medical domain and how they affect the performance of clinical tasks such as classification, segmentation, and lesion detection. To this end, a comprehensive analysis of more than 300 articles published in recent years (2018-2022) was conducted. The results highlight the effectiveness of data augmentation across organs, modalities, tasks, and dataset sizes, and suggest potential avenues for future research.
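The simple transformations the abstract mentions (cropping, padding, flipping) can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration of that basic augmentation family, not code from any of the reviewed articles:

```python
import numpy as np

def augment(image, rng):
    """Apply simple augmentations (flip, crop, pad) to a 2D image."""
    # Random horizontal flip with probability 0.5.
    if rng.random() < 0.5:
        image = np.fliplr(image)
    # Random crop to 90% of each dimension.
    h, w = image.shape
    ch, cw = int(h * 0.9), int(w * 0.9)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    image = image[top:top + ch, left:left + cw]
    # Zero-pad back to the original size so batch shapes stay constant.
    image = np.pad(image, ((0, h - ch), (0, w - cw)), mode="constant")
    return image

rng = np.random.default_rng(0)
sample = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
out = augment(sample, rng)
print(out.shape)  # shape preserved: (64, 64)
```

In practice such transforms are applied on the fly during training, so each epoch sees a slightly different version of every sample.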
Affiliation(s)
- Fabio Garcea
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Alessio Serra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Fabrizio Lamberti
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
- Lia Morra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, C.so Duca degli Abruzzi, 24, Torino, 10129, Italy
3
Seah J, Boeken T, Sapoval M, Goh GS. Prime Time for Artificial Intelligence in Interventional Radiology. Cardiovasc Intervent Radiol 2022; 45:283-289. PMID: 35031822; PMCID: PMC8921296; DOI: 10.1007/s00270-021-03044-4.
Abstract
Machine learning techniques, also known as artificial intelligence (AI), are about to dramatically change workflow and diagnostic capabilities in diagnostic radiology. Interest in AI in interventional radiology (IR) is rapidly gathering pace. With this early interest in AI in procedural medicine, IR could lead the way in AI research and clinical applications for all interventional medical fields. This review provides an overview of machine learning, radiomics and AI in the field of interventional radiology, enumerating the possible applications of such techniques and describing approaches to overcome the challenge of limited data when applying them in interventional radiology. Lastly, this review addresses common errors in research in this field and suggests pathways for those interested in learning about and becoming involved with AI.
Affiliation(s)
- Jarrel Seah
- Department of Radiology, Alfred Health, Melbourne, VIC, Australia
- Department of Neuroscience, Monash University, Melbourne, VIC, Australia
- Tom Boeken
- Vascular and Oncological Interventional Radiology, University of Paris, Hopital Européen Georges Pompidou, Paris, France
- Marc Sapoval
- Vascular and Oncological Interventional Radiology, University of Paris, Hopital Européen Georges Pompidou, Paris, France
- Gerard S Goh
- Department of Radiology, Alfred Health, Melbourne, VIC, Australia
- Department of Surgery, Central Clinical School, Monash University, Melbourne, VIC, Australia
- National Trauma Research Institute, Central Clinical School, Monash University, Melbourne, VIC, Australia
4
Seah J, Boeken T, Sapoval M, Goh GS. Prime Time for Artificial Intelligence in Interventional Radiology. Cardiovasc Intervent Radiol 2022; 45:283-289. PMID: 35031822; PMCID: PMC8921296; DOI: 10.1007/s00270-021-03044-4.
Abstract
Machine learning techniques, also known as artificial intelligence (AI), are about to dramatically change workflow and diagnostic capabilities in diagnostic radiology. Interest in AI in interventional radiology (IR) is rapidly gathering pace. With this early interest in AI in procedural medicine, IR could lead the way in AI research and clinical applications for all interventional medical fields. This review provides an overview of machine learning, radiomics and AI in the field of interventional radiology, enumerating the possible applications of such techniques and describing approaches to overcome the challenge of limited data when applying them in interventional radiology. Lastly, this review addresses common errors in research in this field and suggests pathways for those interested in learning about and becoming involved with AI.
5
Pérez-García F, Sparks R, Ourselin S. TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Comput Methods Programs Biomed 2021; 208:106236. PMID: 34311413; DOI: 10.1016/j.cmpb.2021.106236.
Abstract
BACKGROUND AND OBJECTIVE Processing of medical images such as MRI or CT presents different challenges compared to RGB images typically used in computer vision. These include a lack of labels for large datasets, high computational costs, and the need for metadata to describe the physical properties of voxels. Data augmentation is used to artificially increase the size of the training datasets. Training with image subvolumes or patches decreases the need for computational power. Spatial metadata needs to be carefully taken into account in order to ensure correct alignment and orientation of volumes. METHODS We present TorchIO, an open-source Python library that enables efficient loading, preprocessing, augmentation and patch-based sampling of medical images for deep learning. TorchIO follows the style of PyTorch and integrates standard medical image processing libraries to efficiently process images during the training of neural networks. TorchIO transforms can be easily composed, reproduced, traced and extended. Most transforms can be inverted, making the library suitable for test-time augmentation and estimation of aleatoric uncertainty in the context of segmentation. We provide multiple generic preprocessing and augmentation operations as well as simulation of MRI-specific artifacts. RESULTS Source code, comprehensive tutorials and extensive documentation for TorchIO can be found at http://torchio.rtfd.io/. The package can be installed from the Python Package Index (PyPI) by running pip install torchio. It includes a command-line interface which allows users to apply transforms to image files without using Python. Additionally, we provide a graphical user interface within a TorchIO extension in 3D Slicer to visualize the effects of transforms. CONCLUSION TorchIO was developed to help researchers standardize medical image processing pipelines and allow them to focus on their deep learning experiments. It encourages good open-science practices, as it supports experiment reproducibility and is version-controlled so that the software can be cited precisely. Due to its modularity, the library is compatible with other frameworks for deep learning with medical images.
Affiliation(s)
- Fernando Pérez-García
- Department of Medical Physics and Biomedical Engineering, University College London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK.
- Rachel Sparks
- School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
6
Pérez-García F, Sparks R, Ourselin S. TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Comput Methods Programs Biomed 2021; 208:106236. PMID: 34311413; PMCID: PMC8542803; DOI: 10.1016/j.cmpb.2021.106236.
Abstract
BACKGROUND AND OBJECTIVE Processing of medical images such as MRI or CT presents different challenges compared to RGB images typically used in computer vision. These include a lack of labels for large datasets, high computational costs, and the need for metadata to describe the physical properties of voxels. Data augmentation is used to artificially increase the size of the training datasets. Training with image subvolumes or patches decreases the need for computational power. Spatial metadata needs to be carefully taken into account in order to ensure correct alignment and orientation of volumes. METHODS We present TorchIO, an open-source Python library that enables efficient loading, preprocessing, augmentation and patch-based sampling of medical images for deep learning. TorchIO follows the style of PyTorch and integrates standard medical image processing libraries to efficiently process images during the training of neural networks. TorchIO transforms can be easily composed, reproduced, traced and extended. Most transforms can be inverted, making the library suitable for test-time augmentation and estimation of aleatoric uncertainty in the context of segmentation. We provide multiple generic preprocessing and augmentation operations as well as simulation of MRI-specific artifacts. RESULTS Source code, comprehensive tutorials and extensive documentation for TorchIO can be found at http://torchio.rtfd.io/. The package can be installed from the Python Package Index (PyPI) by running pip install torchio. It includes a command-line interface which allows users to apply transforms to image files without using Python. Additionally, we provide a graphical user interface within a TorchIO extension in 3D Slicer to visualize the effects of transforms. CONCLUSION TorchIO was developed to help researchers standardize medical image processing pipelines and allow them to focus on their deep learning experiments. It encourages good open-science practices, as it supports experiment reproducibility and is version-controlled so that the software can be cited precisely. Due to its modularity, the library is compatible with other frameworks for deep learning with medical images.
Affiliation(s)
- Fernando Pérez-García
- Department of Medical Physics and Biomedical Engineering, University College London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK.
- Rachel Sparks
- School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
7
Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. PMID: 34145766; DOI: 10.1111/1754-9485.13261.
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data which is used to train the model and has been shown to improve performance when validated on a separate unseen dataset. This approach has become commonplace, so to help understand the types of data augmentation techniques used in state-of-the-art deep learning models, we conducted a systematic review of the literature where data augmentation was used on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised into basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
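Of the categories in the abstract above, "deformable" techniques warp the image geometry rather than merely flipping or cropping it. A minimal, hypothetical NumPy sketch of an elastic deformation (a smoothed random displacement field with nearest-neighbour resampling) might look like this; real pipelines typically rely on library implementations rather than hand-rolled code:

```python
import numpy as np

def elastic_deform(image, alpha=8.0, sigma=4, seed=0):
    """Warp a 2D image with a smoothed random displacement field."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Random per-pixel displacements in [-1, 1], smoothed with a box blur.
    dx = rng.uniform(-1, 1, (h, w))
    dy = rng.uniform(-1, 1, (h, w))
    kernel = np.ones(2 * sigma + 1) / (2 * sigma + 1)
    for d in (dx, dy):
        for axis in (0, 1):
            d[:] = np.apply_along_axis(
                lambda m: np.convolve(m, kernel, mode="same"), axis, d)
    # Displace each pixel coordinate and resample (nearest neighbour).
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    new_ys = np.clip(np.rint(ys + alpha * dy), 0, h - 1).astype(int)
    new_xs = np.clip(np.rint(xs + alpha * dx), 0, w - 1).astype(int)
    return image[new_ys, new_xs]

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0  # a square stand-in for a lesion
warped = elastic_deform(img)
print(warped.shape)  # (32, 32)
```

Because the displacement field is smooth, neighbouring pixels move together, producing the anatomically plausible distortions that make this family of transforms popular for medical images.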
Affiliation(s)
- Phillip Chlap
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia