1. Fiagbedzi E, Hasford F, Tagoe SN. The influence of artificial intelligence on the work of the medical physicist in radiotherapy practice: a short review. BJR Open 2023; 5:20230003. PMID: 37942499; PMCID: PMC10630976; DOI: 10.1259/bjro.20230003.
Abstract
Artificial intelligence (AI) has influenced many sectors and their professionals, and radiotherapy and the medical physicist are no exception. AI and technological advances have changed the roles of medical physicists, driven by the development of modern technology with image-guided accessories for the radiotherapy treatment of cancer patients. Given the medical physicist's role in ensuring patient safety and optimal care, AI can reshape radiotherapy practice now and in the years to come. Medical physicists' roles in radiotherapy practice have evolved alongside technology to support better patient care in the age of modern radiotherapy. This short review provides insight into the influence of AI on the changing role of medical physicists at each stage of the radiotherapy workflow in which they are involved.
Affiliation(s)
- Francis Hasford: Department of Medical Physics, University of Ghana, Accra, Ghana
- Samuel Nii Tagoe: Department of Medical Physics, University of Ghana, Accra, Ghana
2. Hosny A, Bitterman DS, Guthier CV, Qian JM, Roberts H, Perni S, Saraf A, Peng LC, Pashtan I, Ye Z, Kann BH, Kozono DE, Christiani D, Catalano PJ, Aerts HJWL, Mak RH. Clinical validation of deep learning algorithms for radiotherapy targeting of non-small-cell lung cancer: an observational study. Lancet Digit Health 2022; 4:e657-e666. PMID: 36028289; PMCID: PMC9435511; DOI: 10.1016/s2589-7500(22)00129-7.
Abstract
BACKGROUND Artificial intelligence (AI) and deep learning have shown great potential in streamlining clinical tasks. However, most studies remain confined to in silico validation in small internal cohorts, without external validation or data on real-world clinical utility. We developed a strategy for the clinical validation of deep learning models for segmenting primary non-small-cell lung cancer (NSCLC) tumours and involved lymph nodes in CT images, which is a time-intensive step in radiation treatment planning, with large variability among experts. METHODS In this observational study, CT images and segmentations were collected from eight internal and external sources from the USA, the Netherlands, Canada, and China, with patients from the Maastro and Harvard-RT1 datasets used for model discovery (segmented by a single expert). Validation consisted of interobserver and intraobserver benchmarking, primary validation, functional validation, and end-user testing on the following datasets: multi-delineation, Harvard-RT1, Harvard-RT2, RTOG-0617, NSCLC-radiogenomics, Lung-PET-CT-Dx, RIDER, and thorax phantom. Primary validation consisted of stepwise testing on increasingly external datasets using measures of overlap including volumetric dice (VD) and surface dice (SD). Functional validation explored dosimetric effect, model failure modes, test-retest stability, and accuracy. End-user testing with eight experts assessed automated segmentations in a simulated clinical setting. FINDINGS We included 2208 patients imaged between 2001 and 2015, with 787 patients used for model discovery and 1421 for model validation, including 28 patients for end-user testing. Models showed an improvement over the interobserver benchmark (multi-delineation dataset; VD 0·91 [IQR 0·83-0·92], p=0·0062; SD 0·86 [0·71-0·91], p=0·0005), and were within the intraobserver benchmark. 
For primary validation, AI performance on internal Harvard-RT1 data (segmented by the same expert who segmented the discovery data) was VD 0·83 (IQR 0·76-0·88) and SD 0·79 (0·68-0·88), within the interobserver benchmark. Performance on internal Harvard-RT2 data segmented by other experts was VD 0·70 (0·56-0·80) and SD 0·50 (0·34-0·71). Performance on RTOG-0617 clinical trial data was VD 0·71 (0·60-0·81) and SD 0·47 (0·35-0·59), with similar results on diagnostic radiology datasets NSCLC-radiogenomics and Lung-PET-CT-Dx. Despite these geometric overlap results, models yielded target volumes with equivalent radiation dose coverage to those of experts. We also found non-significant differences between de novo expert and AI-assisted segmentations. AI assistance led to a 65% reduction in segmentation time (5·4 min; p<0·0001) and a 32% reduction in interobserver variability (SD; p=0·013). INTERPRETATION We present a clinical validation strategy for AI models. We found that in silico geometric segmentation metrics might not correlate with clinical utility of the models. Experts' segmentation style and preference might affect model performance. FUNDING US National Institutes of Health and EU European Research Council.
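The study's primary overlap measure, volumetric Dice (VD), is straightforward to compute for a pair of binary segmentation masks. Below is a minimal numpy sketch; the function name and toy masks are illustrative, and the paper's surface dice (SD), which additionally requires a tolerance on surface distances, is omitted here.

```python
import numpy as np

def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice overlap between two binary segmentation masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy 3D masks that partially overlap
pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:3] = True   # 8 voxels
truth[1:3, 1:3, 1:4] = True  # 12 voxels, 8 shared

print(round(volumetric_dice(pred, truth), 3))  # prints 0.8
```

A Dice of 1.0 means perfect overlap; the benchmark values above (VD around 0.7-0.9) are typical of expert-level agreement for lung targets.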
Affiliation(s)
- Ahmed Hosny: Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Danielle S Bitterman: Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA; Computational Health Informatics Program, Boston Children's Hospital, Boston, MA
- Christian V Guthier: Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Jack M Qian: Harvard Radiation Oncology Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Mass General Brigham, Boston, MA
- Hannah Roberts: Harvard Radiation Oncology Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Mass General Brigham, Boston, MA
- Subha Perni: Harvard Radiation Oncology Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Mass General Brigham, Boston, MA
- Anurag Saraf: Harvard Radiation Oncology Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Mass General Brigham, Boston, MA
- Luke C Peng: Harvard Radiation Oncology Program, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Mass General Brigham, Boston, MA
- Itai Pashtan: Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Zezhong Ye: Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Benjamin H Kann: Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- David E Kozono: Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- David Christiani: Harvard T H Chan School of Public Health, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Paul J Catalano: Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hugo J W L Aerts: Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA; Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, Netherlands
- Raymond H Mak: Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA; Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
3. Abolaban FA. Review of recent impacts of artificial intelligence for radiation therapy procedures. Radiat Phys Chem 2022. DOI: 10.1016/j.radphyschem.2022.110469.
4. Osman AFI, Tamam NM. Attention-aware 3D U-Net convolutional neural network for knowledge-based planning 3D dose distribution prediction of head-and-neck cancer. J Appl Clin Med Phys 2022; 23:e13630. PMID: 35533234; PMCID: PMC9278691; DOI: 10.1002/acm2.13630.
Abstract
PURPOSE Deep learning-based knowledge-based planning (KBP) methods have been introduced for radiotherapy dose distribution prediction to reduce the planning time and maintain consistent high-quality plans. This paper presents a novel KBP model using an attention-gating mechanism and a three-dimensional (3D) U-Net for intensity-modulated radiation therapy (IMRT) 3D dose distribution prediction in head-and-neck cancer. METHODS A total of 340 head-and-neck cancer plans, representing the OpenKBP-2020 AAPM Grand Challenge data set, were used in this study. All patients were treated with the IMRT technique and a dose prescription of 70 Gy. The data set was randomly divided into 64%/16%/20% as training/validation/testing cohorts. An attention-gated 3D U-Net architecture model was developed to predict full 3D dose distribution. The developed model was trained using the mean-squared error loss function, Adam optimization algorithm, a learning rate of 0.001, 120 epochs, and batch size of 4. In addition, a baseline U-Net model was also similarly trained for comparison. The model performance was evaluated on the testing data set by comparing the generated dose distributions against the ground-truth dose distributions using dose statistics and clinical dosimetric indices. Its performance was also compared to the baseline model and the reported results of other deep learning-based dose prediction models. RESULTS The proposed attention-gated 3D U-Net model showed high capability in accurately predicting 3D dose distributions that closely replicated the ground-truth dose distributions of 68 plans in the test set. The average value of the mean absolute dose error was 2.972 ± 1.220 Gy (vs. 2.920 ± 1.476 Gy for a baseline U-Net) in the brainstem, 4.243 ± 1.791 Gy (vs. 4.530 ± 2.295 Gy for a baseline U-Net) in the left parotid, 4.622 ± 1.975 Gy (vs. 4.223 ± 1.816 Gy for a baseline U-Net) in the right parotid, 3.346 ± 1.198 Gy (vs. 
2.958 ± 0.888 Gy for a baseline U-Net) in the spinal cord, 6.582 ± 3.748 Gy (vs. 5.114 ± 2.098 Gy for a baseline U-Net) in the esophagus, 4.756 ± 1.560 Gy (vs. 4.992 ± 2.030 Gy for a baseline U-Net) in the mandible, 4.501 ± 1.784 Gy (vs. 4.925 ± 2.347 Gy for a baseline U-Net) in the larynx, 2.494 ± 0.953 Gy (vs. 2.648 ± 1.247 Gy for a baseline U-Net) in the PTV_70, and 2.432 ± 2.272 Gy (vs. 2.811 ± 2.896 Gy for a baseline U-Net) in the body contour. The average difference in predicting the D99 value for the targets (PTV_70, PTV_63, and PTV_56) was 2.50 ± 1.77 Gy. For the organs at risk, the average difference in predicting the Dmax (brainstem, spinal cord, and mandible) and Dmean (left parotid, right parotid, esophagus, and larynx) values was 1.43 ± 1.01 and 2.44 ± 1.73 Gy, respectively. The average value of the homogeneity index was 7.99 ± 1.45 for the predicted plans versus 5.74 ± 2.95 for the ground-truth plans, whereas the average value of the conformity index was 0.63 ± 0.17 for the predicted plans versus 0.89 ± 0.19 for the ground-truth plans. The proposed model needs less than 5 s to predict a full 3D dose distribution of 64 × 64 × 64 voxels for a new patient, which is sufficient for real-time applications. CONCLUSIONS The attention-gated 3D U-Net model demonstrated a capability in predicting accurate 3D dose distributions for head-and-neck IMRT plans with consistent quality. The prediction performance of the proposed model was overall superior to a baseline standard U-Net model, and it was also competitive with the performance of the best state-of-the-art dose prediction method reported in the literature. The proposed model could be used to obtain dose distributions for decision-making before planning, quality assurance of planning, and guiding automated planning for improved plan consistency, quality, and planning efficiency.
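The dosimetric indices reported above (mean absolute dose error, D99, Dmax, Dmean) can all be computed directly from a 3D dose array and a binary structure mask. This is a hedged numpy sketch: the function names and toy dose grids are illustrative, and clinical DVH metrics would of course be computed on the planning system's calibrated dose grid.

```python
import numpy as np

def mean_abs_dose_error(pred_dose, true_dose, mask):
    """Mean absolute dose error (Gy) restricted to the voxels of one structure."""
    mask = mask.astype(bool)
    return float(np.abs(pred_dose[mask] - true_dose[mask]).mean())

def dose_indices(dose, mask):
    """Dmax and Dmean (Gy) of a structure, as used for OAR reporting."""
    vals = dose[mask.astype(bool)]
    return float(vals.max()), float(vals.mean())

def d99(dose, mask):
    """D99: minimum dose covering 99% of the structure volume (1st percentile)."""
    return float(np.percentile(dose[mask.astype(bool)], 1))

# Toy example: a "true" dose grid and a noisy "predicted" one
rng = np.random.default_rng(0)
true_dose = rng.uniform(0, 70, size=(8, 8, 8))
pred_dose = true_dose + rng.normal(0, 2, size=(8, 8, 8))
organ = np.zeros((8, 8, 8), dtype=bool)
organ[2:6, 2:6, 2:6] = True  # toy structure mask

mae = mean_abs_dose_error(pred_dose, true_dose, organ)
dmax, dmean = dose_indices(pred_dose, organ)
```

With a prediction noise of σ = 2 Gy, the mean absolute error lands near 1.6 Gy, comparable in scale to the per-organ errors reported in the abstract.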
Affiliation(s)
- Nissren M Tamam: Department of Physics, College of Science, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
5. Almberg SS, Lervåg C, Frengen J, Eidem M, Abramova T, Nordstrand C, Alsaker M, Tøndel H, Raj SX, Wanderås AD. Training, validation, and clinical implementation of a deep-learning segmentation model for radiotherapy of loco-regional breast cancer. Radiother Oncol 2022; 173:62-68. DOI: 10.1016/j.radonc.2022.05.018.
6. A High-Throughput In Vitro Radiobiology Platform for Megavoltage Photon Linear Accelerator Studies. Appl Sci (Basel) 2022. DOI: 10.3390/app12031456.
Abstract
We designed and developed a multiwell tissue culture plate irradiation setup, and intensity-modulated radiotherapy plans were generated for 96-, 24-, and 6-well tissue culture plates. We demonstrated concordance between planned and measured/imaged radiation dose profiles using radiochromic film, a 2D ion chamber array, and an electronic portal-imaging device. Cell viability, clonogenic potential, and γ-H2AX foci analyses showed no significant differences between intensity-modulated radiotherapy and open-field, homogeneous irradiations. This novel platform may help to expedite radiobiology experiments within a clinical environment and may be used for wide-ranging ex vivo radiobiology applications.
7. Kruis MF. Improving radiation physics, tumor visualisation, and treatment quantification in radiotherapy with spectral or dual-energy CT. J Appl Clin Med Phys 2021; 23:e13468. PMID: 34743405; PMCID: PMC8803285; DOI: 10.1002/acm2.13468.
Abstract
Over the past decade, spectral or dual-energy CT has gained relevance, especially in oncological radiology. Nonetheless, its use in the radiotherapy (RT) clinic remains limited. This review article aims to give an overview of the current state of spectral CT and to explore opportunities for applications in RT. In this article, three groups of benefits of spectral CT over conventional CT in RT are recognized. First, spectral CT provides more information about the physical properties of the body, which can improve dose calculation. Furthermore, it improves the visibility of tumors, for a wide variety of malignancies as well as for organs-at-risk (OARs), which could reduce treatment uncertainty. Finally, spectral CT provides quantitative physiological information, which can be used to personalize and quantify treatment.
8. Jamtheim Gustafsson C, Lempart M, Swärd J, Persson E, Nyholm T, Thellenberg Karlsson C, Scherman J. Deep learning-based classification and structure name standardization for organ at risk and target delineations in prostate cancer radiotherapy. J Appl Clin Med Phys 2021; 22:51-63. PMID: 34623738; PMCID: PMC8664152; DOI: 10.1002/acm2.13446.
Abstract
Radiotherapy (RT) datasets can suffer from variations in the annotation of organ at risk (OAR) and target structures. Annotation standards exist, but their description for prostate targets is limited. This restricts the use of such data for supervised machine learning, which requires properly annotated data. The aim of this work was to develop a modality-independent deep learning (DL) model for automatic classification and annotation of prostate RT DICOM structures. Delineated prostate OARs, support structures, and target structures (gross tumor volume [GTV]/clinical target volume [CTV]/planning target volume [PTV]), with or without separate vesicles and/or lymph nodes, were extracted as binary masks from 1854 patients. An image-modality-independent 2D InceptionResNetV2 classification network was trained with varying amounts of training data using four image input channels. Channels 1-3 consisted of orthogonal 2D projections from each individual binary structure. The fourth channel contained a summation of the other available binary structure masks. Structure classification performance was assessed in independent CT (n = 200 patients) and magnetic resonance imaging (MRI) (n = 40 patients) test datasets and an external CT dataset (n = 99 patients) from another clinic. A weighted classification accuracy of 99.4% was achieved during training. The unweighted classification accuracy and the weighted average F1 score among different structures were 98.8% and 98.4% in the CT test dataset and 98.6% and 98.5% in the MRI test dataset, respectively. The external CT dataset yielded 98.4% and 98.7%, respectively, when analyzed for trained structures only; results from the full dataset were 79.6% and 75.2%. Most misclassifications in the external CT dataset occurred because multiple CTVs and PTVs were fused together, which was not represented in the training data. Our proposed DL-based method for automated renaming and standardization of prostate radiotherapy annotations shows great potential. Clinic-specific contouring standards, however, need to be represented in the training data for successful use. Source code is available at https://github.com/jamtheim/DicomRTStructRenamerPublic.
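The four-channel input described in the abstract can be sketched for equal-sized binary masks. This is an illustrative numpy reconstruction, not the authors' code (their preprocessing, resizing and normalisation details are in the linked repository); in particular, reducing the summed mask to a 2D projection for the fourth channel is an assumption made here so all channels share one 2D shape.

```python
import numpy as np

def structure_input_channels(structure, others):
    """Build a (H, W, 4) input for one structure (assumes cubic, equal-sized masks).
    Channels 0-2: the structure's three orthogonal 2D max projections.
    Channel 3: projection of the summed remaining structure masks (assumption)."""
    proj = [structure.max(axis=ax) for ax in range(3)]  # sagittal/coronal/axial projections
    summed = np.sum(others, axis=0) if others else np.zeros_like(structure)
    proj.append(summed.max(axis=0))
    return np.stack([p.astype(np.float32) for p in proj], axis=-1)

# Toy masks: a PTV-like cube and one other structure
ptv = np.zeros((64, 64, 64), dtype=np.uint8)
ptv[20:40, 20:40, 20:40] = 1
bladder = np.zeros_like(ptv)
bladder[10:20, 25:35, 25:35] = 1

x = structure_input_channels(ptv, [bladder])
print(x.shape)  # prints (64, 64, 4)
```

Encoding each structure as projections plus a context channel of all other masks is what makes the classifier independent of the underlying image modality: it never sees CT or MRI intensities.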
Affiliation(s)
- Christian Jamtheim Gustafsson: Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden; Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden
- Michael Lempart: Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden; Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden
- Johan Swärd: Centre for Mathematical Sciences, Mathematical Statistics, Lund University, Lund, Sweden
- Emilia Persson: Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden; Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden
- Tufve Nyholm: Department of Radiation Sciences, Radiation Physics, Umeå University, Umeå, Sweden
- Jonas Scherman: Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
9. Malamateniou C, McFadden S, McQuinlan Y, England A, Woznitza N, Goldsworthy S, Currie C, Skelton E, Chu KY, Alware N, Matthews P, Hawkesford R, Tucker R, Town W, Matthew J, Kalinka C, O'Regan T. Artificial Intelligence: Guidance for clinical imaging and therapeutic radiography professionals, a summary by the Society of Radiographers AI working group. Radiography (Lond) 2021; 27:1192-1202. PMID: 34420888; DOI: 10.1016/j.radi.2021.07.028.
Abstract
INTRODUCTION Artificial intelligence (AI) is increasingly being adopted in medical imaging and radiotherapy clinical practice; however, research, education and partnerships have not yet caught up to facilitate a safe and effective transition. The aim of this document is to provide baseline guidance for radiographers working with AI in education, research, clinical practice and stakeholder partnerships. The guideline is intended for use by multi-professional clinical imaging and radiotherapy teams, including all staff, volunteers, students and learners. METHODS The format mirrored similar publications from other SCoR working groups. The recommendations were subject to a rapid period of peer, professional and patient assessment and review. Feedback was sought from a range of SoR members and advisory groups, from the SoR director of professional policy, and from external experts. Amendments were then made in line with the feedback received, and a final consensus was reached. RESULTS AI is an innovative tool that radiographers will need to engage with to ensure a safe and efficient clinical service in imaging and radiotherapy. Educational provision will need to be proportionately adjusted by Higher Education Institutions (HEIs) to offer the necessary knowledge, skills and competences for diagnostic and therapeutic radiographers, enabling them to navigate a future in which AI will be central to patient diagnosis and treatment pathways. Radiography-led research in AI should address key clinical challenges and enable radiographers to co-design, implement and validate AI solutions. Partnerships are key to ensuring that the contribution of radiographers is integrated into healthcare AI ecosystems for the benefit of patients and service users. CONCLUSION Radiography is starting to work towards a future with AI-enabled healthcare. This guidance offers recommendations for different areas of radiography practice. There is a need to update educational curricula, rethink research priorities, and forge strong new clinical-academic-industry partnerships to optimise clinical practice. Specific recommendations in relation to clinical practice, education, research and the forging of partnerships with key stakeholders are discussed, with potential impact on policy and practice in all these domains. These recommendations aim to serve as baseline guidance for UK radiographers. IMPLICATIONS FOR PRACTICE This review offers up-to-date recommendations for clinical practitioners, researchers, academics and service users of clinical imaging and therapeutic radiography services. Radiography practice, education and research must gradually adjust to AI-enabled healthcare systems to ensure that the gains of AI technologies are maximised and the challenges and risks are minimised. This guidance will need to be updated regularly given the fast pace of AI development and innovation.
Affiliation(s)
- C Malamateniou: Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, City, University of London, Northampton Square, London, EC1V 0HB, UK; Perinatal Imaging and Health, King's College, London, UK
- S McFadden: School of Health Sciences, Ulster University, Belfast, Northern Ireland, BT37 0QB, UK
- Y McQuinlan: Mirada Medical, UK; Honorary Dosimetrist, Guy's and St Thomas' NHS Trust, UK
- A England: School of Allied Health Professions, Keele University, Staffordshire, UK
- N Woznitza: Radiology Department, University College London Hospitals, UK; School of Allied and Public Health Professions, Canterbury Christ Church University, UK
- S Goldsworthy: Beacon Radiotherapy, Musgrove Park Hospital, Somerset NHS Foundation Trust, Taunton, TA1 5DA, UK
- C Currie: Programme Lead MSc Diagnostic Imaging, Glasgow Caledonian University, UK; MRI Specialist Radiographer, Queen Elizabeth University Hospital, Glasgow, UK
- E Skelton: Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, City, University of London, Northampton Square, London, EC1V 0HB, UK; Perinatal Imaging and Health, King's College, London, UK
- K-Y Chu: Department of Oncology, University of Oxford, UK; Radiotherapy Department, Oxford University Hospitals NHS FT, UK
- N Alware: King George Hospital, BHRUT NHS Trust, London, UK
- P Matthews: Diagnostic Imaging Department, Surrey & Sussex Healthcare NHS Trust, UK
- R Tucker: School of Allied Health and Social Care, College of Health, Psychology and Social Care, University of Derby, UK; Radiology Department, Nottingham University Hospital NHS Trust, UK
- W Town: Dartford and Gravesham NHS Trust, UK
- J Matthew: Department of Radiography, Division of Midwifery and Radiography, School of Health Sciences, City, University of London, Northampton Square, London, EC1V 0HB, UK; School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, London, SE1 7EH, UK
- C Kalinka: Society and College of Radiographers, UK; Programme Manager, Strategic Programme Unit, NHS Collaborative, Wales, United Kingdom
- T O'Regan: The Society and College of Radiographers, 207 Providence Square, Mill Street, London, UK
10. Malamateniou C, Knapp KM, Pergola M, Woznitza N, Hardy M. Artificial intelligence in radiography: Where are we now and what does the future hold? Radiography (Lond) 2021; 27 Suppl 1:S58-S62. PMID: 34380589; DOI: 10.1016/j.radi.2021.07.015.
Abstract
OBJECTIVES This paper outlines the status and basic principles of artificial intelligence (AI) in radiography, along with some thoughts and suggestions on what the future might hold. While the authors are not always able to separate the current status from future developments in this field, given the speed of innovation in AI, every effort has been made to give a view of the present with projections into the future. KEY FINDINGS AI is increasingly being integrated within radiography, and radiographers will increasingly work with AI-based tools in the future. As new AI tools are developed, it is essential that robust validation is undertaken on unseen data, supported by more prospective interdisciplinary research. A framework of stronger, more comprehensive approvals is recommended, and the involvement of service users, including practitioners, patients and their carers, in the design and implementation of AI tools is essential. Clearer accountability and medicolegal frameworks are required in cases of erroneous results from the use of AI-powered software and hardware. Clearer career pathways and role-extension provision for healthcare practitioners, including radiographers, are required, along with education in this field, where AI will be central. CONCLUSION With the current growth rate of AI tools, it is expected that many applications in medical imaging will continue to develop into more accurate, less expensive and more readily available versions, moving from the bench to the bedside. The hope is that, alongside efficiency and increased patient throughput, patient-centred care and precision medicine will find their way in, so that we deliver not only a faster, safer, seamless clinical service but also one that has patients at its heart. IMPACT FOR PRACTICE AI is already reaching clinical practice in many forms, and its presence will continue to increase over the short- and long-term future.
Radiographers must learn to work with AI, embracing it and maximising the positive outcomes from this new technology.
Affiliation(s)
- M Pergola: American Society of Radiologic Technologists, NM, USA
- N Woznitza: University College London Hospitals, UK; Canterbury Christ Church University, UK
- M Hardy: University of Bradford, Bradford, UK
11. Liu X, Li KW, Yang R, Geng LS. Review of Deep Learning Based Automatic Segmentation for Lung Cancer Radiotherapy. Front Oncol 2021; 11:717039. PMID: 34336704; PMCID: PMC8323481; DOI: 10.3389/fonc.2021.717039.
Abstract
Lung cancer is the leading cause of cancer-related mortality for both males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the tissues near the targets, the so-called organs-at-risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists, so it is crucial to develop automatic image segmentation to relieve them of this contouring work. Currently, the atlas-based automatic segmentation technique is commonly used in clinical routine. However, this technique depends heavily on the similarity between the atlas and the image being segmented. With significant advances made in computer vision, deep learning, as a part of artificial intelligence, is attracting increasing attention in medical image automatic segmentation. In this article, we review deep learning-based automatic segmentation techniques related to lung cancer and compare them with the atlas-based technique. At present, the auto-segmentation of OARs with relatively large volumes, such as the lung and heart, outperforms that of organs with small volumes, such as the esophagus. The average Dice similarity coefficients (DSCs) of the lung, heart and liver are over 0.9, and the best DSC for the spinal cord reaches 0.9. However, the DSC of the esophagus ranges between 0.71 and 0.87, with uneven performance. For the gross tumor volume, the average DSC is below 0.8. Although deep learning-based automatic segmentation techniques show significant superiority over manual segmentation in many respects, various issues still need to be solved. We discuss potential issues in deep learning-based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design.
Clinical limitations and future research directions of deep learning based automatic segmentation were discussed as well.
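The DSC figures quoted in this abstract can be made concrete with a short sketch. The snippet below is an illustrative NumPy implementation of the Dice similarity coefficient for binary segmentation masks, DSC = 2|A ∩ B| / (|A| + |B|); the function name and the toy masks are assumptions for demonstration, not taken from the reviewed papers.

```python
import numpy as np

def dice_similarity(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks: 4 predicted voxels, 4 true voxels, 2 overlapping
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True    # predicted contour
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 0:2] = True   # ground-truth contour
print(dice_similarity(pred, truth))  # 2*2 / (4+4) = 0.5
```

A DSC of 1.0 means perfect overlap, which puts the reported values (above 0.9 for lung and heart, below 0.8 for gross tumor volume) in perspective.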
Collapse
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing, China
| | - Kai-Wen Li
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
| | - Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
| | - Li-Sheng Geng
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing, China
- School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, China
| |
Collapse
|
12
|
Bosmans H, Zanca F, Gelaude F. Procurement, commissioning and QA of AI based solutions: An MPE's perspective on introducing AI in clinical practice. Phys Med 2021; 83:257-263. [PMID: 33984579 DOI: 10.1016/j.ejmp.2021.04.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 03/24/2021] [Accepted: 04/06/2021] [Indexed: 12/11/2022] Open
Abstract
PURPOSE In this study, we propose a framework to help the MPE take up a unique and important role in the introduction of AI solutions into clinical practice, in particular at procurement, acceptance, commissioning and QA. MATERIAL AND METHODS The steps for introducing medical radiological equipment in a hospital setting were extrapolated to AI tools. A literature review and in-house experience were added to prepare similar, yet dedicated, test methods. RESULTS Procurement starts from the clinical cases to be solved and is usually a complex process with many stakeholders and possibly many candidate AI solutions; specific KPIs and metrics need to be defined. Acceptance testing follows, to verify the installation and test critical exams. Commissioning should test the suitability of the AI tool for its intended use in the local institution. Results may be predicted from peer-reviewed papers that cover representative populations; if these are not available, local data sets can be prepared to assess the KPIs, or 'virtual clinical trials' could be used to create large, simulated test data sets. Quality assurance must be performed periodically to verify that the KPIs remain stable, especially when the software is upscaled or upgraded, and as soon as self-learning AI tools enter medical practice. DISCUSSION MPEs are well placed to act as a bridge between the manufacturer and the medical team and to help from procurement up to reporting to the management board. More work is needed to establish consolidated test protocols.
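The periodic QA step this abstract describes amounts to comparing a routinely measured KPI against its commissioning baseline. The sketch below is a hypothetical illustration of that check; the function name, the tolerance value, and the example KPI numbers are assumptions, not part of the proposed framework.

```python
def qa_check(baseline_kpi, current_kpi, tolerance=0.05):
    """Flag an AI tool for review when a periodically measured KPI
    drifts beyond a set tolerance from its commissioning baseline."""
    drift = abs(current_kpi - baseline_kpi)
    return {"drift": round(drift, 4), "pass": drift <= tolerance}

# Commissioning baseline DSC of 0.91; a routine QA run measures 0.84,
# a drift of 0.07, which exceeds the 0.05 tolerance and fails the check
print(qa_check(0.91, 0.84))

# A run measuring 0.90 stays within tolerance and passes
print(qa_check(0.91, 0.90))
```

In practice such a check would run after every software upscale or upgrade, and continuously for any self-learning tool, so that KPI drift is caught before it affects clinical use.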
Collapse
Affiliation(s)
- Hilde Bosmans
- University Hospitals of the KU Leuven, Leuven, Belgium.
| | | | | |
Collapse
|
13
|
Yakar M, Etiz D. Artificial intelligence in radiation oncology. Artif Intell Med Imaging 2021; 2:13-31. [DOI: 10.35711/aimi.v2.i2.13] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is a branch of computer science that tries to mimic human-like intelligence in machines, using computer software and algorithms to perform specific tasks without direct human input. Machine learning (ML) is a subfield of AI that uses data-driven algorithms which learn to imitate human behavior from previous examples or experience. Deep learning is an ML technique that uses deep neural networks to create a model. The growth and sharing of data, increasing computing power, and developments in AI have initiated a transformation in healthcare. Advances in radiation oncology have produced a significant amount of data that must be integrated with computed tomography imaging, dosimetry, and the imaging performed before each fraction. Each of the many algorithms used in radiation oncology has its own advantages and limitations, with different computational power requirements. The aim of this review is to summarize the radiotherapy (RT) process in workflow order, identifying specific areas in which quality and efficiency can be improved by ML. The RT process is divided into seven stages: patient evaluation, simulation, contouring, planning, quality control, treatment application, and patient follow-up. A systematic evaluation of the applicability, limitations, and advantages of AI algorithms is given for each stage.
Collapse
Affiliation(s)
- Melek Yakar
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
| | - Durmus Etiz
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
| |
Collapse
|