1
Tejani AS, Ng YS, Xi Y, Rayan JC. Understanding and Mitigating Bias in Imaging Artificial Intelligence. Radiographics 2024; 44:e230067. PMID: 38635456. DOI: 10.1148/rg.230067.
Abstract
Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with the potential to exacerbate health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference for a person or group owing to preexisting attitudes or beliefs, whether intentional or unintentional. In contrast, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results, or may exacerbate health inequities due to differing performance among patient populations. While inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment, such as automation bias, a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers on AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures that mitigate the impact of bias in imaging AI.
Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI. Published under a CC BY 4.0 license. Test Your Knowledge questions for this article are available in the supplemental material. See the invited commentary by Rouzrokh and Erickson in this issue.
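The distinction drawn above between statistical bias and differing performance among patient populations can be made concrete with a toy sketch. Everything below is invented for illustration and is not from the article: a model with a systematic measurement offset, and a classifier whose sensitivity differs between two hypothetical patient groups.

```python
import random

random.seed(0)

# Statistical bias: the gap between a model's expected output and the true
# value. Here a simulated model systematically under-measures lesion size.
true_size = 10.0
predictions = [true_size + random.gauss(-1.5, 0.5) for _ in range(1000)]
statistical_bias = sum(predictions) / len(predictions) - true_size
print(f"statistical bias: {statistical_bias:.2f}")  # close to the -1.5 offset

# Subgroup performance gap: one classifier, different sensitivity per group.
def sensitivity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return tp / positives if positives else 0.0

group_a = ([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 0])  # 3 of 4 lesions found
group_b = ([1, 1, 1, 1, 0, 0], [1, 1, 0, 0, 0, 0])  # 2 of 4 lesions found
print(sensitivity(*group_a), sensitivity(*group_b))  # 0.75 0.5
```

A model can have near-zero aggregate bias and still show a gap like this between groups, which is why the article treats the two notions separately.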
Affiliation(s)
- Ali S Tejani
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
- Yee Seng Ng
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
- Yin Xi
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
- Jesse C Rayan
- From the Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Dallas, TX 75390
2
Zhu Z, Ma X, Wang W, Dong S, Wang K, Wu L, Luo G, Wang G, Li S. Boosting knowledge diversity, accuracy, and stability via tri-enhanced distillation for domain continual medical image segmentation. Med Image Anal 2024; 94:103112. PMID: 38401270. DOI: 10.1016/j.media.2024.103112.
Abstract
Domain continual medical image segmentation plays a crucial role in clinical settings. This approach enables segmentation models to continually learn from a sequential data stream across multiple domains. However, it faces the challenge of catastrophic forgetting. Existing methods based on knowledge distillation show potential to address this challenge via a three-stage process: distillation, transfer, and fusion. Yet, each stage presents its unique issues that, collectively, amplify the problem of catastrophic forgetting. To address these issues at each stage, we propose a tri-enhanced distillation framework. (1) Stochastic Knowledge Augmentation reduces redundancy in knowledge, thereby increasing both the diversity and volume of knowledge derived from the old network. (2) Adaptive Knowledge Transfer selectively captures critical information from the old knowledge, facilitating a more accurate knowledge transfer. (3) Global Uncertainty-Guided Fusion introduces a global uncertainty view of the dataset to fuse the old and new knowledge with reduced bias, promoting a more stable knowledge fusion. Our experimental results not only validate the feasibility of our approach, but also demonstrate its superior performance compared to state-of-the-art methods. We suggest that our innovative tri-enhanced distillation framework may establish a robust benchmark for domain continual medical image segmentation.
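For readers unfamiliar with the distillation signal that frameworks like this build on, here is a minimal sketch of vanilla per-pixel knowledge distillation: a temperature-softened KL divergence between the old (teacher) network's outputs and the new (student) network's outputs. The paper's tri-enhanced stages (stochastic knowledge augmentation, adaptive transfer, uncertainty-guided fusion) are not reproduced here; this is only the baseline loss they refine, with toy data.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over one pixel's class logits."""
    z = [l / temperature for l in logits]
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Mean per-pixel KL(teacher || student) with temperature softening."""
    total = 0.0
    for s_l, t_l in zip(student_logits, teacher_logits):
        t = softmax(t_l, temperature)
        s = softmax(s_l, temperature)
        total += sum(ti * (math.log(ti + 1e-12) - math.log(si + 1e-12))
                     for ti, si in zip(t, s))
    # Standard T^2 rescaling so gradients keep their magnitude as T grows.
    return total / len(teacher_logits) * temperature ** 2

random.seed(0)
# Toy "image": 16 pixels, 3 segmentation classes each.
teacher = [[random.gauss(0, 1) for _ in range(3)] for _ in range(16)]
student = [[random.gauss(0, 1) for _ in range(3)] for _ in range(16)]
print(distillation_loss(teacher, teacher))   # identical outputs → 0.0
print(distillation_loss(student, teacher) > 0)
```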
Affiliation(s)
- Zhanshi Zhu
- Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Xinghua Ma
- Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Wei Wang
- Faculty of Computing, Harbin Institute of Technology, Shenzhen, China
- Suyu Dong
- College of Computer and Control Engineering, Northeast Forestry University, Harbin, China
- Kuanquan Wang
- Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Lianming Wu
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Gongning Luo
- Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Guohua Wang
- College of Computer and Control Engineering, Northeast Forestry University, Harbin, China
- Shuo Li
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
3
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. Can Assoc Radiol J 2024; 75:226-244. PMID: 38251882. DOI: 10.1177/08465371231222229.
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
- Data Science Institute, American College of Radiology, Reston, VA, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, CA, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
- Tufts University Medical School, Boston, MA, USA
- American College of Radiology, Reston, VA, USA
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, SA, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
4
Dadras AA, Aichinger P. Deep Learning-Based Detection of Glottis Segmentation Failures. Bioengineering (Basel) 2024; 11:443. PMID: 38790311. PMCID: PMC11118004. DOI: 10.3390/bioengineering11050443.
Abstract
Medical image segmentation is crucial for clinical applications, but challenges persist due to noise and variability. In particular, accurate glottis segmentation from high-speed videos is vital for voice research and diagnostics. Manual searching for failed segmentations is labor-intensive, prompting interest in automated methods. This paper proposes the first deep learning approach for detecting faulty glottis segmentations. For this purpose, faulty segmentations are generated by applying both a poorly performing neural network and perturbation procedures to three public datasets. Heavy data augmentations are added to the input until the neural network's performance decreases to the desired mean intersection over union (IoU). Likewise, the perturbation procedure involves a series of image transformations applied to the original ground truth segmentations in a randomized manner. These data are then used to train a ResNet18 neural network with custom loss functions to predict the IoU scores of faulty segmentations. This value is then thresholded at a fixed IoU of 0.6 for classification, thereby achieving 88.27% classification accuracy with 91.54% specificity. Experimental results demonstrate the effectiveness of the presented approach. Contributions include: (i) a knowledge-driven perturbation procedure, (ii) a deep learning framework for scoring and detecting faulty glottis segmentations, and (iii) an evaluation of custom loss functions.
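The scoring-and-thresholding step described above can be sketched directly: compute intersection-over-union between a predicted mask and ground truth, and flag the segmentation as faulty below the fixed cutoff of 0.6. This is a simplified illustration with invented toy masks; the ResNet18 IoU regressor itself is not reproduced.

```python
def iou(mask_a, mask_b):
    """IoU of two masks given as sets of (row, col) pixels; 1.0 if both empty."""
    union = mask_a | mask_b
    if not union:
        return 1.0
    return len(mask_a & mask_b) / len(union)

def is_faulty(pred_mask, gt_mask, threshold=0.6):
    """Classify a segmentation as faulty when its IoU falls below the cutoff."""
    return iou(pred_mask, gt_mask) < threshold

gt = {(r, c) for r in range(2, 6) for c in range(2, 6)}    # 16-pixel square "glottis"
pred = {(r, c) for r in range(3, 6) for c in range(2, 6)}  # 12 pixels, inside gt
print(iou(pred, gt))        # 12 / 16 = 0.75
print(is_faulty(pred, gt))  # 0.75 >= 0.6 → False
```

In the paper the IoU fed to this threshold is the network's predicted score rather than one computed against ground truth, which is what makes the method usable when no ground truth exists.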
Affiliation(s)
- Philipp Aichinger
- Speech and Hearing Science Lab, Division of Phoniatrics-Logopedics, Department of Otorhinolaryngology, Medical University of Vienna, Währinger Gürtel 18-20, 1090 Vienna, Austria
5
Davis SE, Embí PJ, Matheny ME. Sustainable deployment of clinical prediction tools-a 360° approach to model maintenance. J Am Med Inform Assoc 2024; 31:1195-1198. PMID: 38422379. PMCID: PMC11031208. DOI: 10.1093/jamia/ocae036.
Abstract
BACKGROUND As the enthusiasm for integrating artificial intelligence (AI) into clinical care grows, so has our understanding of the challenges associated with deploying impactful and sustainable clinical AI models. Complex dataset shifts resulting from evolving clinical environments strain the longevity of AI models as predictive accuracy and associated utility deteriorate over time. OBJECTIVE Responsible practice thus necessitates the lifecycle of AI models be extended to include ongoing monitoring and maintenance strategies within health system algorithmovigilance programs. We describe a framework encompassing a 360° continuum of preventive, preemptive, responsive, and reactive approaches to address model monitoring and maintenance from critically different angles. DISCUSSION We describe the complementary advantages and limitations of these four approaches and highlight the importance of such a coordinated strategy to help ensure the promise of clinical AI is not short-lived.
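As a rough illustration of the "responsive" slice of such a monitoring program, one minimal sketch (details assumed, not from the paper) compares a model's accuracy on recent cases against a deployment-time baseline window and raises a flag when performance degrades past a margin:

```python
def performance_drift(correct_flags, baseline_n=100, recent_n=50, margin=0.10):
    """correct_flags: 1/0 per case, oldest first.

    Returns (baseline accuracy, recent accuracy, drift flag); the flag fires
    when recent accuracy drops more than `margin` below the baseline.
    """
    baseline = sum(correct_flags[:baseline_n]) / baseline_n
    recent = sum(correct_flags[-recent_n:]) / recent_n
    return baseline, recent, (baseline - recent) > margin

early = [1] * 90 + [0] * 10   # ~90% accuracy around deployment
late = [1] * 35 + [0] * 15    # dataset shift degrades recent cases to 70%
baseline, recent, drifted = performance_drift(early + late)
print(baseline, recent, drifted)  # 0.9 0.7 True
```

A real algorithmovigilance program would use clinically meaningful metrics and statistical tests rather than a fixed margin, but the shape of the check is the same.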
Affiliation(s)
- Sharon E Davis
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Peter J Embí
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Michael E Matheny
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Geriatric Research, Education, and Clinical Care, Tennessee Valley Healthcare System VA Medical Center, Veterans Health Administration, Nashville, TN 37212, United States
6
Nomura Y, Hanaoka S, Hayashi N, Yoshikawa T, Koshino S, Sato C, Tatsuta M, Tanaka Y, Kano S, Nakaya M, Inui S, Kusakabe M, Nakao T, Miki S, Watadani T, Nakaoka R, Shimizu A, Abe O. Performance changes due to differences among annotating radiologists for training data in computerized lesion detection. Int J Comput Assist Radiol Surg 2024. PMID: 38625446. DOI: 10.1007/s11548-024-03136-9.
Abstract
PURPOSE The quality and bias of annotations by annotators (e.g., radiologists) affect the performance changes in computer-aided detection (CAD) software using machine learning. We hypothesized that the difference in the years of experience in image interpretation among radiologists contributes to annotation variability. In this study, we focused on how the performance of CAD software changes with retraining by incorporating cases annotated by radiologists with varying experience. METHODS We used two types of CAD software for lung nodule detection in chest computed tomography images and cerebral aneurysm detection in magnetic resonance angiography images. Twelve radiologists with different years of experience independently annotated the lesions, and the performance changes were investigated by repeating the retraining of the CAD software twice, with the addition of cases annotated by each radiologist. Additionally, we investigated the effects of retraining using integrated annotations from multiple radiologists. RESULTS The performance of the CAD software after retraining differed among annotating radiologists. In some cases, the performance was degraded compared to that of the initial software. Retraining using integrated annotations showed different performance trends depending on the target CAD software, notably in cerebral aneurysm detection, where the performance decreased compared to using annotations from a single radiologist. CONCLUSIONS Although the performance of the CAD software after retraining varied among the annotating radiologists, no direct correlation with their experience was found. The performance trends differed according to the type of CAD software used when integrated annotations from multiple radiologists were used.
Affiliation(s)
- Yukihiro Nomura
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Saori Koshino
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Chiaki Sato
- Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, Tokyo, Japan
- Momoko Tatsuta
- Department of Diagnostic Radiology, Kitasato University Hospital, Sagamihara, Kanagawa, Japan
- Yuya Tanaka
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shintaro Kano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Moto Nakaya
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shohei Inui
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryusuke Nakaoka
- Division of Medical Devices, National Institute of Health Sciences, Kawasaki, Kanagawa, Japan
- Akinobu Shimizu
- Institute of Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
7
Davis MA, Wu O, Ikuta I, Jordan JE, Johnson MH, Quigley E. Understanding Bias in Artificial Intelligence: A Practice Perspective. AJNR Am J Neuroradiol 2024; 45:371-373. PMID: 38123951. DOI: 10.3174/ajnr.a8070.
Abstract
In the fall of 2021, several experts in this space delivered a webinar hosted by the American Society of Neuroradiology (ASNR) Diversity and Inclusion Committee, focused on expanding the understanding of bias in artificial intelligence through a health equity lens, and provided key concepts for neuroradiologists to approach the evaluation of these tools. In this perspective, we distill key parts of this discussion, including why this topic is important to neuroradiologists, and lend insight into how neuroradiologists can develop a framework to assess health equity-related bias in artificial intelligence tools. In addition, we provide examples of clinical workflow implementation of these tools so that we can begin to see how artificial intelligence tools will impact discourse on equitable radiologic care. As continuous learners, we must be engaged in new and rapidly evolving technologies that emerge in our field. The Diversity and Inclusion Committee of the ASNR has addressed this subject matter through its programming content revolving around health equity in neuroradiologic advances.
Affiliation(s)
- Melissa A Davis
- From Yale University (M.A.D., M.H.J.), New Haven, Connecticut
- Ona Wu
- Massachusetts General Hospital (O.W.), Charlestown, Massachusetts
- Ichiro Ikuta
- Mayo Clinic Arizona, Department of Radiology (I.I.), Phoenix, Arizona
- John E Jordan
- Stanford University School of Medicine (J.E.J.), Stanford, California
8
Kim C, Gadgil SU, DeGrave AJ, Omiye JA, Cai ZR, Daneshjou R, Lee SI. Transparent medical image AI via an image-text foundation model grounded in medical literature. Nat Med 2024; 30:1154-1165. PMID: 38627560. DOI: 10.1038/s41591-024-02887-x.
Abstract
Building trustworthy and transparent image-based medical artificial intelligence (AI) systems requires the ability to interrogate data and models at all stages of the development pipeline, from training models to post-deployment monitoring. Ideally, the data and associated AI systems could be described using terms already familiar to physicians, but this requires medical datasets densely annotated with semantically meaningful concepts. Here, we present a foundation model approach, named MONET (medical concept retriever), which learns how to connect medical images with text and densely scores images on concept presence to enable important tasks in medical AI development and deployment such as data auditing, model auditing and model interpretation. Dermatology provides a demanding use case for the versatility of MONET, due to the heterogeneity in diseases, skin tones and imaging modalities. We trained MONET on 105,550 dermatological images paired with natural language descriptions from a large collection of medical literature. MONET can accurately annotate concepts across dermatology images, as verified by board-certified dermatologists, competitively with supervised models built on previously concept-annotated dermatology datasets of clinical images. We demonstrate how MONET enables AI transparency across the entire AI system development pipeline, from building inherently interpretable models to dataset and model auditing, including a case study dissecting the results of an AI clinical trial.
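The concept-scoring mechanism that image-text models of this kind generally rely on can be sketched: images and concept texts are embedded in a shared space, and a concept-presence score is the similarity between an image embedding and the concept's text embedding. The vectors below are toy values chosen for illustration, not MONET's embeddings, and the concept names are stand-ins.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def concept_scores(image_emb, concept_embs):
    """Score one image embedding against named concept text embeddings."""
    return {name: cosine(image_emb, emb) for name, emb in concept_embs.items()}

concepts = {
    "erythema": [1.0, 0.1, 0.0],
    "ulceration": [0.0, 1.0, 0.2],
}
image = [0.9, 0.2, 0.1]  # toy image embedding, deliberately close to "erythema"
scores = concept_scores(image, concepts)
best = max(scores, key=scores.get)
print(best)  # erythema scores highest for this toy vector
```

Dense per-concept scores like these are what make the downstream auditing tasks (sorting a dataset by a concept, or checking which concepts drive a model's errors) tractable.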
Affiliation(s)
- Chanwoo Kim
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Soham U Gadgil
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Alex J DeGrave
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Medical Scientist Training Program, University of Washington, Seattle, WA, USA
- Jesutofunmi A Omiye
- Department of Dermatology, Stanford School of Medicine, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford School of Medicine, Stanford, CA, USA
- Zhuo Ran Cai
- Program for Clinical Research and Technology, Stanford University, Stanford, CA, USA
- Roxana Daneshjou
- Department of Dermatology, Stanford School of Medicine, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford School of Medicine, Stanford, CA, USA
- Su-In Lee
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
9
Moskalenko V, Kharchenko V. Resilience-aware MLOps for AI-based medical diagnostic system. Front Public Health 2024; 12:1342937. PMID: 38601490. PMCID: PMC11004236. DOI: 10.3389/fpubh.2024.1342937.
Abstract
Background The healthcare sector demands a higher degree of responsibility, trustworthiness, and accountability when implementing Artificial Intelligence (AI) systems. Machine learning operations (MLOps) for AI-based medical diagnostic systems are primarily focused on aspects such as data quality and confidentiality, bias reduction, model deployment, performance monitoring, and continuous improvement. However, so far, MLOps techniques do not take into account the need to provide resilience to disturbances such as adversarial attacks, including fault injections, and drift, including out-of-distribution data. This article proposes an MLOps methodology that incorporates the steps necessary to increase the resilience of an AI-based medical diagnostic system against various kinds of disruptive influences. Methods Post-hoc resilience optimization, post-hoc predictive uncertainty calibration, uncertainty monitoring, and graceful degradation are incorporated as additional stages in MLOps. To optimize the resilience of the AI-based medical diagnostic system, additional components in the form of adapters and meta-adapters are utilized. These components are fine-tuned during meta-training based on the results of adaptation to synthetic disturbances. Furthermore, an additional model is introduced for post-hoc calibration of predictive uncertainty. This model is trained using both in-distribution and out-of-distribution data to refine predictive confidence during the inference mode. Results A structure for resilience-aware MLOps for medical diagnostic systems is proposed. Experiments confirmed an increase in robustness and adaptation speed for a medical image recognition system during several intervals of the system's life cycle, attributable to the resilience optimization and uncertainty calibration stages. The experiments were performed on the DermaMNIST, BloodMNIST, and PathMNIST datasets, considering ResNet-18 as a representative of convolutional networks and MedViT-T as a representative of vision transformers. Notably, the transformers exhibited lower resilience than the convolutional networks, although this observation may be attributed to potential imperfections in the architecture of the adapters and meta-adapters. Conclusion The main novelty of the suggested resilience-aware MLOps methodology and structure lies in separating the activities of creating a basic model for normal operating conditions from those of ensuring its resilience and trustworthiness. This is significant for medical applications, as the developer of the basic model should devote more time to understanding the medical field and the diagnostic task at hand rather than specializing in system resilience. Resilience optimization increases robustness to disturbances and the speed of adaptation, while calibrated confidences enable recognition of a portion of unabsorbed disturbances to mitigate their impact, thereby enhancing trustworthiness.
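One common way to implement the post-hoc calibration of predictive uncertainty mentioned above is temperature scaling; the following is a hedged sketch under that assumption (the paper's adapter/meta-adapter machinery and its specific calibration model are not reproduced). Dividing logits by a temperature T > 1 softens an overconfident softmax without changing the predicted class.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax; temperature > 1 flattens the distribution."""
    z = [l / temperature for l in logits]
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

logits = [4.0, 1.0, 0.5]                         # overconfident raw logits
p_raw = softmax(logits)
p_calibrated = softmax(logits, temperature=2.0)  # T fitted on held-out data in practice
print(round(max(p_raw), 3), round(max(p_calibrated), 3))  # 0.926 0.716
```

In practice T is fitted on a held-out set (here including out-of-distribution data, per the paper) so that reported confidences match observed accuracy.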
Affiliation(s)
- Viacheslav Moskalenko
- Department of Computer Science, Faculty of Electronics and Information Technologies, Sumy State University, Sumy, Ukraine
- Vyacheslav Kharchenko
- Department of Computer Systems, Network and Cybersecurity, Faculty of Radio-Electronics, Computer Systems and Infocommunications, National Aerospace University “KhAI”, Kharkiv, Ukraine
10
Fraioli F, Albert N, Boellaard R, Galazzo IB, Brendel M, Buvat I, Castellaro M, Cecchin D, Fernandez PA, Guedj E, Hammers A, Kaplar Z, Morbelli S, Papp L, Shi K, Tolboom N, Traub-Weidinger T, Verger A, Van Weehaeghe D, Yakushev I, Barthel H. Perspectives of the European Association of Nuclear Medicine on the role of artificial intelligence (AI) in molecular brain imaging. Eur J Nucl Med Mol Imaging 2024; 51:1007-1011. PMID: 38097746. DOI: 10.1007/s00259-023-06553-1.
Affiliation(s)
- Francesco Fraioli
- Institute of Nuclear Medicine, University College London Hospitals, 5Th Floor UCH, 235 Euston Rd, London, NW1 2BU, UK
- Nathalie Albert
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location VUmc, Amsterdam, The Netherlands
- Matthias Brendel
- Department of Nuclear Medicine, Ludwig-Maximilians-University of Munich, Munich, Germany
- Irene Buvat
- Institut Curie - Inserm Laboratory of Translational Imaging in Oncology, Paris, France
- Marco Castellaro
- Department of Information Engineering, University-Hospital of Padova, Padua, Italy
- Diego Cecchin
- Nuclear Medicine Unit, Department of Medicine - DIMED, University-Hospital of Padova, Padua, Italy
- Pablo Aguiar Fernandez
- CIMUS, Universidade Santiago de Compostela & Nuclear Medicine Dept, Univ. Hospital IDIS, Santiago de Compostela, Spain
- Eric Guedj
- Département de Médecine Nucléaire, Aix Marseille Univ, APHM, CNRS, Centrale Marseille, Institut Fresnel, Hôpital de La Timone, CERIMED, Marseille, France
- Alexander Hammers
- School of Biomedical Engineering and Imaging Sciences, King's College London St Thomas' Hospital, London, SE1 7EH, UK
- Zoltan Kaplar
- Institute of Nuclear Medicine, University College London Hospitals, 5Th Floor UCH, 235 Euston Rd, London, NW1 2BU, UK
- Silvia Morbelli
- Nuclear Medicine Unit, AOU Città Della Salute E Della Scienza Di Torino, University of Turin, Turin, Italy
- Laszlo Papp
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Kuangyu Shi
- Lab for Artificial Intelligence and Translational Theranostic, Dept. of Nuclear Medicine, University of Bern, Bern, Switzerland
- Nelleke Tolboom
- Department of Radiology and Nuclear Medicine, Utrecht University Medical Center, Utrecht, The Netherlands
- Tatjana Traub-Weidinger
- Division of Nuclear Medicine, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Antoine Verger
- Department of Nuclear Medicine and Nancyclotep Imaging Platform, CHRU Nancy, Université de Lorraine, IADI, INSERM U1254, Nancy, France
- Donatienne Van Weehaeghe
- Department of Radiology and Nuclear Medicine, Ghent University Hospital, C. Heymanslaan 10, 9000, Ghent, Belgium
- Igor Yakushev
- Department of Nuclear Medicine, School of Medicine, Technical University of Munich, Munich, Germany
- Henryk Barthel
- Department of Nuclear Medicine, Leipzig University Medical Centre, Leipzig, Germany
11
Young A, Tan K, Tariq F, Jin MX, Bluestone AY. Rogue AI: Cautionary Cases in Neuroradiology and What We Can Learn From Them. Cureus 2024; 16:e56317. PMID: 38628986. PMCID: PMC11019475. DOI: 10.7759/cureus.56317.
Abstract
Introduction: In recent years, artificial intelligence (AI) in medical imaging has undergone unprecedented innovation and advancement, sparking a revolutionary transformation in healthcare. The field of radiology is particularly implicated, as clinical radiologists are expected to interpret an ever-increasing number of complex cases in record time. Machine learning software purchased by our institution is expected to help our radiologists come to a more prompt diagnosis by delivering point-of-care quantitative analysis of suspicious findings and streamlining clinical workflow. This paper explores AI's impact on neuroradiology, an area accounting for a substantial portion of recent radiology studies. We present a case series evaluating an AI software's performance in detecting neurovascular findings, highlighting five cases where AI interpretations differed from radiologists' assessments. Our study underscores common pitfalls of AI in the context of CT head angiograms, aiming to guide future AI algorithms.
Methods: We conducted a retrospective case series study at Stony Brook University Hospital, a large medical center in Stony Brook, New York, spanning from October 1, 2021 to December 31, 2021, analyzing 140 randomly sampled CT angiograms using AI software. This software assessed various neurovascular parameters, and AI findings were compared with neuroradiologists' interpretations. Five cases with divergent interpretations were selected for detailed analysis.
Results: Five representative cases in which AI findings were discordant with radiologists' interpretations are presented, with diagnoses including diffuse anoxic ischemic injury, cortical laminar necrosis, colloid cyst, right superficial temporal artery-to-middle cerebral artery (STA-MCA) bypass, and subacute bilateral subdural hematomas.
Discussion: The errors identified in our case series expose AI's limitations in radiology. Our case series reveals that AI's incorrect interpretations can stem from complexities in pathology, challenges in distinguishing densities, inability to identify artifacts, failure to recognize post-surgical changes in normal anatomy, sensitivity limitations, and insufficient pattern recognition. AI's potential for improvement lies in refining its algorithms to effectively recognize and differentiate pathologies. Incorporating more diverse training datasets, multimodal data, deep-reinforcement learning, clinical context, and real-time learning capabilities are some ways to improve AI's performance in the field of radiology.
Conclusion: Overall, it is apparent that AI applications in radiology have much room for improvement before becoming more widely integrated into clinical workflows. While AI demonstrates remarkable potential to aid in diagnosis and streamline workflows, our case series highlights common pitfalls that underscore the need for continuous improvement. By refining algorithms, incorporating diverse datasets, embracing multimodal information, and leveraging innovative machine learning strategies, AI's diagnostic accuracy can be significantly improved.
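The methods and results above boil down to a simple proportion: five discordant AI reads among 140 sampled CT angiograms. The abstract stops short of reporting that rate or its uncertainty; as a minimal sketch (my own illustration, not part of the study's analysis), the proportion and a Wilson score interval can be computed as follows:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Figures taken from the abstract: 5 discordant interpretations among 140 CT angiograms.
discordant, total = 5, 140
rate = discordant / total
lo, hi = wilson_interval(discordant, total)
print(f"discordance rate {rate:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

The Wilson interval is preferable to the naive normal approximation for small counts such as 5/140, since its bounds never fall outside [0, 1].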
Affiliation(s)
- Austin Young
- Department of Radiology, Stony Brook University Hospital, Stony Brook, USA
- Kevin Tan
- Department of Radiology, Stony Brook University Hospital, Stony Brook, USA
- Faiq Tariq
- Department of Radiology, Stony Brook University Hospital, Stony Brook, USA
- Michael X Jin
- Department of Radiology, Stony Brook University Hospital, Stony Brook, USA
12
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J Med Imaging Radiat Oncol 2024; 68:7-26. [PMID: 38259140 DOI: 10.1111/1754-9485.13612]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, Alabama, USA
- American College of Radiology Data Science Institute, Reston, Virginia, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, California, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montreal, Quebec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts, USA
- Tufts University Medical School, Boston, Massachusetts, USA
- Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Reston, Virginia, USA
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, South Australia, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia
13
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. J Am Coll Radiol 2024:S1546-1440(23)01020-7. [PMID: 38276923 DOI: 10.1016/j.jacr.2023.12.005]
Abstract
Artificial intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, Alabama; American College of Radiology Data Science Institute, Reston, Virginia
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, California; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts; Tufts University Medical School, Boston, Massachusetts; Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia; College of Medicine and Public Health, Flinders University, Adelaide, Australia
14
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. Insights Imaging 2024; 15:16. [PMID: 38246898 PMCID: PMC10800328 DOI: 10.1186/s13244-023-01541-3]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Key points:
• The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety.
• Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance.
• AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
- American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, CA, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
- Tufts University Medical School, Boston, MA, USA
- Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
15
Tripathi S, Tabari A, Mansur A, Dabbara H, Bridge CP, Daye D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics (Basel) 2024; 14:174. [PMID: 38248051 PMCID: PMC10814554 DOI: 10.3390/diagnostics14020174]
Abstract
Pancreatic cancer is a highly aggressive and difficult-to-detect cancer with a poor prognosis. Late diagnosis is common due to a lack of early symptoms, specific markers, and the challenging location of the pancreas. Imaging technologies have improved diagnosis, but there is still room for improvement in standardizing guidelines. Biopsies and histopathological analysis are challenging due to tumor heterogeneity. Artificial Intelligence (AI) revolutionizes healthcare by improving diagnosis, treatment, and patient care. AI algorithms can analyze medical images with precision, aiding in early disease detection. AI also plays a role in personalized medicine by analyzing patient data to tailor treatment plans. It streamlines administrative tasks, such as medical coding and documentation, and provides patient assistance through AI chatbots. However, challenges include data privacy, security, and ethical considerations. This review article focuses on the potential of AI in transforming pancreatic cancer care, offering improved diagnostics, personalized treatments, and operational efficiency, leading to better patient outcomes.
Affiliation(s)
- Satvik Tripathi
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
- Azadeh Tabari
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Arian Mansur
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- Harika Dabbara
- Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Christopher P. Bridge
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
- Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA
- Harvard Medical School, Boston, MA 02115, USA
16
Kazimierczak N, Kazimierczak W, Serafin Z, Nowicki P, Nożewski J, Janiszewska-Olszowska J. AI in Orthodontics: Revolutionizing Diagnostics and Treatment Planning-A Comprehensive Review. J Clin Med 2024; 13:344. [PMID: 38256478 PMCID: PMC10816993 DOI: 10.3390/jcm13020344]
Abstract
The advent of artificial intelligence (AI) in medicine has transformed various medical specialties, including orthodontics. AI has shown promising results in enhancing the accuracy of diagnoses, treatment planning, and predicting treatment outcomes. Its usage in orthodontic practices worldwide has increased with the availability of various AI applications and tools. This review explores the principles of AI, its applications in orthodontics, and its implementation in clinical practice. A comprehensive literature review was conducted, focusing on AI applications in dental diagnostics, cephalometric evaluation, skeletal age determination, temporomandibular joint (TMJ) evaluation, decision making, and patient telemonitoring. Due to study heterogeneity, no meta-analysis was possible. AI has demonstrated high efficacy in all these areas, but variations in performance and the need for manual supervision suggest caution in clinical settings. The complexity and unpredictability of AI algorithms call for cautious implementation and regular manual validation. Continuous AI learning, proper governance, and addressing privacy and ethical concerns are crucial for successful integration into orthodontic practice.
Affiliation(s)
- Natalia Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Wojciech Kazimierczak
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Zbigniew Serafin
- Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
- Paweł Nowicki
- Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
- Jakub Nożewski
- Department of Emergency Medicine, University Hospital No 2 in Bydgoszcz, Ujejskiego 75, 85-168 Bydgoszcz, Poland
17
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA. Radiol Artif Intell 2024; 6:e230513. [PMID: 38251899 PMCID: PMC10831521 DOI: 10.1148/ryai.230513]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513).
Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning
Published under a CC BY 4.0 license. ©The Author(s) 2024.
Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
- American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, CA, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
- Tufts University Medical School, Boston, MA, USA
- Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
18
Rodler S, Kopliku R, Ulrich D, Kaltenhauser A, Casuscelli J, Eismann L, Waidelich R, Buchner A, Butz A, Cacciamani GE, Stief CG, Westhofen T. Patients' Trust in Artificial Intelligence-based Decision-making for Localized Prostate Cancer: Results from a Prospective Trial. Eur Urol Focus 2023:S2405-4569(23)00237-7. [PMID: 37923632 DOI: 10.1016/j.euf.2023.10.020]
Abstract
BACKGROUND: Artificial intelligence (AI) has the potential to enhance diagnostic accuracy and improve treatment outcomes. However, AI integration into clinical workflows and patient perspectives remain unclear.
OBJECTIVE: To determine patients' trust in AI and their perception of urologists relying on AI, and future diagnostic and therapeutic AI applications for patients.
DESIGN, SETTING, AND PARTICIPANTS: A prospective trial was conducted involving patients who received diagnostic or therapeutic interventions for prostate cancer (PC).
INTERVENTION: Patients were asked to complete a survey before magnetic resonance imaging, prostate biopsy, or radical prostatectomy.
OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: The primary outcome was patient trust in AI. Secondary outcomes were the choice of AI in treatment settings and traits attributed to AI and urologists.
RESULTS AND LIMITATIONS: Data for 466 patients were analyzed. The cumulative affinity for technology was positively correlated with trust in AI (correlation coefficient 0.094; p = 0.04), whereas patient age, level of education, and subjective perception of illness were not (p > 0.05). The mean score (± standard deviation) for trust in capability was higher for physicians than for AI for responding in an individualized way when communicating a diagnosis (4.51 ± 0.76 vs 3.38 ± 1.07; mean difference [MD] 1.130, 95% confidence interval [CI] 1.010-1.250; t(924) = 18.52, p < 0.001; Cohen's d = 1.040) and for explaining information in an understandable way (4.57 ± vs 3.18 ± 1.09; MD 1.392, 95% CI 1.275-1.509; t(921) = 27.27, p < 0.001; Cohen's d = 1.216). Patients stated that they had higher trust in a diagnosis made by AI controlled by a physician versus AI not controlled by a physician (4.31 ± 0.88 vs 1.75 ± 0.93; MD 2.561, 95% CI 2.444-2.678; t(925) = 42.89, p < 0.001; Cohen's d = 2.818). AI-assisted physicians (66.74%) were preferred over physicians alone (29.61%), physicians controlled by AI (2.36%), and AI alone (0.64%) for treatment in the current clinical scenario.
CONCLUSIONS: Trust in future diagnostic and therapeutic AI-based treatment relies on optimal integration with urologists as the human-machine interface to leverage human and AI capabilities.
PATIENT SUMMARY: Artificial intelligence (AI) will play a role in diagnostic decisions in prostate cancer in the future. At present, patients prefer AI-assisted urologists over urologists alone, AI alone, and AI-controlled urologists. Specific traits of AI and urologists could be used to optimize diagnosis and treatment for patients with prostate cancer.
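The effect sizes reported above can be checked from the summary statistics alone. A minimal sketch, assuming two independent samples of n = 466 ratings each (an assumption for illustration; the study's ratings are paired, so its published Cohen's d of 1.040 uses a different formulation and is not reproduced here, although the mean difference, t statistic, and 95% CI come out close to the reported values):

```python
from math import sqrt

def two_sample_summary(m1: float, s1: float, n1: int,
                       m2: float, s2: float, n2: int):
    """Mean difference, pooled-SD Cohen's d, Student's t, and an approximate
    95% CI from summary statistics of two independent groups."""
    md = m1 - m2
    # Pooled standard deviation across the two groups
    sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = md / sp                      # Cohen's d, independent-samples form
    se = sp * sqrt(1 / n1 + 1 / n2)  # standard error of the mean difference
    t = md / se
    ci = (md - 1.96 * se, md + 1.96 * se)  # normal approximation to the 95% CI
    return md, d, t, ci

# Trust ratings for individualized communication of a diagnosis:
# physicians 4.51 +/- 0.76 vs AI 3.38 +/- 1.07, n = 466 patients.
md, d, t, ci = two_sample_summary(4.51, 0.76, 466, 3.38, 1.07, 466)
print(f"MD {md:.3f}, t {t:.1f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}, d {d:.2f}")
```

With these inputs the mean difference is exactly 1.130 and the CI and t statistic land near the published 1.010-1.250 and 18.52, which is a useful sanity check on such summary tables even when the exact design (paired vs independent) differs.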
Affiliation(s)
- Severin Rodler
- Department of Urology, LMU University Hospital, Munich, Germany; USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Rega Kopliku
- Department of Urology, LMU University Hospital, Munich, Germany
- Daniel Ulrich
- Department of Informatics, Ludwig-Maximilians-Universität München, Munich, Germany
- Annika Kaltenhauser
- Department of Informatics, Ludwig-Maximilians-Universität München, Munich, Germany
- Lennert Eismann
- Department of Urology, LMU University Hospital, Munich, Germany
- Andreas Butz
- Department of Informatics, Ludwig-Maximilians-Universität München, Munich, Germany
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Thilo Westhofen
- Department of Urology, LMU University Hospital, Munich, Germany
19
Anudjo MNK, Vitale C, Elshami W, Hancock A, Adeleke S, Franklin JM, Akudjedu TN. Considerations for environmental sustainability in clinical radiology and radiotherapy practice: A systematic literature review and recommendations for a greener practice. Radiography (Lond) 2023; 29:1077-1092. [PMID: 37757675 DOI: 10.1016/j.radi.2023.09.006]
Abstract
INTRODUCTION: Environmental sustainability (ES) in healthcare is an important current challenge in the wider context of reducing the environmental impacts of human activity. Identifying key routes to making clinical radiology and radiotherapy (CRR) practice more environmentally sustainable will provide a framework for delivering greener clinical services. This study sought to explore and integrate current evidence regarding ES in CRR departments, to provide a comprehensive guide for greener practice, education, and research.
METHODS: A systematic literature search and review of studies of diverse evidence, including qualitative, quantitative, and mixed-methods approaches, was completed across six databases. The Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines and the Quality Assessment Tool for Studies with Diverse Designs (QATSDD) were used to assess the included studies. A result-based convergent data synthesis approach was employed to integrate the study findings.
RESULTS: A total of 162 articles were identified. After applying predefined exclusion criteria, fourteen articles were eligible. Three themes emerged as potentially important areas of CRR practice that contribute to environmental footprint: energy consumption and data storage practices; usage of clinical consumables and waste management practices; and CRR activities related to staff and patient travel.
CONCLUSIONS: Key components of CRR practice that influence environmental impact were identified, which could serve as a framework for exploring greener practice interventions. Widening the scope of research, education and awareness is imperative to providing a holistic appreciation of the environmental burden of healthcare.
IMPLICATIONS FOR PRACTICE: Encouraging eco-friendly travelling options and leveraging artificial intelligence (AI) and CRR-specific policies to optimise utilisation of resources such as energy and radiopharmaceuticals are recommended for a greener practice.
Affiliation(s)
- M N K Anudjo
- Institute of Medical Imaging & Visualisation, Department of Medical Science & Public Health, Faculty of Health & Social Sciences, Bournemouth University, UK
- C Vitale
- Institute of Medical Imaging & Visualisation, Department of Medical Science & Public Health, Faculty of Health & Social Sciences, Bournemouth University, UK; IRCCS San Raffaele Hospital, Milan, Italy
- W Elshami
- Department of Medical Diagnostic Imaging, College of Health Sciences, University of Sharjah, United Arab Emirates
- A Hancock
- Department of Medical Imaging, University of Exeter, Exeter, UK
- S Adeleke
- School of Cancer & Pharmaceutical Sciences, King's College London, Queen Square, London WC1N 3BG, UK; High Dimensional Neurology, Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, London, UK
- J M Franklin
- Institute of Medical Imaging & Visualisation, Department of Medical Science & Public Health, Faculty of Health & Social Sciences, Bournemouth University, UK
- T N Akudjedu
- Institute of Medical Imaging & Visualisation, Department of Medical Science & Public Health, Faculty of Health & Social Sciences, Bournemouth University, UK
20
Najafi A, Cazzato RL, Meyer BC, Pereira PL, Alberich A, López A, Ronot M, Fritz J, Maas M, Benson S, Haage P, Gomez Munoz F. CIRSE Position Paper on Artificial Intelligence in Interventional Radiology. Cardiovasc Intervent Radiol 2023; 46:1303-1307. [PMID: 37668690 DOI: 10.1007/s00270-023-03521-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/16/2023] [Accepted: 07/21/2023] [Indexed: 09/06/2023]
Abstract
Artificial intelligence (AI) has made tremendous advances in recent years and will presumably have a major impact on health care. These advancements are expected to affect different aspects of clinical medicine, leading to improvements in delivered care as well as optimization of available resources. As a modern specialty that relies extensively on imaging, interventional radiology (IR) is primed to be at the forefront of this development. This is especially relevant since IR is a highly advanced specialty that depends heavily on technology and is thus naturally susceptible to disruption by new technological developments. Disruption always means opportunity, and interventionalists must therefore understand AI and be a central part of decision-making when such systems are developed, trained, and implemented. Furthermore, interventional radiologists must not only embrace but lead the change that AI technology will allow. This CIRSE position paper discusses the status quo as well as current developments and challenges.
Affiliation(s)
- Arash Najafi
- Department of Radiology and Nuclear Medicine, Institut für Radiologie und Nuklearmedizin, Kantonsspital Winterthur, Brauerstrasse 15, 8401, Winterthur, Switzerland.
- Roberto Luigi Cazzato
- Department of Interventional Radiology, University Hospital of Strasbourg, Strasbourg, France
- Bernhard C Meyer
- Department of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany
- Philippe L Pereira
- Center of Radiology, Minimally Invasive Therapies and Nuclear Medicine, SLK-Kliniken GmbH, Academic Hospital of Ruprecht-Karls-University, Heidelberg, Germany
- APL Prof. Faculty of Eberhards-Karls-University, Tübingen, Germany
- Faculty of Danube Private University, Krems, Austria
- Angel Alberich
- Quantitative Imaging Biomarkers in Medicine, Quibim SL, Valencia, Spain
- Antonio López
- Medical Informatics and Radiology Department, Hospital Clinic de Barcelona, Barcelona, Spain
- Maxime Ronot
- Université Paris Cité, CRI, Paris, France
- Service de Radiologie, Hôpital Beaujon APHP Nord, Clichy, France
- Jan Fritz
- Department of Radiology, NYU Grossman School of Medicine, New York, USA
- Monique Maas
- Antoni van Leeuwenhoek-Netherlands Cancer Institute, Amsterdam, The Netherlands
- Sean Benson
- Antoni van Leeuwenhoek-Netherlands Cancer Institute, Amsterdam, The Netherlands
- Patrick Haage
- Zentrum für Radiologie, HELIOS Universitätsklinikum Wuppertal, Wuppertal, Germany
- Fernando Gomez Munoz
- Antoni van Leeuwenhoek-Netherlands Cancer Institute, Amsterdam, The Netherlands
- Hospital Universitari i Politecnic La Fe, Valencia, Spain

21
Mello-Thoms C, Mello CAB. Clinical applications of artificial intelligence in radiology. Br J Radiol 2023; 96:20221031. [PMID: 37099398 PMCID: PMC10546456 DOI: 10.1259/bjr.20221031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/03/2022] [Revised: 03/28/2023] [Accepted: 03/28/2023] [Indexed: 04/27/2023]
Abstract
The rapid growth of medical imaging has placed increasing demands on radiologists. In this scenario, artificial intelligence (AI) has become an attractive partner, one that may complement case interpretation and may aid in various non-interpretive aspects of work in the radiological clinic. In this review, we discuss interpretive and non-interpretive uses of AI in clinical practice, and report on the barriers to AI's adoption in the clinic. We show that AI currently has modest to moderate penetration in clinical practice, with many radiologists still unconvinced of its value and the return on its investment. Moreover, we discuss radiologists' liability regarding AI decisions, and explain how there is currently no regulation to guide the implementation of explainable AI or of self-learning algorithms.
Affiliation(s)
- Carlos A B Mello
- Centro de Informática, Universidade Federal de Pernambuco, Recife, Brazil

22
Abbasi N, Lacson R, Kapoor N, Licaros A, Guenette JP, Burk KS, Hammer M, Desai S, Eappen S, Saini S, Khorasani R. Development and External Validation of an Artificial Intelligence Model for Identifying Radiology Reports Containing Recommendations for Additional Imaging. AJR Am J Roentgenol 2023; 221:377-385. [PMID: 37073901 DOI: 10.2214/ajr.23.29120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 04/20/2023]
Abstract
BACKGROUND. Reported rates of recommendations for additional imaging (RAIs) in radiology reports are low. Bidirectional encoder representations from transformers (BERT), a deep learning model pretrained to understand language context and ambiguity, has potential for identifying RAIs and thereby assisting large-scale quality improvement efforts. OBJECTIVE. The purpose of this study was to develop and externally validate an artificial intelligence (AI)-based model for identifying radiology reports containing RAIs. METHODS. This retrospective study was performed at a multisite health center. A total of 6300 radiology reports generated at one site from January 1, 2015, to June 30, 2021, were randomly selected and split in a 4:1 ratio to create training (n = 5040) and test (n = 1260) sets. A total of 1260 reports generated at the center's other sites (including academic and community hospitals) from April 1 to April 30, 2022, were randomly selected as an external validation group. Referring practitioners and radiologists of varying subspecialties manually reviewed report impressions for the presence of RAIs. A BERT-based technique for identifying RAIs was developed by use of the training set. Performance of the BERT-based model and a previously developed traditional machine learning (TML) model was assessed in the test set. Finally, performance was assessed in the external validation set. The code for the BERT-based RAI model is publicly available. RESULTS. Among a total of 7419 unique patients (4133 women, 3286 men; mean age, 58.8 years), 10.0% of 7560 reports contained an RAI. In the test set, the BERT-based model had 94.4% precision, 98.5% recall, and an F1 score of 96.4%, whereas the TML model had 69.0% precision, 65.4% recall, and an F1 score of 67.2%; accuracy was greater for the BERT-based than for the TML model (99.2% vs 93.1%, p < .001).
In the external validation set, the BERT-based model had 99.2% precision, 91.6% recall, an F1 score of 95.2%, and 99.0% accuracy. CONCLUSION. The BERT-based AI model accurately identified reports with RAIs, outperforming the TML model. High performance in the external validation set suggests the potential for other health systems to adapt the model without requiring institution-specific training. CLINICAL IMPACT. The model could potentially be used for real-time EHR monitoring for RAIs and other improvement initiatives to help ensure timely performance of clinically necessary recommended follow-up.
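For readers less familiar with the reported metrics, precision, recall, F1 score, and accuracy all derive from the confusion-matrix counts of a binary classifier such as this RAI model. A minimal sketch with hypothetical counts (the study does not publish its confusion matrix):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall (sensitivity), F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts for a 1260-report test set in which ~10% of reports contain an RAI
p, r, f1, acc = classification_metrics(tp=124, fp=7, fn=2, tn=1127)
```

Note that with only ~10% positive reports, accuracy is dominated by the true negatives, which is why the abstract reports precision/recall/F1 alongside it.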
Affiliation(s)
- Nooshin Abbasi
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Ronilda Lacson
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Neena Kapoor
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Andro Licaros
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Jeffrey P Guenette
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Kristine Specht Burk
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Mark Hammer
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Sonali Desai
- Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Sunil Eappen
- Department of Anesthesiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Sanjay Saini
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Ramin Khorasani
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115

23
Chen Y, Taib AG, Darker IT, James JJ. Performance of a Breast Cancer Detection AI Algorithm Using the Personal Performance in Mammographic Screening Scheme. Radiology 2023; 308:e223299. [PMID: 37668522 DOI: 10.1148/radiol.223299] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 09/06/2023]
Abstract
Background The Personal Performance in Mammographic Screening (PERFORMS) scheme is used to assess reader performance. Whether this scheme can assess the performance of artificial intelligence (AI) algorithms is unknown. Purpose To compare the performance of human readers and a commercially available AI algorithm interpreting PERFORMS test sets. Materials and Methods In this retrospective study, two PERFORMS test sets, each consisting of 60 challenging cases, were evaluated by human readers between May 2018 and March 2021 and by an AI algorithm in 2022. The AI algorithm considered each breast separately, assigning a suspicion of malignancy score to detected features; performance was assessed using the highest score per breast. Performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), were calculated for AI and humans. The study was powered to detect a medium-sized effect (odds ratio, 3.5 or 0.29) for sensitivity. Results A total of 552 human readers interpreted both PERFORMS test sets, consisting of 161 normal breasts, 70 malignant breasts, and nine benign breasts. No difference was observed at the breast level between the AUC for AI and the AUC for human readers (0.93 and 0.88, respectively; P = .15). When using the developer's suggested recall score threshold, no difference was observed for AI versus human reader sensitivity (84% and 90%, respectively; P = .34), but the specificity of AI was higher (89%) than that of the human readers (76%, P = .003). However, it was not possible to demonstrate equivalence due to the size of the test sets. When using recall thresholds to match mean human reader performance (90% sensitivity, 76% specificity), AI showed no difference in performance, with a sensitivity of 91% (P = .73) and a specificity of 77% (P = .85).
Conclusion Diagnostic performance of AI was comparable with that of the average human reader when evaluating cases from two enriched test sets from the PERFORMS scheme. © RSNA, 2023 See also the editorial by Philpotts in this issue.
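The breast-level AUC comparison above rests on the standard interpretation of AUC: the probability that a randomly chosen positive case receives a higher suspicion score than a randomly chosen negative case (ties counting one half). A minimal sketch of that rank-based (Mann-Whitney) computation, with hypothetical suspicion scores:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the fraction of positive-negative score pairs ordered correctly,
    with ties counted as one half (the normalised Mann-Whitney U statistic)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical per-breast suspicion-of-malignancy scores
pos = [0.92, 0.80, 0.75, 0.60]   # malignant breasts
neg = [0.30, 0.45, 0.55, 0.70, 0.10]  # normal/benign breasts
auc = auc_mann_whitney(pos, neg)  # 19 of 20 pairs ordered correctly -> 0.95
```

This O(n*m) double loop is fine for illustration; production code would use a sorted-rank formulation or a library routine.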
Affiliation(s)
- Yan Chen; Adnan G Taib; Iain T Darker; Jonathan J James
- From the Department of Translational Medical Sciences, School of Medicine, University of Nottingham, Clinical Sciences Building, Nottingham City Hospital, City Hospital Campus, Hucknall Rd, Nottingham NG5 1PB, United Kingdom (Y.C., A.G.T., I.T.D.); and Nottingham Breast Institute, Nottingham University Hospitals NHS Trust, Nottingham, United Kingdom (J.J.J.)

24
Tripathi S, Gabriel K, Dheer S, Parajuli A, Augustin AI, Elahi A, Awan O, Dako F. Understanding Biases and Disparities in Radiology AI Datasets: A Review. J Am Coll Radiol 2023; 20:836-841. [PMID: 37454752 DOI: 10.1016/j.jacr.2023.06.015] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Received: 04/26/2023] [Accepted: 06/14/2023] [Indexed: 07/18/2023]
Abstract
Artificial intelligence (AI) continues to show great potential for disease detection and diagnosis on medical imaging, with increasingly high accuracy. An important component of AI model creation is dataset development for training, validation, and testing. Diverse and high-quality datasets are critical to ensure robust and unbiased AI models that maintain validity, especially in traditionally underserved populations globally. Yet publicly available datasets demonstrate problems with quality and inclusivity. In this literature review, the authors evaluate publicly available medical imaging datasets for demographic, geographic, genetic, and disease representation, or the lack thereof, and call for an increased emphasis on dataset development to maximize the impact of AI models.
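A representation audit of the kind this review calls for can begin as simply as tabulating subgroup shares, including cases with missing metadata, from a dataset manifest. A minimal sketch with a hypothetical manifest (field names invented for illustration):

```python
from collections import Counter

def representation_audit(records, attribute):
    """Share of each subgroup (including missing metadata) for one
    demographic attribute across a dataset manifest."""
    counts = Counter(r.get(attribute, "missing") for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical manifest rows for a public imaging dataset
manifest = [
    {"sex": "F", "site": "US"},
    {"sex": "M", "site": "US"},
    {"sex": "F"},  # geographic metadata absent for this case
    {"sex": "M", "site": "EU"},
]
site_shares = representation_audit(manifest, "site")  # {'US': 0.5, 'missing': 0.25, 'EU': 0.25}
```

Surfacing the "missing" share explicitly matters: datasets often look balanced only because the attribute of interest was never recorded.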
Affiliation(s)
- Satvik Tripathi
- Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania.
- Kyla Gabriel
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts
- Suhani Dheer
- Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania
- Aastha Parajuli
- Department of Radiology, Kathmandu University of School of Medical Sciences, Dhulikhel, Nepal
- Ameena Elahi
- Department of Information Services, University of Pennsylvania Health System, Philadelphia, Pennsylvania
- Omar Awan
- Department of Radiology, University of Maryland School of Medicine, Baltimore, Maryland
- Farouk Dako
- Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania

25
Najjar R. Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics (Basel) 2023; 13:2760. [PMID: 37685300 PMCID: PMC10487271 DOI: 10.3390/diagnostics13172760] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Received: 06/13/2023] [Revised: 08/01/2023] [Accepted: 08/10/2023] [Indexed: 09/10/2023]
Abstract
This comprehensive review unfolds a detailed narrative of Artificial Intelligence (AI) making its foray into radiology, a move that is catalysing transformational shifts in the healthcare landscape. It traces the evolution of radiology, from the initial discovery of X-rays to the application of machine learning and deep learning in modern medical image analysis. The primary focus of this review is to shed light on AI applications in radiology, elucidating their seminal roles in image segmentation, computer-aided diagnosis, predictive analytics, and workflow optimisation. A spotlight is cast on the profound impact of AI on diagnostic processes, personalised medicine, and clinical workflows, with empirical evidence derived from a series of case studies across multiple medical disciplines. However, the integration of AI in radiology is not devoid of challenges. The review ventures into the labyrinth of obstacles inherent to AI-driven radiology: data quality, the 'black box' enigma, infrastructural and technical complexities, and ethical implications. Peering into the future, the review contends that the road ahead for AI in radiology is paved with promising opportunities. It advocates for continuous research, embracing avant-garde imaging technologies, and fostering robust collaborations between radiologists and AI developers. The conclusion underlines the role of AI as a catalyst for change in radiology, a stance that is firmly rooted in sustained innovation, dynamic partnerships, and a steadfast commitment to ethical responsibility.
Affiliation(s)
- Reabal Najjar
- Canberra Health Services, Australian Capital Territory 2605, Australia

26
Ouyang CH, Chen CC, Tee YS, Lin WC, Kuo LW, Liao CA, Cheng CT, Liao CH. The Application of Design Thinking in Developing a Deep Learning Algorithm for Hip Fracture Detection. Bioengineering (Basel) 2023; 10:735. [PMID: 37370666 DOI: 10.3390/bioengineering10060735] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/23/2023] [Revised: 06/05/2023] [Accepted: 06/12/2023] [Indexed: 06/29/2023]
Abstract
(1) Background: Design thinking is a problem-solving approach that has been applied in various sectors, including healthcare and medical education. While deep learning (DL) algorithms can assist in clinical practice, integrating them into clinical scenarios can be challenging. This study aimed to use design thinking steps to develop a DL algorithm that accelerates deployment in clinical practice and improves its performance to meet clinical requirements. (2) Methods: We applied the design thinking process to interview clinical doctors and gain insights to develop and modify the DL algorithm to meet clinical scenarios. We also compared the performance of the algorithm before and after the integration of design thinking. (3) Results: After empathizing with clinical doctors and defining their needs, we identified the unmet need of five trauma surgeons as "how to reduce the misdiagnosis of femoral fracture on pelvic plain film (PXR) at the initial emergency visit". We collected 4235 PXRs obtained at our hospital from 2008 to 2016, of which 2146 (51%) showed a hip fracture. We developed hip fracture DL detection models based on the Xception convolutional neural network using these images. By incorporating design thinking, we improved the diagnostic accuracy from 0.91 (0.84-0.96) to 0.95 (0.93-0.97), the sensitivity from 0.97 (0.89-1.00) to 0.97 (0.94-0.99), and the specificity from 0.84 (0.71-0.93) to 0.93 (0.90-0.97). (4) Conclusions: In summary, this study demonstrates that design thinking can ensure that DL solutions developed for trauma care are user-centered and meet the needs of patients and healthcare providers.
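The parenthesised ranges quoted above are confidence intervals for proportions such as sensitivity and specificity. One common way to compute such an interval is the Wilson score method; a minimal sketch with hypothetical counts (the paper does not state which interval method it used):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical: 97 of 100 hip fractures detected (sensitivity 0.97)
lo, hi = wilson_interval(97, 100)
```

Unlike the simple normal approximation, the Wilson interval behaves sensibly for proportions near 0 or 1, which is exactly the regime of the sensitivities reported here.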
Affiliation(s)
- Chun-Hsiang Ouyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chih-Chi Chen
- Department of Rehabilitation and Physical Medicine, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Yu-San Tee
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Wei-Cheng Lin
- Department of Electrical Engineering, Chang Gung University, Taoyuan 33327, Taiwan
- Ling-Wei Kuo
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chien-An Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Chang Gung University, Linkou, Taoyuan 33328, Taiwan

27
Kim C, Gadgil SU, DeGrave AJ, Cai ZR, Daneshjou R, Lee SI. Fostering transparent medical image AI via an image-text foundation model grounded in medical literature. medRxiv 2023:2023.06.07.23291119. [PMID: 37398017 PMCID: PMC10312868 DOI: 10.1101/2023.06.07.23291119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 07/04/2023]
Abstract
Building trustworthy and transparent image-based medical AI systems requires the ability to interrogate data and models at all stages of the development pipeline: from training models to post-deployment monitoring. Ideally, the data and associated AI systems could be described using terms already familiar to physicians, but this requires medical datasets densely annotated with semantically meaningful concepts. Here, we present a foundation model approach, named MONET (Medical cONcept rETriever), which learns how to connect medical images with text and generates dense concept annotations to enable tasks in AI transparency from model auditing to model interpretation. Dermatology provides a demanding use case for the versatility of MONET, due to the heterogeneity in diseases, skin tones, and imaging modalities. We trained MONET on the basis of 105,550 dermatological images paired with natural language descriptions from a large collection of medical literature. MONET can accurately annotate concepts across dermatology images as verified by board-certified dermatologists, outperforming supervised models built on previously concept-annotated dermatology datasets. We demonstrate how MONET enables AI transparency across the entire AI development pipeline from dataset auditing to model auditing to building inherently interpretable models.
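MONET's concept annotation rests on scoring image-text similarity in a shared embedding space, in the style of CLIP-like contrastive models. A toy sketch with made-up 3-dimensional embeddings and invented concept names (the real model uses learned image and text encoders over high-dimensional vectors):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def annotate(image_emb, concept_embs):
    """Rank candidate text concepts for one image by embedding similarity."""
    scores = {name: cosine(image_emb, emb) for name, emb in concept_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy embeddings standing in for the image and text encoders
img = [0.9, 0.1, 0.0]
concepts = {"ulceration": [1.0, 0.0, 0.0], "pigmented lesion": [0.0, 1.0, 0.0]}
ranking = annotate(img, concepts)  # "ulceration" ranks first
```

Dense concept annotation then amounts to running this ranking for every image against a vocabulary of clinically meaningful concept phrases.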
Affiliation(s)
- Chanwoo Kim
- Paul G. Allen School of Computer Science and Engineering, University of Washington
- Soham U Gadgil
- Paul G. Allen School of Computer Science and Engineering, University of Washington
- Alex J DeGrave
- Paul G. Allen School of Computer Science and Engineering, University of Washington
- Medical Scientist Training Program, University of Washington
- Zhuo Ran Cai
- Program for Clinical Research and Technology, Stanford University
- Roxana Daneshjou
- Department of Dermatology, Stanford School of Medicine
- Department of Biomedical Data Science, Stanford School of Medicine
- Su-In Lee
- Paul G. Allen School of Computer Science and Engineering, University of Washington

28
Herpe G, Feydy A, D'Assignies G. Efficacy versus Effectiveness in Clinical Evaluation of Artificial Intelligence Algorithms for Medical Diagnosis: The Award Goes to Effectiveness. Radiology 2023; 307:e223132. [PMID: 37158721 DOI: 10.1148/radiol.223132] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/10/2023]
Affiliation(s)
- Guillaume Herpe
- Department of Radiology, University Hospital of Poitiers, 2 rue de la Milétrie, 86021 Poitiers, France
- Dactim Mis, Poitiers, France
- Incepto Medical, Paris, France
- Antoine Feydy
- Department of Radiology, Cochin Hospital, Assistance Publique des Hopitaux de Paris, Paris, France
- Gaspard D'Assignies
- Incepto Medical, Paris, France
- Department of Radiology, Le Havre Hospital, Le Havre, France

29
Gorenstein L, Soffer S, Apter S, Konen E, Klang E. AI in radiology: is it the time for randomized controlled trials? Eur Radiol 2023; 33:4223-4225. [PMID: 36597003 DOI: 10.1007/s00330-022-09381-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/30/2022] [Revised: 11/17/2022] [Accepted: 11/29/2022] [Indexed: 01/05/2023]
Affiliation(s)
- Larisa Gorenstein
- Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Rd, 52621, Ramat Gan, Israel.
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel.
- Shelly Soffer
- Internal Medicine B, Assuta Medical Center, Ashdod, Israel
- Ben-Gurion University of the Negev, Be'er Sheva, Israel
- Sara Apter
- Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Rd, 52621, Ramat Gan, Israel
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Eli Konen
- Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Rd, 52621, Ramat Gan, Israel
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Rd, 52621, Ramat Gan, Israel
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel

30
Rajpurkar P, Lungren MP. The Current and Future State of AI Interpretation of Medical Images. N Engl J Med 2023; 388:1981-1990. [PMID: 37224199 DOI: 10.1056/nejmra2301725] [Citation(s) in RCA: 72] [Impact Index Per Article: 72.0] [Indexed: 05/26/2023]
Affiliation(s)
- Pranav Rajpurkar; Matthew P Lungren
- From the Department of Biomedical Informatics, Harvard Medical School, Boston (P.R.); the Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, and the Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco - both in California (M.P.L.); and Microsoft, Redmond, Washington (M.P.L.)

31
Pham N, Hill V, Rauschecker A, Lui Y, Niogi S, Fillipi CG, Chang P, Zaharchuk G, Wintermark M. Critical Appraisal of Artificial Intelligence-Enabled Imaging Tools Using the Levels of Evidence System. AJNR Am J Neuroradiol 2023; 44:E21-E28. [PMID: 37080722 PMCID: PMC10171388 DOI: 10.3174/ajnr.a7850] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Received: 12/16/2022] [Accepted: 03/16/2023] [Indexed: 04/22/2023]
Abstract
Clinical adoption of an artificial intelligence-enabled imaging tool requires critical appraisal of its life cycle from development to implementation by using a systematic, standardized, and objective approach that can verify both its technical and clinical efficacy. Toward this concerted effort, the ASFNR/ASNR Artificial Intelligence Workshop Technology Working Group is proposing a hierarchical evaluation system based on the quality, type, and amount of scientific evidence that the artificial intelligence-enabled tool can demonstrate for each component of its life cycle. The current proposal is modeled after the levels of evidence in medicine, with the uppermost level of the hierarchy showing the strongest evidence for potential impact on patient care and health care outcomes. The intended goal of establishing an evidence-based evaluation system is to encourage transparency, foster an understanding of the creation of artificial intelligence tools and of the artificial intelligence decision-making process, and promote reporting of the relevant data on the efficacy of the artificial intelligence tools that are developed. The proposed system is an essential step in working toward a more formalized, clinically validated, and regulated framework for the safe and effective deployment of artificial intelligence imaging applications that will be used in clinical practice.
Affiliation(s)
- N Pham
- From the Department of Radiology (N.P., G.Z.), Stanford School of Medicine, Palo Alto, California
- V Hill
- Department of Radiology (V.H.), Northwestern University Feinberg School of Medicine, Chicago, Illinois
- A Rauschecker
- Department of Radiology (A.R.), University of California, San Francisco, San Francisco, California
- Y Lui
- Department of Radiology (Y.L.), NYU Grossman School of Medicine, New York, New York
- S Niogi
- Department of Radiology (S.N.), Weill Cornell Medicine, New York, New York
- C G Fillipi
- Department of Radiology (C.G.F.), Tufts University School of Medicine, Boston, Massachusetts
- P Chang
- Department of Radiology (P.C.), University of California, Irvine, Irvine, California
- G Zaharchuk
- From the Department of Radiology (N.P., G.Z.), Stanford School of Medicine, Palo Alto, California
- M Wintermark
- Department of Neuroradiology (M.W.), The University of Texas MD Anderson Cancer Center, Houston, Texas

32
Mese I. The imperative of a radiology AI deployment registry and the potential of ChatGPT. Clin Radiol 2023:S0009-9260(23)00140-X. [PMID: 37117047] [DOI: 10.1016/j.crad.2023.04.001]
Affiliation(s)
- I Mese: Health Sciences University, Erenkoy Mental Health and Neurology Training and Research Hospital, Istanbul, Turkey
33
Silkens MEWM, Ross J, Hall M, Scarbrough H, Rockall A. The time is now: making the case for a UK registry of deployment of radiology artificial intelligence applications. Clin Radiol 2023; 78:107-114. [PMID: 36639171] [DOI: 10.1016/j.crad.2022.09.132]
Abstract
Artificial intelligence (AI)-based healthcare applications (apps) are rapidly evolving, and radiology is a target specialty for their implementation. In this paper, we put the case for a national deployment registry to track the spread of AI apps into clinical use in radiology in the UK. By gathering data on the specific locations, purposes, and people associated with AI app deployment, such a registry would provide greater transparency on their spread in the radiology field. In combination with other regulatory and audit mechanisms, it would provide radiologists and patients with greater confidence and trust in AI apps. At the same time, coordination of this information would reduce costs for the National Health Service (NHS) by preventing duplication of piloting activities. This commentary discusses the need for a UK-wide registry for such apps, its benefits and risks, and critical success factors for its establishment. We conclude by noting that a critical window of opportunity has opened up for the development of a deployment registry, before the current pattern of localised clusters of activity turns into the widespread proliferation of AI apps across clinical practice.
Affiliation(s)
- M E W M Silkens: Centre for Healthcare Innovation Research, City University of London, London, UK
- J Ross: Department of Cancer and Surgery, Imperial College London, London, UK
- M Hall: Queen Elizabeth University Hospital, Glasgow, UK
- H Scarbrough: Centre for Healthcare Innovation Research, City University of London, London, UK
- A Rockall: Department of Cancer and Surgery, Imperial College London, London, UK
34
Brink JA, Hricak H. Radiology 2040. Radiology 2023; 306:69-72. [PMID: 36534608] [PMCID: PMC9792708] [DOI: 10.1148/radiol.222594]
Abstract
A translation of this article in Spanish is available in the supplement. Una traducción de este artículo en español está disponible en el suplemento.
Affiliation(s)
- James A. Brink: Department of Radiology, Massachusetts General Hospital, Brigham and Women’s Hospital, Boston, Massachusetts
- Hedvig Hricak: Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, Ste H-704, New York, NY 10065
35
Graziani M, Dutkiewicz L, Calvaresi D, Amorim JP, Yordanova K, Vered M, Nair R, Abreu PH, Blanke T, Pulignano V, Prior JO, Lauwaert L, Reijers W, Depeursinge A, Andrearczyk V, Müller H. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences. Artif Intell Rev 2023; 56:3473-3504. [PMID: 36092822] [PMCID: PMC9446618] [DOI: 10.1007/s10462-022-10256-8]
Abstract
Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of current AI solutions, can learn from data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools to debug models, justify their outcomes, and establish their safety, fairness, and reliability. This variety of tasks has led to inconsistencies in terminology: terms such as interpretable, explainable, and transparent are often used interchangeably in methodology papers. These words, however, convey different meanings and are "weighted" differently across domains, for example in the technical and the social sciences. In this paper, we propose an overarching terminology of interpretability of AI systems that can be referred to by technical developers as much as by the social sciences community, to pursue clarity and efficiency in the definition of regulations for ethical and reliable AI development. We show how our taxonomy and definition of interpretable AI differ from those in previous research and how they apply with high versatility to several domains and use cases, proposing a much-needed standard for communication among interdisciplinary areas of AI.
Affiliation(s)
- Mara Graziani: University of Applied Sciences of Western Switzerland (HES-SO Valais), Rue du Technopole 3, 3960 Sierre, Switzerland; Department of Computer Science, University of Geneva (UniGe), Route de Drize 7, 1227 Carouge, Switzerland
- Lidia Dutkiewicz: Centre for IT and IP Law, KU Leuven, Sint-Michielsstraat 6, 3000 Leuven, Belgium
- Davide Calvaresi: University of Applied Sciences of Western Switzerland (HES-SO Valais), Rue du Technopole 3, 3960 Sierre, Switzerland
- José Pereira Amorim: CISUC, Department of Informatics Engineering, University of Coimbra, Pólo II, Pinhal de Marrocos, 3030-790 Coimbra, Portugal; IPO-Porto Research Centre, Rua Dr. António Bernardino de Almeida, 4200-072 Porto, Portugal
- Katerina Yordanova: Centre for IT and IP Law, KU Leuven, Sint-Michielsstraat 6, 3000 Leuven, Belgium
- Mor Vered: Department of Data Science and AI, Monash University, Wellington Rd, Clayton, Melbourne VIC 3800, Australia
- Rahul Nair: IBM Research Europe, 3 Technology Campus, Dublin D15 HN66, Ireland
- Pedro Henriques Abreu: CISUC, Department of Informatics Engineering, University of Coimbra, Pólo II, Pinhal de Marrocos, 3030-790 Coimbra, Portugal
- Tobias Blanke: Institute of Logic, Language and Computation, University of Amsterdam, Spui 21, 1012WX Amsterdam, Netherlands
- Valeria Pulignano: Faculty of Social Science, Centre for Sociological Research, Parkstraat 45 bus, 3000 Leuven, Belgium
- John O Prior: Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital, Rue du Bugnon 46, 1011 Lausanne, Switzerland
- Lode Lauwaert: Institute of Philosophy, KU Leuven, Kardinaal Mercierplein 2, bus 3200, 3000 Leuven, Belgium
- Wessel Reijers: Robert Schuman Centre, European University Institute, Via Boccaccio 121, 50133 Florence, Italy
- Adrien Depeursinge: University of Applied Sciences of Western Switzerland (HES-SO Valais), Rue du Technopole 3, 3960 Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital, Rue du Bugnon 46, 1011 Lausanne, Switzerland
- Vincent Andrearczyk: University of Applied Sciences of Western Switzerland (HES-SO Valais), Rue du Technopole 3, 3960 Sierre, Switzerland
- Henning Müller: University of Applied Sciences of Western Switzerland (HES-SO Valais), Rue du Technopole 3, 3960 Sierre, Switzerland; Department of Radiology and Medical Informatics, University of Geneva (UniGe), Rue Gabrielle-Perret-Gentil 4, 1211 Geneva, Switzerland
36
Santos GNM, da Silva HEC, Figueiredo PTDS, Mesquita CRM, Melo NS, Stefani CM, Leite AF. The Introduction of Artificial Intelligence in Diagnostic Radiology Curricula: a Text and Opinion Systematic Review. International Journal of Artificial Intelligence in Education 2022. [DOI: 10.1007/s40593-022-00324-z]
37
Motwani A, Shukla PK, Pawar M. Ubiquitous and smart healthcare monitoring frameworks based on machine learning: A comprehensive review. Artif Intell Med 2022; 134:102431. [PMID: 36462891] [PMCID: PMC9595483] [DOI: 10.1016/j.artmed.2022.102431]
Abstract
During the COVID-19 pandemic, the patient care delivery paradigm rapidly shifted to remote technological solutions. Rising life expectancy and deaths due to chronic diseases (CDs) such as cancer, diabetes, and respiratory disease pose many challenges to healthcare. While the feasibility of Remote Patient Monitoring (RPM) with a Smart Healthcare Monitoring (SHM) framework was somewhat questionable before the COVID-19 pandemic, it is now a proven commodity and is on its way to becoming ubiquitous. More health organizations are adopting RPM to enable CD management in the absence of individual monitoring. Current studies on SHM have reviewed the applications of IoT and/or Machine Learning (ML) in the domain, their architecture, security, privacy, and other network-related issues. However, no study has analyzed the AI and ubiquitous computing advances in SHM frameworks. The objective of this research is to identify and map key technical concepts in the SHM framework. In this context, an interesting and meaningful classification of the research articles surveyed for this work is presented. The comprehensive and systematic review is based on the "Preferred Reporting Items for Systematic Review and Meta-Analysis" (PRISMA) approach. A total of 2,540 papers were screened from leading research archives from 2016 to March 2021, and finally, 50 articles were selected for review. The major advantages, developments, distinctive architectural structure, components, technical challenges, and possibilities in SHM are briefly discussed. A review of various recent cloud and fog computing based architectures, major ML implementation challenges, prospects, and future trends is also presented. The survey primarily encourages the data-driven predictive analytics aspects of healthcare and the development of ML models for health empowerment.
Affiliation(s)
- Anand Motwani: School of Computing Science & Engineering, VIT Bhopal University, Sehore (MP) 466114, India; Department of Computer Science & Engineering, University Institute of Technology, RGPV, Bhopal (MP) 462033, India
- Piyush Kumar Shukla: Department of Computer Science & Engineering, University Institute of Technology, RGPV, Bhopal (MP) 462033, India
- Mahesh Pawar: Department of Information Technology, University Institute of Technology, RGPV, Bhopal (MP) 462033, India
38
Brink L, Coombs LP, Kattil Veettil D, Kuchipudi K, Marella S, Schmidt K, Nair SS, Tilkin M, Treml C, Chang K, Kalpathy-Cramer J. ACR’s Connect and AI-LAB technical framework. JAMIA Open 2022; 5:ooac094. [PMID: 36380846] [PMCID: PMC9651971] [DOI: 10.1093/jamiaopen/ooac094]
Abstract
Objective: To develop a free, vendor-neutral software suite, the American College of Radiology (ACR) Connect, which serves as a platform for democratizing artificial intelligence (AI) for all individuals and institutions. Materials and Methods: Among its core capabilities, ACR Connect provides educational resources; tools for dataset annotation; model building and evaluation; and an interface for collaboration and federated learning across institutions without the need to move data off hospital premises. Results: The AI-LAB application within ACR Connect allows users to investigate AI models using their own local data while maintaining data security. The software enables non-technical users to participate in the evaluation and training of AI models as part of a larger, collaborative network. Discussion: Advancements in AI have transformed automated quantitative analysis for medical imaging. Despite significant progress in research, AI remains underutilized in clinical workflows. The success of AI model development depends critically on the synergy between physicians who can drive clinical direction, data scientists who can design effective algorithms, and the availability of high-quality datasets. ACR Connect and AI-LAB provide a way to perform external validation as well as collaborative, distributed training. Conclusion: In order to create a collaborative AI ecosystem across clinical and technical domains, the ACR developed a platform that enables non-technical users to participate in education and model development.
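The federated learning the abstract describes trains models across institutions without moving data off premises; the aggregation step in such schemes is commonly federated averaging (FedAvg). A minimal sketch of that aggregation, under the assumption that FedAvg-style weighting is used; this is illustrative and not ACR Connect's actual protocol:

```python
# Minimal federated-averaging (FedAvg) sketch: each site trains locally,
# then a coordinator averages model parameters weighted by each site's
# sample count. Illustrative only -- not ACR Connect's actual protocol.

def fed_avg(site_updates):
    """site_updates: list of (num_samples, weights) pairs, where
    weights is a flat list of floats representing a model."""
    total = sum(n for n, _ in site_updates)
    dim = len(site_updates[0][1])
    averaged = [0.0] * dim
    for n, weights in site_updates:
        for i, w in enumerate(weights):
            averaged[i] += (n / total) * w
    return averaged

# Example: two hospitals contributing different amounts of local data.
global_weights = fed_avg([(100, [1.0, 2.0]), (300, [3.0, 4.0])])
# weighted average: 0.25*[1, 2] + 0.75*[3, 4] = [2.5, 3.5]
```

Only the weight vectors and sample counts cross the institutional boundary; the imaging data itself stays on hospital premises, which is the design point the abstract emphasizes.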
Affiliation(s)
- Laura Brink, Laura P Coombs, Deepak Kattil Veettil, Kashyap Kuchipudi, Sailaja Marella, Kendall Schmidt, Sujith Surendran Nair, Michael Tilkin, Christopher Treml: Department of Information Technology, American College of Radiology, Reston, Virginia, USA
- Ken Chang: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA
- Jayashree Kalpathy-Cramer: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, USA; Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado, USA
39
White RD, Demirer M, Gupta V, Sebro RA, Kusumoto FM, Erdal BS. Pre-deployment assessment of an AI model to assist radiologists in chest X-ray detection and identification of lead-less implanted electronic devices for pre-MRI safety screening: realized implementation needs and proposed operational solutions. J Med Imaging (Bellingham) 2022; 9:054504. [PMID: 36310648] [PMCID: PMC9603740] [DOI: 10.1117/1.jmi.9.5.054504]
Abstract
Purpose: Chest X-ray (CXR) use in pre-MRI safety screening, such as for lead-less implanted electronic device (LLIED) recognition, is common. To assist CXR interpretation, we "pre-deployed" an artificial intelligence (AI) model to assess (1) accuracies in LLIED-type (and consequently safety-level) identification, (2) safety implications of LLIED nondetections or misidentifications, (3) infrastructural or workflow requirements, and (4) demands related to model adaptation to real-world conditions. Approach: A two-tier cascading methodology for LLIED detection/localization and identification on a frontal CXR was applied to evaluate the performance of the original nine-class AI model. With the unexpected early appearance of LLIED types during simulated real-world trialing, retraining of a newer 12-class version preceded retrialing. A zero-footprint (ZF) graphical user interface (GUI)/viewer with DICOM-based output was developed for inference-result display and adjudication, supporting end-user engagement and model continuous learning and/or modernization. Results: During model testing or trialing using both the nine-class and 12-class models, robust detection/localization was consistently 100%, with mAP 0.99 from fivefold cross-validation. Safety-level categorization was high during both testing (AUC ≥ 0.98 and ≥ 0.99, respectively) and trialing (accuracy 98% and 97%, respectively). LLIED-type identifications by the two models during testing (1) were 98.9% and 99.5% overall correct and (2) consistently showed AUC ≥ 0.92 (1.00 for 8/9 and 9/12 LLIED types, respectively). Pre-deployment trialing of both models demonstrated overall type-identification accuracies of 94.5% and 95%, respectively. Of the small number of misidentifications, none involved MRI-stringently conditional or MRI-unsafe types of LLIEDs. Optimized ZF GUI/viewer operations led to greater user-friendliness for radiologist engagement. Conclusions: Our LLIED-related AI methodology supports (1) 100% detection sensitivity, (2) high identification (including MRI-safety) accuracy, and (3) future model deployment with facilitated inference-result display and adjudication for ongoing model adaptation to future real-world experiences.
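The two-tier cascade the abstract describes (detect and localize candidate devices, then identify the type and map it to an MRI safety level) can be sketched as follows. The detector and classifier stubs, device names, and safety table are hypothetical placeholders, not the authors' trained models or their actual safety taxonomy:

```python
# Sketch of a two-tier cascade for LLIED screening on a frontal CXR:
# tier 1 detects/localizes candidate devices, tier 2 identifies the
# device type, and the type is mapped to an MRI safety category.
# All components below are hypothetical stand-ins.

SAFETY_LEVEL = {               # hypothetical type -> safety mapping
    "loop_recorder_A": "MRI-conditional",
    "leadless_pacer_B": "MRI-stringently-conditional",
}

def detect_devices(cxr):
    """Tier 1 stub: return candidate bounding boxes (x, y, w, h)."""
    return [(120, 240, 30, 18)]

def identify_type(cxr, box):
    """Tier 2 stub: classify the cropped region into a device type."""
    return "loop_recorder_A"

def screen(cxr):
    findings = []
    for box in detect_devices(cxr):
        device = identify_type(cxr, box)
        level = SAFETY_LEVEL.get(device, "review-required")
        findings.append({"box": box, "type": device, "safety": level})
    return findings

print(screen("chest_xray.dcm"))
```

Splitting detection from identification lets each tier be retrained independently, which is how the paper's nine-class model could be upgraded to twelve classes without touching the detector.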
Affiliation(s)
- Richard D. White, Mutlu Demirer, Vikash Gupta, Ronnie A. Sebro, Barbaros Selnur Erdal: Center for Augmented Intelligence in Imaging, Department of Radiology, Mayo Clinic, Jacksonville, Florida, United States
- Frederick M. Kusumoto: Department of Cardiovascular Medicine, Mayo Clinic, Jacksonville, Florida, United States
40
Hustinx R, Pruim J, Lassmann M, Visvikis D. An EANM position paper on the application of artificial intelligence in nuclear medicine. Eur J Nucl Med Mol Imaging 2022; 50:61-66. [PMID: 36006443] [DOI: 10.1007/s00259-022-05947-x]
Abstract
Artificial intelligence (AI) is entering the field of nuclear medicine, and it is likely here to stay. As a society, the EANM can and must play a central role in the use of AI in nuclear medicine. In this position paper, the EANM explains the preconditions for the implementation of AI in nuclear medicine and takes a position.
Affiliation(s)
- Roland Hustinx: Division of Nuclear Medicine and Oncological Imaging, University Hospital of Liège & GIGA-CRC in vivo Imaging, University of Liège, Liège, Belgium
- Jan Pruim: Medical Imaging Center, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Michael Lassmann: Department of Nuclear Medicine, University Hospital Würzburg, Würzburg, Germany
41
Machine Learning Model Drift: Predicting Diagnostic Imaging Follow-Up as a Case Example. J Am Coll Radiol 2022; 19:1162-1169. [PMID: 35981636] [DOI: 10.1016/j.jacr.2022.05.030]
Abstract
OBJECTIVE: To address model drift in a machine learning (ML) model for predicting diagnostic imaging follow-up using data augmentation with more recent data versus retraining new predictive models. METHODS: This institutional review board-approved retrospective study was conducted January 1, 2016, to December 31, 2020, at a large academic institution. A previously trained ML model had been trained on 1,000 radiology reports from 2016 (old data). An additional 1,385 randomly selected reports from 2019 to 2020 (new data) were annotated for follow-up recommendations and randomly divided into two sets: training (n = 900) and testing (n = 485). Support vector machine and random forest (RF) algorithms were constructed and trained using the 900 new-data reports plus old data (augmented data, new models) and using only new data (new data, new models). The 2016 baseline model was used as a comparator, both as is and after retraining with augmented data. Recall was compared with baseline using McNemar's test. RESULTS: Follow-up recommendations were contained in 11.3% of reports (157 of 1,385). The baseline model retrained with new data had precision = 0.83 and recall = 0.54, neither significantly different from baseline. A new RF model trained with augmented data had significantly better recall versus the baseline model (0.80 versus 0.66, P = .04) and comparable precision (0.90 versus 0.86). DISCUSSION: ML methods for monitoring follow-up recommendations in radiology reports suffer model drift over time. A newly developed RF model achieved better recall with comparable precision versus simply retraining the previously trained original model with augmented data. Thus, regularly assessing and updating these models using more recent historical data is necessary.
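McNemar's test, used in the study above to compare recall against the baseline, operates on the discordant pairs of a 2x2 table of paired predictions. A self-contained sketch with the continuity-corrected statistic; the counts in the example are illustrative, not the study's actual data:

```python
import math

def mcnemar(b, c):
    """Continuity-corrected McNemar's test on discordant pairs.
    b = cases model A got right and model B got wrong;
    c = cases model B got right and model A got wrong."""
    if b + c == 0:
        return 0.0, 1.0
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # p-value from the chi-square(1 df) survival function:
    # P(X > x) = erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative: among paired report-level predictions, the new model
# was correct on 14 reports the baseline missed, and wrong on 3 the
# baseline caught.
stat, p = mcnemar(14, 3)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```

Because concordant pairs carry no information about which model is better, only `b` and `c` enter the statistic; that is why the test suits paired comparisons like old-model-versus-new-model on the same test reports.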
42
Proceedings from the Society of Interventional Radiology Foundation Research Consensus Panel on Artificial Intelligence in Interventional Radiology: From Code to Bedside. J Vasc Interv Radiol 2022; 33:1113-1120. [PMID: 35871021] [DOI: 10.1016/j.jvir.2022.06.003]
Abstract
Artificial intelligence (AI)-based technologies are the most rapidly growing field of innovation in healthcare, with the promise of substantial improvements in the delivery of patient care across all disciplines of medicine. Recent advances in imaging technology, along with the marked expansion of readily available advanced health information and data, offer a unique opportunity for interventional radiology (IR) to reinvent itself as a data-driven specialty. Additionally, the growth of AI-based applications in diagnostic imaging is expected to have downstream effects on all image-guidance modalities. Therefore, the Society of Interventional Radiology Foundation called upon 13 key opinion leaders in the field of IR to develop research priorities for clinical applications of AI in IR. The objectives of the assembled research consensus panel were to assess the availability and applicability of AI for IR, estimate current needs and clinical use cases, and assemble a list of research priorities for the development of AI in IR. Individual panel members proposed consensus statements, and all participants voted to rank them according to their overall impact on IR. The results identify the top priorities for the IR research community and provide organizing principles for innovative academic-industrial research collaborations that will leverage both clinical expertise and cutting-edge technology to benefit patient care in IR.
43
Vela D, Sharp A, Zhang R, Nguyen T, Hoang A, Pianykh OS. Temporal quality degradation in AI models. Sci Rep 2022; 12:11654. [PMID: 35803963] [PMCID: PMC9270447] [DOI: 10.1038/s41598-022-15245-z]
Abstract
As AI models continue to advance into many real-life applications, their ability to maintain reliable quality over time becomes increasingly important. The principal challenge in this task stems from the very nature of current machine learning models, dependent on the data as it was at the time of training. In this study, we present the first analysis of AI “aging”: the complex, multifaceted phenomenon of AI model quality degradation as more time passes since the last model training cycle. Using datasets from four different industries (healthcare operations, transportation, finance, and weather) and four standard machine learning models, we identify and describe the main temporal degradation patterns. We also demonstrate the principal differences between temporal model degradation and related concepts that have been explored previously, such as data concept drift and continuous learning. Finally, we indicate potential causes of temporal degradation, and suggest approaches to detecting aging and reducing its impact.
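A common way to detect the aging the authors describe is to track a quality metric over successive time windows after deployment and flag when it drops a fixed margin below the level measured at training time. A minimal sketch; the window labels, accuracies, and margin are illustrative, not the study's data:

```python
# Monitor per-window accuracy after deployment and flag temporal
# degradation once performance falls a fixed margin below the
# baseline measured at the last training cycle. Numbers are toy data.

def flag_drift(baseline_acc, windowed_acc, margin=0.05):
    """Return the windows whose accuracy dropped more than `margin`
    below the baseline accuracy."""
    return [w for w, acc in windowed_acc if baseline_acc - acc > margin]

monthly = [("2022-01", 0.91), ("2022-02", 0.89),
           ("2022-03", 0.84), ("2022-04", 0.80)]
print(flag_drift(0.90, monthly))
```

A monitor like this distinguishes gradual aging (a slow slide across windows) from a sudden data-concept shift (a single sharp drop), which is the distinction the paper draws between temporal degradation and classic concept drift.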
Affiliation(s)
- Daniel Vela: Monterrey Institute of Technology and Higher Education, Monterrey, Mexico
- Richard Zhang: Massachusetts Institute of Technology, Cambridge, USA
- An Hoang: Whitehead Institute for Biomedical Research, Cambridge, USA
44
Garcia Santa Cruz B, Sölter J, Gomez-Giro G, Saraiva C, Sabate-Soler S, Modamio J, Barmpa K, Schwamborn JC, Hertel F, Jarazo J, Husch A. Generalising from conventional pipelines using deep learning in high-throughput screening workflows. Sci Rep 2022; 12:11465. [PMID: 35794231] [PMCID: PMC9259641] [DOI: 10.1038/s41598-022-15623-7]
Abstract
The study of complex diseases relies on large amounts of data to build models toward precision medicine. Such data acquisition is feasible in the context of high-throughput screening, in which the quality of the results relies on the accuracy of the image analysis. Although state-of-the-art solutions for image segmentation employ deep learning approaches, the high cost of manually generating ground truth labels for model training hampers the day-to-day application in experimental laboratories. Alternatively, traditional computer vision-based solutions do not need expensive labels for their implementation. Our work combines both approaches by training a deep learning network using weak training labels automatically generated with conventional computer vision methods. Our network surpasses the conventional segmentation quality by generalising beyond noisy labels, providing a 25% increase of mean intersection over union, and simultaneously reducing the development and inference times. Our solution was embedded into an easy-to-use graphical user interface that allows researchers to assess the predictions and correct potential inaccuracies with minimal human input. To demonstrate the feasibility of training a deep learning solution on a large dataset of noisy labels automatically generated by a conventional pipeline, we compared our solution against the common approach of training a model from a small manually curated dataset by several experts. Our work suggests that humans perform better in context interpretation, such as error assessment, while computers outperform in pixel-by-pixel fine segmentation. Such pipelines are illustrated with a case study on image segmentation for autophagy events. This work aims for better translation of new technologies to real-world settings in microscopy-image analysis.
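The segmentation quality metric quoted above, mean intersection over union, compares predicted masks against reference masks. A minimal sketch for binary masks; the masks below are toy examples, not the study's microscopy data:

```python
# Intersection over union (IoU) for binary segmentation masks, the
# metric the study improved by training on weakly labeled data.
# Masks are flat lists of 0/1 pixels; toy data for illustration.

def iou(pred, truth):
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def mean_iou(pairs):
    """Average IoU over a list of (pred, truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(iou(pred, truth))  # 2 overlapping pixels / 4 in the union = 0.5
```

Because IoU penalizes both over- and under-segmentation, it is a stricter score than pixel accuracy when the foreground is small, which is typical in screening images.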
Affiliation(s)
- Beatriz Garcia Santa Cruz: National Department of Neurosurgery, Centre Hospitalier de Luxembourg, 4, Rue Ernest Barblé, 1210 Luxembourg, Luxembourg; Interventional Neuroscience Group, Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 6, Avenue du Swing, 4367 Belvaux, Luxembourg
- Jan Sölter: Interventional Neuroscience Group, Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 6, Avenue du Swing, 4367 Belvaux, Luxembourg
- Gemma Gomez-Giro, Claudia Saraiva, Sonia Sabate-Soler, Jennifer Modamio, Kyriaki Barmpa, Jens Christian Schwamborn: Developmental and Cellular Biology, Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 6, Avenue du Swing, 4367 Belvaux, Luxembourg
- Frank Hertel: National Department of Neurosurgery, Centre Hospitalier de Luxembourg, 4, Rue Ernest Barblé, 1210 Luxembourg, Luxembourg; Interventional Neuroscience Group, Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 6, Avenue du Swing, 4367 Belvaux, Luxembourg
- Javier Jarazo: Developmental and Cellular Biology, Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 6, Avenue du Swing, 4367 Belvaux, Luxembourg; OrganoTherapeutics SARL, 6A, avenue des Hauts-Fourneaux, 4365 Esch-sur-Alzette, Luxembourg
- Andreas Husch: Interventional Neuroscience Group and Systems Control Group, Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 6, Avenue du Swing, 4367 Belvaux, Luxembourg
45
Multi-center validation of an artificial intelligence system for detection of COVID-19 on chest radiographs in symptomatic patients. Eur Radiol 2022; 33:23-33. [PMID: 35779089] [DOI: 10.1007/s00330-022-08969-z]
Abstract
OBJECTIVES While chest radiograph (CXR) is the first-line imaging investigation in patients with respiratory symptoms, differentiating COVID-19 from other respiratory infections on CXR remains challenging. We developed and validated an AI system for COVID-19 detection on presenting CXR. METHODS A deep learning model (RadGenX), trained on 168,850 CXRs, was validated on a large international test set of presenting CXRs of symptomatic patients from 9 study sites (US, Italy, and Hong Kong SAR) and 2 public datasets from the US and Europe. Performance was measured by area under the receiver operating characteristic curve (AUC). Bootstrapped simulations were performed to assess performance across a range of potential COVID-19 disease prevalence values (3.33% to 33.3%). Comparison against international radiologists was performed on an independent test set of 852 cases. RESULTS RadGenX achieved an AUC of 0.89 on 4-fold cross-validation and an AUC of 0.79 (95% CI 0.78-0.80) on an independent test cohort of 5,894 patients. DeLong's test showed statistical differences in model performance across patients from different regions (p < 0.01), disease severity (p < 0.001), gender (p < 0.001), and age (p = 0.03). Prevalence simulations showed that the negative predictive value increases from 86.1% at 33.3% prevalence to greater than 98.5% at any prevalence below 4.5%. Compared with radiologists, McNemar's test showed the model has higher sensitivity (p < 0.001) but lower specificity (p < 0.001). CONCLUSION An AI model that predicts COVID-19 infection on CXR in symptomatic patients was validated on a large international cohort, providing valuable context on testing and performance expectations for AI systems that perform COVID-19 prediction on CXR. KEY POINTS
• An AI model developed using CXRs to detect COVID-19 was validated in a large multi-center cohort of 5,894 patients from 9 prospectively recruited sites and 2 public datasets.
• Differences in AI model performance were seen across region, disease severity, gender, and age.
• Prevalence simulations on the international test set demonstrate the model's NPV is greater than 98.5% at any prevalence below 4.5%.
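The prevalence simulation in this entry follows directly from Bayes' rule: for a fixed sensitivity and specificity, NPV rises as prevalence falls. A minimal sketch in Python; the sensitivity and specificity values below are illustrative assumptions, not the model's reported operating point:

```python
def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value via Bayes' rule."""
    true_negative_mass = specificity * (1 - prevalence)
    false_negative_mass = (1 - sensitivity) * prevalence
    return true_negative_mass / (true_negative_mass + false_negative_mass)

# Illustrative operating point (not RadGenX's published values)
for prev in (0.333, 0.10, 0.045):
    print(f"prevalence {prev:.1%}: NPV {npv(0.90, 0.70, prev):.1%}")
```

As in the paper's simulation, NPV climbs toward 100% as prevalence drops, which is why a negative AI result is most reassuring in low-prevalence settings.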
46
Ardestani A, Li MD, Chea P, Wortman JR, Medina A, Kalpathy-Cramer J, Wald C. External COVID-19 Deep Learning Model Validation on ACR AI-LAB: It's a Brave New World. J Am Coll Radiol 2022; 19:891-900. [PMID: 35483438 PMCID: PMC8989698 DOI: 10.1016/j.jacr.2022.03.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 03/19/2022] [Accepted: 03/21/2022] [Indexed: 11/22/2022]
Abstract
PURPOSE Deploying external artificial intelligence (AI) models locally can be logistically challenging. We aimed to use the ACR AI-LAB software platform for local testing of a chest radiograph (CXR) algorithm for COVID-19 lung disease severity assessment. METHODS An externally developed deep learning model for COVID-19 radiographic lung disease severity assessment was loaded into the AI-LAB platform at an independent academic medical center, which was separate from the institution in which the model was trained. The data set consisted of CXR images from 141 patients with reverse transcription-polymerase chain reaction-confirmed COVID-19, which were routed to AI-LAB for model inference. The model calculated a Pulmonary X-ray Severity (PXS) score for each image. This score was correlated with the average of a radiologist-based assessment of severity, the modified Radiographic Assessment of Lung Edema score, independently interpreted by three radiologists. The associations between the PXS score and patient admission and intubation or death were assessed. RESULTS The PXS score deployed in AI-LAB correlated with the radiologist-determined modified Radiographic Assessment of Lung Edema score (r = 0.80). PXS score was significantly higher in patients who were admitted (4.0 versus 1.3, P < .001) or intubated or died within 3 days (5.5 versus 3.3, P = .001). CONCLUSIONS AI-LAB was successfully used to test an external COVID-19 CXR AI algorithm on local data with relative ease, showing generalizability of the PXS score model. For AI models to scale and be clinically useful, software tools that facilitate the local testing process, like the freely available AI-LAB, will be important to cross the AI implementation gap in health care systems.
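The reported r = 0.80 between the PXS score and the modified Radiographic Assessment of Lung Edema score is a plain Pearson correlation, which can be sketched over paired severity scores; the data below are hypothetical, for illustration only:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Hypothetical PXS scores vs. averaged radiologist mRALE scores
pxs = [1.2, 2.5, 3.1, 4.0, 5.5, 6.8]
mrale = [2, 5, 6, 9, 10, 14]
print(f"r = {pearson_r(pxs, mrale):.2f}")
```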
Affiliation(s)
- Ali Ardestani
- Department of Radiology, Lahey Hospital and Medical Center, Tufts Medical School, Burlington, Massachusetts
- Matthew D Li
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Pauley Chea
- Department of Radiology, Lahey Hospital and Medical Center, Tufts Medical School, Burlington, Massachusetts
- Jeremy R Wortman
- Vice Chair, Research and Radiology Residency Program Director, Department of Radiology, Lahey Hospital and Medical Center, Tufts Medical School, Burlington, Massachusetts
- Adam Medina
- Department of Radiology, Lahey Hospital and Medical Center, Tufts Medical School, Burlington, Massachusetts
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Christoph Wald
- Chair, Department of Radiology, Lahey Hospital and Medical Center, Tufts Medical School, Burlington, Massachusetts; and Chair, Informatics Commission, ACR.
47
Li H, Whitney HM, Ji Y, Edwards A, Papaioannou J, Liu P, Giger ML. Impact of continuous learning on diagnostic breast MRI AI: evaluation on an independent clinical dataset. J Med Imaging (Bellingham) 2022; 9:034502. [DOI: 10.1117/1.jmi.9.3.034502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Accepted: 05/12/2022] [Indexed: 11/14/2022] Open
Affiliation(s)
- Hui Li
- University of Chicago, Department of Radiology, Chicago, Illinois
- Yu Ji
- Tianjin Medical University, Tianjin Medical University Cancer Institute and Hospital, National Clini
- John Papaioannou
- University of Chicago, Department of Radiology, Chicago, Illinois
- Peifang Liu
- Tianjin Medical University, Tianjin Medical University Cancer Institute and Hospital, National Clini
48
Diao K, Chen Y, Liu Y, Chen BJ, Li WJ, Zhang L, Qu YL, Zhang T, Zhang Y, Wu M, Li K, Song B. Diagnostic study on clinical feasibility of an AI-based diagnostic system as a second reader on mobile CT images: a preliminary result. Ann Transl Med 2022; 10:668. [PMID: 35845492 PMCID: PMC9279799 DOI: 10.21037/atm-22-2157] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 06/06/2022] [Indexed: 02/05/2023]
Abstract
Background Artificial intelligence (AI) has breathed new life into lung nodule detection and diagnosis. However, whether the output information from AI will translate into benefits for clinical workflow or patient outcomes in a real-world setting remains unknown. This study aimed to demonstrate the feasibility of an AI-based diagnostic system deployed as a second reader in imaging interpretation for patients screened for pulmonary abnormalities in a clinical setting. Methods The study included patients from a lung cancer screening program conducted in Sichuan Province, China, using a mobile computed tomography (CT) scanner which traveled to medium-size cities between July 10th, 2020 and September 10th, 2020. Cases suspected to have malignant nodules by junior radiologists, senior radiologists, or AI were given a high-risk (HR) label (HR-junior, HR-senior, and HR-AI, respectively) and included in the final analysis. The diagnostic efficacy of the AI was evaluated by calculating the negative predictive value and positive predictive value, with the senior readers' final results as the gold standard. In addition, characteristics of the lesions were compared among cases with different HR labels. Results In total, 251/3,872 patients (6.48%; male/female: 91/160; median age, 66 years) with HR lung nodules were included. The AI algorithm achieved a negative predictive value of 88.2% [95% confidence interval (CI): 62.2–98.0%] and a positive predictive value of 55.6% (95% CI: 49.0–62.0%). The diagnostic duration was significantly reduced when AI was used as a second reader (223±145.6 vs. 270±143.17 s, P<0.001). The information yielded by AI affected the radiologists' decision-making in 35/145 cases. Lesions of HR cases had a higher volume [309.9 (214.9–732.5) vs. 141.3 (79.3–380.8) mm3, P<0.001], a lower average CT number [−511.0 (−576.5 to −100.5) vs. −191.5 (−487.3 to 22.5), P=0.010], and were more often pure ground-glass opacity rather than solid.
Conclusions The AI algorithm had high negative predictive value but low positive predictive value in diagnosing HR lung lesions in a clinical setting. Deploying AI as a second reader could help avoid missed diagnoses, reduce diagnostic duration, and strengthen diagnostic confidence for radiologists.
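The NPV of 88.2% with its wide CI (62.2–98.0%) reflects the small number of AI-negative cases. A confidence interval of this kind can be sketched with the Wilson score interval; the counts below (15 of 17) are chosen only to match 88.2% for illustration, and both the counts and the interval method the authors actually used are assumptions:

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical counts: 15 true negatives out of 17 AI-negative cases (~88.2%)
low, high = wilson_ci(15, 17)
print(f"NPV 95% CI: {low:.1%} to {high:.1%}")
```

The interval is wide precisely because the denominator is small, matching the pattern in the reported result.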
Affiliation(s)
- Kaiyue Diao
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Yuntian Chen
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Ying Liu
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Bo-Jiang Chen
- Department of Respiratory Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
- Wan-Jiang Li
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Lin Zhang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Ya-Li Qu
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Tong Zhang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Yun Zhang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Min Wu
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China; Huaxi MR Research Center, Functional and Molecular Imaging Key Laboratory of Sichuan Province, West China Hospital, Sichuan University, Chengdu, China
- Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China; Med-X Center for Informatics, Sichuan University, Chengdu, China
- Bin Song
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China; Department of Radiology, Sanya People's Hospital (West China Sanya Hospital of Sichuan University), Chengdu, China
49
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855 PMCID: PMC9870296 DOI: 10.1088/1361-6560/ac678a] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/14/2022] [Indexed: 01/26/2023]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap that occurred with new techniques of deep learning, convolutional neural networks for images, increased computational power, and wider availability of large datasets. Most fields of medicine follow this popular trend, and radiation oncology is notably at the forefront, with a long tradition of using digital images and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, can be very large and complex, with countless generic parameters. This inevitably raises two issues: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which degrades as their complexity grows. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows as well as vendors' perspectives on the clinical implementation of ML.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Adrien Bibal
- PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
- Margerie Huet Dastarac
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Camille Draguet
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
- Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
50
Liu Y, Ye F, Wang Y, Zheng X, Huang Y, Zhou J. Elaboration and Validation of a Nomogram Based on Axillary Ultrasound and Tumor Clinicopathological Features to Predict Axillary Lymph Node Metastasis in Patients With Breast Cancer. Front Oncol 2022; 12:845334. [PMID: 35651796 PMCID: PMC9148964 DOI: 10.3389/fonc.2022.845334] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Accepted: 04/12/2022] [Indexed: 01/02/2023] Open
Abstract
Background This study aimed to construct a nomogram to predict axillary lymph node metastasis (ALNM) based on axillary ultrasound and tumor clinicopathological features. Methods A retrospective analysis of 281 patients with pathologically confirmed breast cancer was performed between January 2015 and March 2018. All patients were randomly divided into a training cohort (n = 197) and a validation cohort (n = 84). Univariate and multivariable logistic regression analyses were performed to identify the clinically important predictors of ALNM when developing the nomogram. The area under the curve (AUC), calibration plots, and decision curve analysis (DCA) were used to assess the discrimination, calibration, and clinical utility of the nomogram. Results In univariate and multivariate analyses, lymphovascular invasion (LVI), axillary lymph node (ALN) cortex thickness, and an obliterated ALN fatty hilum were identified as independent predictors and integrated to develop a nomogram for predicting ALNM. The nomogram showed favorable discrimination for ALNM, with AUCs of 0.87 (95% confidence interval (CI), 0.81–0.92) and 0.84 (95% CI, 0.73–0.92) in the training and validation cohorts, respectively. The calibration plots showed good agreement between the nomogram prediction and the actual ALNM diagnosis (P > 0.05). DCA revealed the net benefit of the nomogram. Conclusions This study developed a nomogram based on three readily available clinical parameters, with good accuracy and clinical utility, which may help the radiologist in decision-making for ultrasound-guided fine needle aspiration cytology/biopsy (US-FNAC/B) according to the nomogram score.
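A nomogram of this kind is essentially a graphical rendering of a fitted logistic regression: each predictor contributes a weighted term to the log-odds, which the reader sums as points. A minimal sketch with hypothetical coefficients; the weights below are illustrative assumptions, not the study's fitted values:

```python
import math

# Hypothetical coefficients for the three reported predictors (illustrative only)
INTERCEPT = -2.0
W_LVI = 1.2       # lymphovascular invasion present (0/1)
W_CORTEX = 0.6    # ALN cortex thickness, per mm
W_HILUM = 1.5     # obliterated ALN fatty hilum (0/1)

def alnm_probability(lvi: int, cortex_mm: float, hilum_obliterated: int) -> float:
    """Predicted probability of axillary lymph node metastasis from the log-odds."""
    logit = INTERCEPT + W_LVI * lvi + W_CORTEX * cortex_mm + W_HILUM * hilum_obliterated
    return 1 / (1 + math.exp(-logit))

print(f"low-risk pattern:  {alnm_probability(0, 2.0, 0):.2f}")
print(f"high-risk pattern: {alnm_probability(1, 4.0, 1):.2f}")
```

The printed nomogram simply maps each weighted term onto a points axis so this sum can be read off without computation.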
Affiliation(s)
- Yubo Liu
- Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Feng Ye
- Department of Breast Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yun Wang
- Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Xueyi Zheng
- Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yini Huang
- Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Jianhua Zhou
- Department of Ultrasound, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- *Correspondence: Jianhua Zhou