1
Bennani-Baiti BI, Weber M, Bernathova M, Clauser P, Kapetas P, Pinker K, Woitek R, Helbich T, Baltzer PTA. Pilot study: A simple CAD-based tool to detect breast cancer on MRI of the breast. Magn Reson Imaging 2024; 110:1-6. PMID: 38479541. DOI: 10.1016/j.mri.2024.03.011.
Abstract
PURPOSE This pilot study aims to assess whether quantitatively assessed enhancing breast tissue, expressed as a percentage of the entire breast volume, can serve as an indicator of breast cancer at breast MRI, and whether the contrast agent employed affects diagnostic efficacy. MATERIALS AND METHODS This retrospective, IRB-approved study included 39 consecutive patients who underwent two subsequent breast MRI exams for suspicious findings at conventional imaging, with 0.1 mmol/kg gadobenic and gadoteric acid. Two independent readers, blinded to the histopathological outcome, assessed unenhanced and early post-contrast images using computer-assisted software (Brevis, Siemens Healthcare). Diagnostic performance was determined statistically for the percentage of ipsilateral enhancing voxel volume and for the percentage of contralateral enhancing voxel volume subtracted from the ipsilateral enhancing voxel volume, after cross-tabulation with the dichotomized histological outcome (benign/malignant). RESULTS Ipsilateral enhancing voxel volume versus histopathological outcome yielded an AUC of 0.707 and 0.687 for gadobenic acid (readers 1 and 2, respectively) and an AUC of 0.778 and 0.773 for gadoteric acid (readers 1 and 2, respectively). Accounting for background parenchymal enhancement by subtracting the contralateral from the ipsilateral enhancing voxel volume yielded an AUC of 0.793 and 0.843 for gadobenic acid and an AUC of 0.692 and 0.662 for gadoteric acid (readers 1 and 2, respectively). Pairwise testing showed no statistically significant difference between readers or between contrast agents (p > 0.05). CONCLUSION The proposed CAD algorithm, which quantitatively assesses enhancing breast tissue as a percentage of the entire breast volume, can indicate the presence of breast cancer.
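Editor's illustration: the AUC values above compare a continuous marker (percent enhancing voxel volume) against a dichotomized outcome. A minimal sketch of that computation, not the authors' code: AUC equals the probability that a randomly chosen malignant case scores higher than a randomly chosen benign one (the normalized Mann-Whitney U statistic). All values below are invented.

```python
def auc(malignant_scores, benign_scores):
    """AUC as P(malignant score > benign score); ties count as 0.5.
    Equivalent to the Mann-Whitney U statistic divided by the pair count."""
    pairs = len(malignant_scores) * len(benign_scores)
    wins = 0.0
    for m in malignant_scores:
        for b in benign_scores:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / pairs

# Hypothetical percent-enhancing-volume values, not study data:
print(auc([12.0, 8.5, 15.2], [3.1, 8.5, 2.0]))
```

An AUC of 0.5 means the marker is uninformative; 1.0 means perfect separation of malignant from benign cases.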
Affiliation(s)
- Barbara I Bennani-Baiti
- Karl Landsteiner University of Health Sciences, Dr. Karl-Dorrek-Straße 30, Krems 3500, Austria; Department of Radiology, University Hospital Krems, Mitterweg 10, Krems 3500, Austria; Division of General and Pediatric Radiology, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Währinger Gürtel 18-20, Vienna 1090, Austria.
- Michael Weber
- Karl Landsteiner University of Health Sciences, Dr. Karl-Dorrek-Straße 30, Krems 3500, Austria
- Maria Bernathova
- Karl Landsteiner University of Health Sciences, Dr. Karl-Dorrek-Straße 30, Krems 3500, Austria
- Paola Clauser
- Karl Landsteiner University of Health Sciences, Dr. Karl-Dorrek-Straße 30, Krems 3500, Austria
- Panagiotis Kapetas
- Karl Landsteiner University of Health Sciences, Dr. Karl-Dorrek-Straße 30, Krems 3500, Austria; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ramona Woitek
- Medical Image Analysis and AI (MIAAI), Danube Private University, Krems 3500, Austria
- Thomas Helbich
- Karl Landsteiner University of Health Sciences, Dr. Karl-Dorrek-Straße 30, Krems 3500, Austria
- Pascal T A Baltzer
- Karl Landsteiner University of Health Sciences, Dr. Karl-Dorrek-Straße 30, Krems 3500, Austria
2
Walston SL, Seki H, Takita H, Mitsuyama Y, Sato S, Hagiwara A, Ito R, Hanaoka S, Miki Y, Ueda D. Data set terminology of deep learning in medicine: a historical review and recommendation. Jpn J Radiol 2024. PMID: 38856878. DOI: 10.1007/s11604-024-01608-1.
Abstract
Medicine and deep learning-based artificial intelligence (AI) engineering represent two distinct fields, each with decades of published history. The current rapid convergence of deep learning and medicine has led to significant advancements, yet it has also introduced ambiguity regarding data set terms common to both fields, potentially leading to miscommunication and methodological discrepancies. This narrative review aims to give historical context for these terms, accentuate the importance of clarity when these terms are used in medical deep learning contexts, and offer solutions to mitigate misunderstandings by readers from either field. Through an examination of historical documents, including articles, writing guidelines, and textbooks, this review traces the divergent evolution of terms for data sets and their impact. Initially, the discordant interpretations of the word 'validation' in medical and AI contexts are explored. We then show that in the medical field as well, terms traditionally used in the deep learning domain are becoming more common, with the data for creating models referred to as the 'training set', the data for tuning of parameters referred to as the 'validation (or tuning) set', and the data for the evaluation of models as the 'test set'. Additionally, the test sets used for model evaluation are classified into internal (random splitting, cross-validation, and leave-one-out) sets and external (temporal and geographic) sets. This review then identifies often misunderstood terms and proposes pragmatic solutions to mitigate terminological confusion in the field of deep learning in medicine. We support the accurate and standardized description of these data sets and the explicit definition of data set splitting terminologies in each publication. These are crucial methods for demonstrating the robustness and generalizability of deep learning applications in medicine. This review aspires to enhance the precision of communication, thereby fostering more effective and transparent research methodologies in this interdisciplinary field.
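Editor's illustration: the recommended terminology (training / validation-or-tuning / test) maps onto a simple random internal split. A minimal sketch under the assumption of a flat list of cases; in medical imaging a real split should be made at the patient level to avoid leakage between sets.

```python
import random

def split_dataset(cases, train_frac=0.7, val_frac=0.15, seed=0):
    """Random internal split into training, validation (tuning), and test sets."""
    rng = random.Random(seed)
    shuffled = list(cases)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],                 # fit model weights
            shuffled[n_train:n_train + n_val],  # tune hyperparameters
            shuffled[n_train + n_val:])         # evaluate once, at the end

train, val, test = split_dataset(range(100))
```

External (temporal or geographic) test sets, by contrast, are never drawn from this shuffle: they come from a different time period or institution entirely.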
Affiliation(s)
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hiroshi Seki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hirotaka Takita
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yasuhito Mitsuyama
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shingo Sato
- Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
- Akifumi Hagiwara
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University, Nagoya, Japan
- Shouhei Hanaoka
- Department of Radiology, University of Tokyo Hospital, Tokyo, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Department of Artificial Intelligence, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
3
Botnari A, Kadar M, Patrascu JM. A Comprehensive Evaluation of Deep Learning Models on Knee MRIs for the Diagnosis and Classification of Meniscal Tears: A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2024; 14:1090. PMID: 38893617. PMCID: PMC11172202. DOI: 10.3390/diagnostics14111090.
Abstract
OBJECTIVES This study delves into the cutting-edge field of deep learning techniques, particularly deep convolutional neural networks (DCNNs), which have demonstrated unprecedented potential in assisting radiologists and orthopedic surgeons in precisely identifying meniscal tears. This research aims to evaluate the effectiveness of deep learning models in recognizing, localizing, describing, and categorizing meniscal tears in magnetic resonance images (MRIs). MATERIALS AND METHODS This systematic review was conducted strictly following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Extensive searches were conducted on MEDLINE (PubMed), Web of Science, Cochrane Library, and Google Scholar. All identified articles underwent a comprehensive risk of bias analysis. Predictive performance values, including sensitivity and specificity, were either extracted or calculated for quantitative analysis. The meta-analysis was performed for all prediction models that identified the presence and location of meniscus tears. RESULTS This study's findings underscore that a range of deep learning models exhibit robust performance in detecting and classifying meniscal tears, in one case surpassing the expertise of musculoskeletal radiologists. Most studies in this review concentrated on identifying tears in the medial or lateral meniscus, and even on precisely locating tears, whether in the anterior or posterior horn, with exceptional accuracy, as demonstrated by AUC values ranging from 0.83 to 0.94. CONCLUSIONS Based on these findings, deep learning models have showcased significant potential in analyzing knee MR images by learning intricate details within images. They offer precise outcomes across diverse tasks, including segmenting specific anatomical structures and identifying pathological regions. Contributions: This study focused exclusively on DL models for identifying and localizing meniscus tears. It presents a meta-analysis that includes eight studies for detecting the presence of a torn meniscus and a meta-analysis of three studies with low heterogeneity that localize and classify the menisci. Another novelty is the analysis of arthroscopic surgery as ground truth. The quality of the studies was assessed against the CLAIM checklist, and the risk of bias was determined using the QUADAS-2 tool.
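Editor's illustration: the per-study performance values extracted for such a meta-analysis reduce to sensitivity and specificity computed from each study's 2x2 table. A hedged sketch with invented counts; the naive count-pooling shown is only for illustration, since proper meta-analyses use bivariate random-effects models that account for between-study heterogeneity.

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from one study's 2x2 confusion table."""
    return tp / (tp + fn), tn / (tn + fp)

def naive_pool(tables):
    """Sum 2x2 counts across studies (ignores between-study heterogeneity)."""
    tp, fp, fn, tn = (sum(t[i] for t in tables) for i in range(4))
    return sens_spec(tp, fp, fn, tn)

# Invented counts for two hypothetical studies: (TP, FP, FN, TN)
print(naive_pool([(45, 5, 5, 45), (90, 10, 10, 90)]))
```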
Affiliation(s)
- Alexei Botnari
- Department of Orthopedics, Faculty of Medicine, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
- Manuella Kadar
- Department of Computer Science, Faculty of Informatics and Engineering, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
- Jenel Marian Patrascu
- Department of Orthopedics-Traumatology, Faculty of Medicine, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
4
Ni R, Han K, Haibe-Kains B, Rink A. Generalizability of deep learning in organ-at-risk segmentation: A transfer learning study in cervical brachytherapy. Radiother Oncol 2024; 197:110332. PMID: 38763356. DOI: 10.1016/j.radonc.2024.110332.
Abstract
PURPOSE Deep learning can automate delineation in radiation therapy, reducing time and variability. Yet, its efficacy varies across different institutions, scanners, or settings, emphasizing the need for adaptable and robust models in clinical environments. Our study demonstrates the effectiveness of the transfer learning (TL) approach in enhancing the generalizability of deep learning models for auto-segmentation of organs-at-risk (OARs) in cervical brachytherapy. METHODS A pre-trained model was developed using 120 scans with ring and tandem applicator on a 3T magnetic resonance (MR) scanner (RT3). Four OARs were segmented and evaluated. Segmentation performance was evaluated by volumetric Dice similarity coefficient (vDSC), 95% Hausdorff distance (HD95), surface DSC, and added path length (APL). The model was fine-tuned on three out-of-distribution target groups. Pre- and post-TL outcomes, and the influence of the number of fine-tuning scans, were compared. A model trained with one group (Single) and a model trained with all four groups (Mixed) were evaluated on both seen and unseen data distributions. RESULTS TL enhanced segmentation accuracy across target groups, matching the pre-trained model's performance. The first five fine-tuning scans led to the most noticeable improvements, with performance plateauing with more data. TL outperformed training-from-scratch given the same training data. The Mixed model performed similarly to the Single model on RT3 scans but demonstrated superior performance on unseen data. CONCLUSIONS TL can improve a model's generalizability for OAR segmentation in MR-guided cervical brachytherapy, requiring less fine-tuning data and reduced training time. These results provide a foundation for developing adaptable models to accommodate clinical settings.
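Editor's illustration: of the four segmentation metrics above, the volumetric Dice similarity coefficient is the most widely used. A minimal sketch on binary voxel masks represented as coordinate sets; the toy coordinates are invented, not study data.

```python
def dice(mask_a, mask_b):
    """Volumetric Dice similarity coefficient between two sets of voxel coordinates."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 2D "voxels": 2 of 3 predicted voxels overlap the reference mask
print(dice([(0, 0), (0, 1), (1, 0)], [(0, 1), (1, 0), (1, 1)]))
```

Dice ranges from 0 (no overlap) to 1 (identical masks); surface DSC and HD95 instead compare mask boundaries, so they penalize contour errors that volumetric overlap can hide.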
Affiliation(s)
- Ruiyan Ni
- Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Kathy Han
- Princess Margaret Cancer Center, University Health Network, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Benjamin Haibe-Kains
- Department of Medical Biophysics, University of Toronto, Toronto, Canada; Princess Margaret Cancer Center, University Health Network, Toronto, Canada; Vector Institute, Toronto, Canada
- Alexandra Rink
- Department of Medical Biophysics, University of Toronto, Toronto, Canada; Princess Margaret Cancer Center, University Health Network, Toronto, Canada; Department of Radiation Oncology, University of Toronto, Toronto, Canada
5
Cho SJ, Cho W, Choi D, Sim G, Jeong SY, Baik SH, Bae YJ, Choi BS, Kim JH, Yoo S, Han JH, Kim CY, Choo J, Sunwoo L. Prediction of treatment response after stereotactic radiosurgery of brain metastasis using deep learning and radiomics on longitudinal MRI data. Sci Rep 2024; 14:11085. PMID: 38750084. PMCID: PMC11096355. DOI: 10.1038/s41598-024-60781-5.
Abstract
We developed artificial intelligence models to predict the brain metastasis (BM) treatment response after stereotactic radiosurgery (SRS) using longitudinal magnetic resonance imaging (MRI) data and evaluated prediction accuracy changes according to the number of sequential MRI scans. We included four sequential MRI scans for 194 patients with BM and 369 target lesions for the development dataset. The data were randomly split (8:2 ratio) for training and testing. For external validation, 172 MRI scans from 43 patients with BM and 62 target lesions were additionally enrolled. The maximum axial diameter (Dmax), radiomics, and deep learning (DL) models were generated for comparison. In the DL arm, we evaluated a simple convolutional neural network (CNN) model and a convolutional gated recurrent unit (Conv-GRU)-based CNN model. The Conv-GRU model performed superior to the simple CNN models. For both datasets, the area under the curve (AUC) was significantly higher for the two-dimensional (2D) Conv-GRU model than for the 3D Conv-GRU, Dmax, and radiomics models. The accuracy of the 2D Conv-GRU model increased with the number of follow-up studies. In conclusion, using longitudinal MRI data, the 2D Conv-GRU model outperformed all other models in predicting the treatment response after SRS of BM.
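Editor's illustration: the core idea of the Conv-GRU arm, carrying a hidden state across sequential follow-up scans, can be sketched with a scalar GRU cell. This is a hedged simplification: the actual model replaces these scalar multiplications with 2D convolutions over CNN feature maps, and the weights below are arbitrary placeholders, not learned values.

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, w):
    """One GRU update fusing the feature x of a new follow-up scan into state h."""
    z = _sigmoid(w["wz"] * x + w["uz"] * h)            # update gate
    r = _sigmoid(w["wr"] * x + w["ur"] * h)            # reset gate
    cand = math.tanh(w["wh"] * x + w["uh"] * (r * h))  # candidate state
    return (1.0 - z) * h + z * cand

def encode_series(features, w):
    """Run the GRU over per-scan features; the final state summarizes the series."""
    h = 0.0
    for x in features:
        h = gru_step(h, x, w)
    return h

w = {k: 0.5 for k in ("wz", "uz", "wr", "ur", "wh", "uh")}  # placeholder weights
state = encode_series([0.8, -0.1, 0.4, 1.2], w)  # e.g., four sequential scans
```

The recurrence explains why accuracy rose with the number of follow-up studies: each additional scan refines the accumulated state before the final prediction head.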
Affiliation(s)
- Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Wonwoo Cho
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea
- Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea
- Dongmin Choi
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea
- Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea
- Gyuhyeon Sim
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea
- Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea
- So Yeong Jeong
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Sung Hyun Baik
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Yun Jung Bae
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Byung Se Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Jae Hyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Sooyoung Yoo
- Office of eHealth Research and Business, Seoul National University Bundang Hospital, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Jung Ho Han
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Chae-Yong Kim
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Jaegul Choo
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea
- Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
- Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea
6
Walston SL, Ueda D. Enhancing AI-assisted Interpretation of Chest Radiographs: A Critical Analysis of Methods and Applicability. Radiology 2024; 311:e233428. PMID: 38713026. DOI: 10.1148/radiol.233428.
Affiliation(s)
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- Center for Health Science Innovation, Osaka Metropolitan University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
7
Gennari AG, Rossi A, De Cecco CN, van Assen M, Sartoretti T, Giannopoulos AA, Schwyzer M, Huellner MW, Messerli M. Artificial intelligence in coronary artery calcium score: rationale, different approaches, and outcomes. Int J Cardiovasc Imaging 2024; 40:951-966. PMID: 38700819. PMCID: PMC11147943. DOI: 10.1007/s10554-024-03080-4.
Abstract
Almost 35 years after its introduction, the coronary artery calcium score (CACS) has not only survived technological advances but has become one of the cornerstones of contemporary cardiovascular imaging. Its simplicity and quantitative nature established it as one of the most robust approaches for atherosclerotic cardiovascular disease risk stratification in primary prevention and a powerful tool to guide therapeutic choices. Groundbreaking advances in computational models and computer power have translated into a surge of artificial intelligence (AI)-based approaches directly or indirectly linked to CACS analysis. This review aims to provide essential knowledge on the AI-based techniques currently applied to CACS, setting the stage for a holistic analysis of the use of these techniques in coronary artery calcium imaging. While the focus of the review is on detailing the evidence, strengths, and limitations of end-to-end CACS algorithms in electrocardiography-gated and non-gated scans, the current role of deep-learning image reconstructions, segmentation techniques, and combined applications such as simultaneous coronary artery calcium and pulmonary nodule segmentation will also be discussed.
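Editor's illustration: the classic quantity these end-to-end algorithms reproduce is the Agatston score, in which each calcified lesion's area is weighted by a factor derived from its peak attenuation. A minimal sketch of that standard weighting with invented lesion values; real scoring also applies a minimum-area threshold and is computed per slice.

```python
def density_factor(peak_hu):
    """Standard Agatston weighting from a lesion's peak attenuation in HU."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the 130 HU calcium threshold

def agatston_score(lesions):
    """Sum of area (mm^2) x density factor over all calcified lesions."""
    return sum(area * density_factor(peak_hu) for area, peak_hu in lesions)

# Two hypothetical lesions: (area in mm^2, peak HU)
print(agatston_score([(10.0, 450), (5.0, 150)]))
```

It is precisely this slice-by-slice, lesion-by-lesion bookkeeping that deep learning pipelines automate, whether by segmenting calcium first or by regressing the score end to end.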
Affiliation(s)
- Antonio G Gennari
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, Zurich, 8091, Switzerland
- University of Zurich, Zurich, Switzerland
- Alexia Rossi
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, Zurich, 8091, Switzerland
- University of Zurich, Zurich, Switzerland
- Carlo N De Cecco
- Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Emory University, Atlanta, GA, USA
- Marly van Assen
- Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Emory University, Atlanta, GA, USA
- Thomas Sartoretti
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, Zurich, 8091, Switzerland
- University of Zurich, Zurich, Switzerland
- Andreas A Giannopoulos
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, Zurich, 8091, Switzerland
- Moritz Schwyzer
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, Zurich, 8091, Switzerland
- University of Zurich, Zurich, Switzerland
- Martin W Huellner
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, Zurich, 8091, Switzerland
- University of Zurich, Zurich, Switzerland
- Michael Messerli
- Department of Nuclear Medicine, University Hospital Zurich, Rämistrasse 100, Zurich, 8091, Switzerland
- University of Zurich, Zurich, Switzerland
8
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. Can Assoc Radiol J 2024; 75:226-244. PMID: 38251882. DOI: 10.1177/08465371231222229.
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
- Data Science Institute, American College of Radiology, Reston, VA, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, CA, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, SA, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, QC, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
- Tufts University Medical School, Boston, MA, USA
- American College of Radiology, Reston, VA, USA
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, SA, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
9
Elfer K, Gardecki E, Garcia V, Ly A, Hytopoulos E, Wen S, Hanna MG, Peeters DJE, Saltz J, Ehinger A, Dudgeon SN, Li X, Blenman KRM, Chen W, Green U, Birmingham R, Pan T, Lennerz JK, Salgado R, Gallas BD. Reproducible Reporting of the Collection and Evaluation of Annotations for Artificial Intelligence Models. Mod Pathol 2024; 37:100439. PMID: 38286221. DOI: 10.1016/j.modpat.2024.100439.
Abstract
This work puts forth and demonstrates the utility of a reporting framework for collecting and evaluating annotations of medical images used for training and testing artificial intelligence (AI) models in assisting detection and diagnosis. AI has unique reporting requirements, as shown by the AI extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklists and the proposed AI extensions to the Standards for Reporting Diagnostic Accuracy (STARD) and Transparent Reporting of a Multivariable Prediction model for Individual Prognosis or Diagnosis (TRIPOD) checklists. AI for detection and/or diagnostic image analysis requires complete, reproducible, and transparent reporting of the annotations and metadata used in training and testing data sets. In an earlier work by other researchers, an annotation workflow and quality checklist for computational pathology annotations were proposed. In this manuscript, we operationalize this workflow into an evaluable quality checklist that applies to any reader-interpreted medical images, and we demonstrate its use for an annotation effort in digital pathology. We refer to this quality framework as the Collection and Evaluation of Annotations for Reproducible Reporting of Artificial Intelligence (CLEARR-AI).
Affiliation(s)
- Katherine Elfer
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland; National Institutes of Health, National Cancer Institute, Division of Cancer Prevention, Cancer Prevention Fellowship Program, Bethesda, Maryland.
| | - Emma Gardecki
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
- Victor Garcia
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
- Amy Ly
- Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts
- Si Wen
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
- Matthew G Hanna
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, New York
- Dieter J E Peeters
- Department of Pathology, University Hospital Antwerp/University of Antwerp, Antwerp, Belgium; Department of Pathology, Sint-Maarten Hospital, Mechelen, Belgium
- Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Anna Ehinger
- Department of Clinical Genetics, Pathology and Molecular Diagnostics, Laboratory Medicine, Lund University, Lund, Sweden
- Sarah N Dudgeon
- Department of Laboratory Medicine, Yale School of Medicine, New Haven, Connecticut
- Xiaoxian Li
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, Georgia
- Kim R M Blenman
- Department of Internal Medicine, Section of Medical Oncology, Yale School of Medicine and Yale Cancer Center, Yale University, New Haven, Connecticut; Department of Computer Science, School of Engineering and Applied Science, Yale University, New Haven, Connecticut
- Weijie Chen
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
- Ursula Green
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia
- Ryan Birmingham
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland; Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia
- Tony Pan
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, Georgia
- Jochen K Lennerz
- Department of Pathology, Center for Integrated Diagnostics, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Roberto Salgado
- Division of Research, Peter MacCallum Cancer Centre, Melbourne, Australia; Department of Pathology, GZA-ZNA Hospitals, Antwerp, Belgium
- Brandon D Gallas
- United States Food and Drug Administration, Center for Devices and Radiological Health, Office of Science and Engineering Laboratories, Division of Imaging Diagnostics and Software Reliability, Silver Spring, Maryland
10
Yin X, Wang K, Wang L, Yang Z, Zhang Y, Wu P, Zhao C, Zhang J. Algorithms for classification of sequences and segmentation of prostate gland: an external validation study. Abdom Radiol (NY) 2024; 49:1275-1287. [PMID: 38436698 DOI: 10.1007/s00261-024-04241-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2023] [Revised: 02/05/2024] [Accepted: 02/05/2024] [Indexed: 03/05/2024]
Abstract
OBJECTIVES The aim of the study was to externally validate two AI models for the classification of prostate mpMRI sequences and segmentation of the prostate gland on T2WI. MATERIALS AND METHODS MpMRI data from 719 patients were retrospectively collected from two hospitals, utilizing nine MR scanners from four different vendors, over the period from February 2018 to May 2022. The pretrained Med3D deep learning architecture was used to perform image classification, and UNet-3D was used to segment the prostate gland. The images were classified into one of nine image types by the mode. The segmentation model was validated using T2WI images, and segmentation accuracy was evaluated by measuring the Dice similarity coefficient (DSC), volumetric similarity (VS), and average Hausdorff distance (AHD). Finally, the efficacy of the models was compared across different MR field strengths and sequences. RESULTS A total of 20,551 image groups were obtained from 719 MR studies. The classification model accuracy was 99%, with a kappa of 0.932. The precision, recall, and F1 values for the nine image types differed significantly (all P < 0.001). The accuracy for 1.436-T, 1.5-T, and 3.0-T scanners was 87%, 86%, and 98%, respectively (P < 0.001). For the segmentation model, the median DSC was 0.942 to 0.955, the median VS was 0.974 to 0.982, and the median AHD was 5.55 to 6.49 mm; these values also differed significantly across the three magnetic field strengths (all P < 0.001). CONCLUSION The AI models for mpMRI image classification and prostate segmentation demonstrated good performance during external validation, which could enhance efficiency in prostate volume measurement and cancer detection with mpMRI. CLINICAL RELEVANCE STATEMENT These models can greatly improve work efficiency in cancer detection, measurement of prostate volume, and guided biopsies.
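The segmentation metrics above include the Dice similarity coefficient (DSC), defined for binary masks as 2|A∩B| / (|A| + |B|); a minimal NumPy sketch of the metric (illustrative only, not the study's implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy 2D "slices": prediction matches the reference in 2 of 3 foreground voxels
ref = np.array([[1, 1, 0],
                [1, 0, 0]])
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
print(round(dice_coefficient(pred, ref), 3))  # → 0.667
```

A median DSC of 0.942 to 0.955, as reported above, therefore corresponds to near-complete voxel overlap between predicted and reference prostate masks.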
Affiliation(s)
- Xuemei Yin
- Department of Medical Imaging, First Hospital of Qinhuangdao, 066000, Qinhuangdao City, Hebei Province, China
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Kexin Wang
- School of Basic Medical Sciences, Capital Medical University, 100052, Beijing, China
- Liang Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd, 100011, Beijing, China
- Pengsheng Wu
- Beijing Smart Tree Medical Technology Co. Ltd, 100011, Beijing, China
- Chenglin Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Jun Zhang
- Department of Medical Imaging, First Hospital of Qinhuangdao, 066000, Qinhuangdao City, Hebei Province, China
11
Lin C, Kuo FC, Chau T, Shih JH, Lin CS, Chen CC, Lee CC, Lin SH. Artificial intelligence-enabled electrocardiography contributes to hyperthyroidism detection and outcome prediction. Commun Med (Lond) 2024; 4:42. [PMID: 38472334 DOI: 10.1038/s43856-024-00472-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 03/01/2024] [Indexed: 03/14/2024] Open
Abstract
BACKGROUND Hyperthyroidism is frequently under-recognized and leads to heart failure and mortality. Timely identification of high-risk patients is a prerequisite to effective antithyroid therapy. Since the heart is very sensitive to hyperthyroidism and its electrical signature can be demonstrated by electrocardiography, we developed an artificial intelligence model to detect hyperthyroidism by electrocardiography and examined its potential for outcome prediction. METHODS The deep learning model was trained using a large dataset of 47,245 electrocardiograms from 33,246 patients at an academic medical center. Patients were included if electrocardiograms and measurements of serum thyroid-stimulating hormone were available that had been obtained within a three-day period. Serum thyroid-stimulating hormone and free thyroxine were used to define overt and subclinical hyperthyroidism. We tested the model internally using 14,420 patients and externally using two additional test sets comprising 11,498 and 596 patients, respectively. RESULTS The deep learning model achieves areas under the receiver operating characteristic curve (AUCs) of 0.725-0.761 for hyperthyroidism detection, AUCs of 0.867-0.876 for overt hyperthyroidism, and AUCs of 0.631-0.701 for subclinical hyperthyroidism, superior to a traditional feature-based machine learning model. Patients identified as hyperthyroidism-positive by the deep learning model have a significantly higher risk (1.97- to 2.94-fold) of all-cause mortality and new-onset heart failure compared to hyperthyroidism-negative patients. This cardiovascular disease stratification is particularly pronounced in subclinical hyperthyroidism, surpassing that observed in overt hyperthyroidism. CONCLUSIONS An innovative algorithm effectively identifies overt and subclinical hyperthyroidism and contributes to cardiovascular risk assessment.
Affiliation(s)
- Chin Lin
- School of Medicine, National Defense Medical Center, Taipei, Taiwan ROC
- Graduate Institute of Aerospace and Undersea Medicine, National Defense Medical Center, Taipei, Taiwan ROC
- Feng-Chih Kuo
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan ROC
- Tom Chau
- Department of Medicine, Providence St. Vincent Medical Center, Portland, OR, USA
- Jui-Hu Shih
- Department of Pharmacy Practice, Tri-Service General Hospital, Taipei, Taiwan ROC
- School of Pharmacy, National Defense Medical Center, Taipei, Taiwan ROC
- Chin-Sheng Lin
- Division of Cardiology, Department of Internal Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan ROC
- Chien-Chou Chen
- Division of Nephrology, Department of Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan ROC
- Chia-Cheng Lee
- Department of Medical Informatics, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan ROC
- Division of Colorectal Surgery, Department of Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan ROC
- Shih-Hua Lin
- Division of Nephrology, Department of Medicine, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan ROC
12
Santomartino SM, Kung J, Yi PH. Systematic review of artificial intelligence development and evaluation for MRI diagnosis of knee ligament or meniscus tears. Skeletal Radiol 2024; 53:445-454. [PMID: 37584757 DOI: 10.1007/s00256-023-04416-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Revised: 07/24/2023] [Accepted: 07/24/2023] [Indexed: 08/17/2023]
Abstract
OBJECTIVE The purpose of this systematic review was to summarize the results of original research studies evaluating the characteristics and performance of deep learning models for detection of knee ligament and meniscus tears on MRI. MATERIALS AND METHODS We searched PubMed for original studies, published as of February 2, 2022, on the development and evaluation of deep learning models for MRI diagnosis of knee ligament or meniscus tears. We summarized study details according to multiple criteria, including baseline article details, model creation, deep learning details, and model evaluation. RESULTS Nineteen studies were included, with radiology departments leading the publications in deep learning development and implementation for detecting knee injuries via MRI. Among the studies, there was a lack of standard reporting and inconsistently described development details. However, all included studies reported consistently high model performance that significantly supplemented human reader performance. CONCLUSION From our review, we found radiology departments have been leading deep learning development for injury detection on knee MRIs. Although studies inconsistently described DL model development details, all reported high model performance, indicating great promise for DL in knee MRI analysis.
Affiliation(s)
- Samantha M Santomartino
- Drexel University College of Medicine, Philadelphia, PA, USA
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Justin Kung
- Department of Orthopaedic Surgery, University of South Carolina, Columbia, SC, USA
- Paul H Yi
- University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore Street First Floor Rm. 1172, Baltimore, MD, 21201, USA
13
Starmans MPA, Miclea RL, Vilgrain V, Ronot M, Purcell Y, Verbeek J, Niessen WJ, Ijzermans JNM, de Man RA, Doukas M, Klein S, Thomeer MG. Automated Assessment of T2-Weighted MRI to Differentiate Malignant and Benign Primary Solid Liver Lesions in Noncirrhotic Livers Using Radiomics. Acad Radiol 2024; 31:870-879. [PMID: 37648580 DOI: 10.1016/j.acra.2023.07.024] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Revised: 07/06/2023] [Accepted: 07/25/2023] [Indexed: 09/01/2023]
Abstract
RATIONALE AND OBJECTIVES Distinguishing malignant from benign liver lesions based on magnetic resonance imaging (MRI) is an important but often challenging task, especially in noncirrhotic livers. We developed and externally validated a radiomics model to quantitatively assess T2-weighted MRI to distinguish the most common malignant and benign primary solid liver lesions in noncirrhotic livers. MATERIALS AND METHODS Data sets were retrospectively collected from three tertiary referral centers (A, B, and C) between 2002 and 2018. Patients with malignant (hepatocellular carcinoma and intrahepatic cholangiocarcinoma) and benign (hepatocellular adenoma and focal nodular hyperplasia) lesions were included. A radiomics model based on T2-weighted MRI was developed in data set A using a combination of machine learning approaches. The model was internally evaluated on data set A through cross-validation, externally validated on data sets B and C, and compared to visual scoring of two experienced abdominal radiologists on data set C. RESULTS The overall data set included 486 patients (A: 187, B: 98, and C: 201). The radiomics model had a mean area under the curve (AUC) of 0.78 upon internal validation on data set A and a similar AUC in external validation (B: 0.74 and C: 0.76). In data set C, the two radiologists showed moderate agreement (Cohen's κ: 0.61) and achieved AUCs of 0.86 and 0.82. CONCLUSION Our T2-weighted MRI radiomics model shows potential for distinguishing malignant from benign primary solid liver lesions. External validation indicated that the model is generalizable despite substantial MRI acquisition protocol differences. Pending further optimization and generalization, this model may aid radiologists in improving the diagnostic workup of patients with liver lesions.
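Internal validation through cross-validation with AUC as the endpoint, as described above, can be sketched generically with scikit-learn; the data, classifier, and settings below are illustrative stand-ins, not the authors' radiomics pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for a radiomics feature matrix: 187 lesions x 50 features
X = rng.normal(size=(187, 50))
# Dichotomized outcome (benign = 0, malignant = 1) weakly driven by two features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=187) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# 5-fold internal cross-validation, scored by area under the ROC curve
aucs = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(round(float(aucs.mean()), 3))
```

Reporting the mean cross-validated AUC on the development set, then re-scoring the frozen model on external data sets, is the pattern the abstract describes (0.78 internally vs. 0.74 and 0.76 externally).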
Affiliation(s)
- Martijn P A Starmans
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands (M.P.A.S., W.J.N., S.K., M.G.T.)
- Razvan L Miclea
- Department of Radiology and Nuclear Medicine, Maastricht UMC+, Maastricht, the Netherlands (R.L.M.)
- Valerie Vilgrain
- Université de Paris, INSERM U 1149, CRI, Paris, France (V.V., M.R.); Département de Radiologie, Hôpital Beaujon, APHP.Nord, Clichy, France (V.V., M.R.)
- Maxime Ronot
- Université de Paris, INSERM U 1149, CRI, Paris, France (V.V., M.R.); Département de Radiologie, Hôpital Beaujon, APHP.Nord, Clichy, France (V.V., M.R.)
- Yvonne Purcell
- Department of Radiology, Hôpital Fondation Rothschild, Paris, France (Y.P.)
- Jef Verbeek
- Department of Gastroenterology and Hepatology, University Hospitals Leuven, Leuven, Belgium (J.V.); Department of Gastroenterology and Hepatology, Maastricht UMC+, Maastricht, the Netherlands (J.V.)
- Wiro J Niessen
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands (M.P.A.S., W.J.N., S.K., M.G.T.); Faculty of Applied Sciences, Delft University of Technology, the Netherlands (W.J.N.)
- Jan N M Ijzermans
- Department of Surgery, Erasmus MC, Rotterdam, the Netherlands (J.N.M.I.)
- Rob A de Man
- Department of Gastroenterology & Hepatology, Erasmus MC, Rotterdam, the Netherlands (R.A.d.M.)
- Michael Doukas
- Department of Pathology, Erasmus MC, Rotterdam, the Netherlands (M.D.)
- Stefan Klein
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands (M.P.A.S., W.J.N., S.K., M.G.T.)
- Maarten G Thomeer
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands (M.P.A.S., W.J.N., S.K., M.G.T.)
14
Atzen SL. Top 10 Tips for Writing Materials and Methods in Radiology: A Brief Guide for Authors. Radiology 2024; 310:e240306. [PMID: 38501956 DOI: 10.1148/radiol.240306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/20/2024]
Affiliation(s)
- Sarah L Atzen
- From the Radiological Society of North America, 820 Jorie Blvd, Oak Brook, IL 60523
15
Boverhof BJ, Redekop WK, Bos D, Starmans MPA, Birch J, Rockall A, Visser JJ. Radiology AI Deployment and Assessment Rubric (RADAR) to bring value-based AI into radiological practice. Insights Imaging 2024; 15:34. [PMID: 38315288 PMCID: PMC10844175 DOI: 10.1186/s13244-023-01599-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 11/14/2023] [Indexed: 02/07/2024] Open
Abstract
OBJECTIVE To provide a comprehensive framework for value assessment of artificial intelligence (AI) in radiology. METHODS This paper presents the RADAR framework, which has been adapted from Fryback and Thornbury's imaging efficacy framework to facilitate the valuation of radiology AI from conception to local implementation. Local efficacy has been newly introduced to underscore the importance of appraising an AI technology within its local environment. Furthermore, the RADAR framework is illustrated through a myriad of study designs that help assess value. RESULTS RADAR presents a seven-level hierarchy, providing radiologists, researchers, and policymakers with a structured approach to the comprehensive assessment of value in radiology AI. RADAR is designed to be dynamic and meet the different valuation needs throughout the AI's lifecycle. Initial phases like technical and diagnostic efficacy (RADAR-1 and RADAR-2) are assessed pre-clinical deployment via in silico clinical trials and cross-sectional studies. Subsequent stages, spanning from diagnostic thinking to patient outcome efficacy (RADAR-3 to RADAR-5), require clinical integration and are explored via randomized controlled trials and cohort studies. Cost-effectiveness efficacy (RADAR-6) takes a societal perspective on financial feasibility, addressed via health-economic evaluations. The final level, RADAR-7, determines how prior valuations translate locally, evaluated through budget impact analysis, multi-criteria decision analyses, and prospective monitoring. CONCLUSION The RADAR framework offers a comprehensive framework for valuing radiology AI. Its layered, hierarchical structure, combined with a focus on local relevance, aligns RADAR seamlessly with the principles of value-based radiology. CRITICAL RELEVANCE STATEMENT The RADAR framework advances artificial intelligence in radiology by delineating a much-needed framework for comprehensive valuation. 
KEY POINTS
• Radiology artificial intelligence lacks a comprehensive approach to value assessment.
• The RADAR framework provides a dynamic, hierarchical method for thorough valuation of radiology AI.
• RADAR advances clinical radiology by bridging the artificial intelligence implementation gap.
Affiliation(s)
- Bart-Jan Boverhof
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- W Ken Redekop
- Erasmus School of Health Policy and Management, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Daniel Bos
- Department of Epidemiology, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Martijn P A Starmans
- Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
- Andrea Rockall
- Department of Surgery & Cancer, Imperial College London, London, UK
- Jacob J Visser
- Department of Radiology & Nuclear Medicine, Erasmus University Medical Centre, Rotterdam, The Netherlands
16
Xie LL, Gong Y, Dong KR, Shen C, Duan B, Dong R. Application of Machine Learning and Deep EfficientNets in Distinguishing Neonatal Adrenal Hematomas From Neuroblastoma in Enhanced Computed Tomography Images. World J Oncol 2024; 15:81-89. [PMID: 38274719 PMCID: PMC10807921 DOI: 10.14740/wjon1744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2023] [Accepted: 01/09/2024] [Indexed: 01/27/2024] Open
Abstract
Background The aim of the study was to employ a combination of radiomic indicators based on computed tomography (CT) imaging and machine learning (ML), along with deep learning (DL), to differentiate between adrenal hematoma and adrenal neuroblastoma in neonates. Methods A total of 76 neonates who underwent CT (40 with neuroblastomas and 36 with adrenal hematomas) were included in this retrospective study and divided into a training group (n = 38) and a testing group (n = 38). The regions of interest (ROIs) were segmented by two radiologists to extract radiomics features using the Pyradiomics package. ML classification was performed using support vector machine (SVM), AdaBoost, Extra Trees, gradient boosting, multi-layer perceptron (MLP), and random forest (RF) classifiers. An EfficientNet model was also employed for classification based on radiomics. The area under the curve (AUC) of the receiver operating characteristic (ROC) was calculated to assess the performance of each model. Results Among all features, least absolute shrinkage and selection operator (LASSO) logistic regression selected nine features. These radiomics features were used to construct the radiomics model. In the training cohort, the AUCs of the SVM, MLP, and Extra Trees models were 0.967, 0.969, and 1.000, respectively. The corresponding AUCs in the test cohort were 0.985, 0.971, and 0.958, respectively. In the classification task, the AUC of the DL framework was 0.987. Conclusion ML decision classifiers and a DL framework constructed from CT-based radiomics features offered a non-invasive method to differentiate neonatal adrenal hematoma from neuroblastoma and performed better than clinical experts.
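LASSO-based feature selection, as described above, keeps only the features with nonzero coefficients under an L1 penalty; a generic scikit-learn sketch on synthetic data (the dimensions, labels, and regularization strength are illustrative assumptions, not the study's data or settings):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic stand-in for an extracted radiomics feature matrix: 76 cases x 100 features
X = StandardScaler().fit_transform(rng.normal(size=(76, 100)))
y = rng.integers(0, 2, size=76)  # 0 = adrenal hematoma, 1 = neuroblastoma (labels illustrative)

# An L1 penalty drives most coefficients to exactly zero, leaving a small
# subset of "selected" features from which to build the radiomics model
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])
print(f"{len(selected)} of {X.shape[1]} features selected")
```

The surviving feature subset (nine features in the study above) is then passed to the downstream classifiers (SVM, MLP, Extra Trees, and so on).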
Affiliation(s)
- Lu Lu Xie
- Shanghai Institute of Infectious Disease and Biosecurity, Children’s Hospital of Fudan University, Shanghai 201102, China
- Department of Pediatric Surgery, Shanghai Key Laboratory of Birth Defect, and Key Laboratory of Neonatal Disease, Ministry of Health, Children’s Hospital of Fudan University, Shanghai 201102, China
- Ying Gong
- Department of Radiology, Children’s Hospital of Fudan University, Shanghai 201102, China
- Kui Ran Dong
- Department of Pediatric Surgery, Shanghai Key Laboratory of Birth Defect, and Key Laboratory of Neonatal Disease, Ministry of Health, Children’s Hospital of Fudan University, Shanghai 201102, China
- Chun Shen
- Department of Pediatric Surgery, Shanghai Key Laboratory of Birth Defect, and Key Laboratory of Neonatal Disease, Ministry of Health, Children’s Hospital of Fudan University, Shanghai 201102, China
- Bo Duan
- Department of Otolaryngology-Head and Neck Surgery, Children’s Hospital of Fudan University, Shanghai 201102, China
- Rui Dong
- Department of Pediatric Surgery, Shanghai Key Laboratory of Birth Defect, and Key Laboratory of Neonatal Disease, Ministry of Health, Children’s Hospital of Fudan University, Shanghai 201102, China
17
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J Med Imaging Radiat Oncol 2024; 68:7-26. [PMID: 38259140 DOI: 10.1111/1754-9485.13612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2023] [Accepted: 11/23/2023] [Indexed: 01/24/2024]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, Alabama, USA
- American College of Radiology Data Science Institute, Reston, Virginia, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, California, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, South Australia, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montreal, Quebec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts, USA
- Tufts University Medical School, Boston, Massachusetts, USA
- Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Reston, Virginia, USA
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, South Australia, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, South Australia, Australia
18
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Pinto Dos Santos D, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement From the ACR, CAR, ESR, RANZCR & RSNA. J Am Coll Radiol 2024:S1546-1440(23)01020-7. [PMID: 38276923 DOI: 10.1016/j.jacr.2023.12.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2024]
Abstract
Artificial intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, Alabama; American College of Radiology Data Science Institute, Reston, Virginia
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, California; Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, California
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, California
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany; Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, Massachusetts; Tufts University Medical School, Boston, Massachusetts; Commission on Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia; College of Medicine and Public Health, Flinders University, Adelaide, Australia
19
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, Dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, purchasing, implementing and monitoring AI tools in radiology: practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. Insights Imaging 2024; 15:16. [PMID: 38246898 PMCID: PMC10800328 DOI: 10.1186/s13244-023-01541-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2024] Open
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools.
Key points
• The incorporation of artificial intelligence (AI) in radiological practice demands increased monitoring of its utility and safety.
• Cooperation between developers, clinicians, and regulators will allow all involved to address ethical issues and monitor AI performance.
• AI can fulfil its promise to advance patient well-being if all steps from development to integration in healthcare are rigorously evaluated.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
- American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, CA, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto Dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
- Tufts University Medical School, Boston, MA, USA
- Commission On Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
20
Kang H, Xie W, Wang H, Guo H, Jiang J, Liu Z, Ding X, Li L, Xu W, Zhao J, Bai X, Cui M, Ye H, Wang B, Yang D, Ma X, Liu J, Wang H. Multiparametric MRI-Based Machine Learning Models for the Characterization of Cystic Renal Masses Compared to the Bosniak Classification, Version 2019: A Multicenter Study. Acad Radiol 2024:S1076-6332(24)00003-5. [PMID: 38242731 DOI: 10.1016/j.acra.2024.01.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2023] [Revised: 12/26/2023] [Accepted: 01/03/2024] [Indexed: 01/21/2024]
Abstract
RATIONALE AND OBJECTIVE Accurate differentiation between benign and malignant cystic renal masses (CRMs) is challenging in clinical practice. This study aimed to develop MRI-based machine learning models for differentiating between benign and malignant CRMs and compare the best-performing model with the Bosniak classification, version 2019 (BC, version 2019). METHODS Between 2009 and 2021, consecutive surgery-proven CRM patients with renal MRI were enrolled in this multicenter study. Models were constructed to differentiate between benign and malignant CRMs using logistic regression (LR), random forest (RF), and support vector machine (SVM) algorithms. Meanwhile, two radiologists classified CRMs into I-IV categories according to the BC, version 2019 in consensus in the test set. A subgroup analysis was conducted to investigate the performance of the best-performing model in complicated CRMs (II-IV lesions in the test set). The performances of the models and the BC, version 2019 were evaluated using the area under the receiver operating characteristic curve (AUC). Performance was statistically compared between the best-performing model and the BC, version 2019. RESULTS 278 and 48 patients were assigned to the training and test sets, respectively. In the test set, the AUC and accuracy of the LR model, the RF model, the SVM model, and the BC, version 2019 were 0.884 and 75.0%, 0.907 and 83.3%, 0.814 and 72.9%, and 0.893 and 81.2%, respectively. Neither the AUC nor the accuracy of the best-performing RF model differed significantly from that of the BC, version 2019 (P = 0.780 and P = 0.065, respectively). The RF model achieved an AUC and accuracy of 0.880 and 81.0% in complicated CRMs. CONCLUSIONS The MRI-based RF model can accurately differentiate between benign and malignant CRMs with comparable performance to the BC, version 2019, and has good performance in complicated CRMs, which may facilitate treatment decision-making and is less affected by interobserver disagreements.
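The model comparison this abstract describes can be sketched in outline: fit LR, RF, and SVM classifiers on a training set, then compare test-set AUCs. The sketch below is purely illustrative, not the study's pipeline (its actual MRI features, preprocessing, and tuning are not given here); it uses synthetic stand-in data and scikit-learn, with a 278/48 split chosen only to mirror the reported set sizes.

```python
# Illustrative sketch only: synthetic stand-in features, not the study's data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for imaging-derived features (0 = benign, 1 = malignant)
X, y = make_classification(n_samples=326, n_features=20, random_state=0)
# 278-patient training set / 48-patient test set, mirroring the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=48, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # AUC computed from the predicted probability of the malignant class
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(aucs)
```

A formal comparison between the best model and the radiologists' Bosniak categories would additionally require a paired test of correlated ROC curves (e.g., DeLong's test), which is not shown here.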
Affiliation(s)
- Huanhuan Kang
- Department of Radiology, First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing 100853, China (H.K., H.G., W.X., J.Z., X.B., M.C., H.Y., H.W.)
- Wanfang Xie
- School of Engineering Medicine, Beihang University, Beijing 100191, China (W.X., J.L.); Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of China, Beijing 100191, China (W.X., J.L.)
- He Wang
- Radiology Department, Peking University First Hospital, Beijing 100034, China (H.W., Z.L.)
- Huiping Guo
- Department of Radiology, First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing 100853, China (H.K., H.G., W.X., J.Z., X.B., M.C., H.Y., H.W.)
- Jiahui Jiang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China (J.J., D.Y.)
- Zhe Liu
- Radiology Department, Peking University First Hospital, Beijing 100034, China (H.W., Z.L.)
- Xiaohui Ding
- Department of Pathology, First Medical Center, Chinese PLA General Hospital, Beijing 100853, China (X.D.)
- Lin Li
- Hospital Management Institute, Department of Innovative Medical Research, Chinese PLA General Hospital, Outpatient Building, Beijing 100853, China (L.L.)
- Wei Xu
- Department of Radiology, First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing 100853, China (H.K., H.G., W.X., J.Z., X.B., M.C., H.Y., H.W.)
- Jian Zhao
- Department of Radiology, First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing 100853, China (H.K., H.G., W.X., J.Z., X.B., M.C., H.Y., H.W.)
- Xu Bai
- Department of Radiology, First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing 100853, China (H.K., H.G., W.X., J.Z., X.B., M.C., H.Y., H.W.)
- Mengqiu Cui
- Department of Radiology, First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing 100853, China (H.K., H.G., W.X., J.Z., X.B., M.C., H.Y., H.W.)
- Huiyi Ye
- Department of Radiology, First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing 100853, China (H.K., H.G., W.X., J.Z., X.B., M.C., H.Y., H.W.)
- Baojun Wang
- Department of Urology, Third Medical Center of Chinese PLA General Hospital, Beijing 100039, China (B.W., X.M.)
- Dawei Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China (J.J., D.Y.)
- Xin Ma
- Department of Urology, Third Medical Center of Chinese PLA General Hospital, Beijing 100039, China (B.W., X.M.)
- Jiangang Liu
- School of Engineering Medicine, Beihang University, Beijing 100191, China (W.X., J.L.); Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of China, Beijing 100191, China (W.X., J.L.)
- Haiyi Wang
- Department of Radiology, First Medical Center of Chinese PLA General Hospital, No.28 Fuxing Road, Haidian District, Beijing 100853, China (H.K., H.G., W.X., J.Z., X.B., M.C., H.Y., H.W.).
21
Brady AP, Allen B, Chong J, Kotter E, Kottler N, Mongan J, Oakden-Rayner L, dos Santos DP, Tang A, Wald C, Slavotinek J. Developing, Purchasing, Implementing and Monitoring AI Tools in Radiology: Practical Considerations. A Multi-Society Statement from the ACR, CAR, ESR, RANZCR and RSNA. Radiol Artif Intell 2024; 6:e230513. [PMID: 38251899 PMCID: PMC10831521 DOI: 10.1148/ryai.230513] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2024]
Abstract
Artificial Intelligence (AI) carries the potential for unprecedented disruption in radiology, with possible positive and negative consequences. The integration of AI in radiology holds the potential to revolutionize healthcare practices by advancing diagnosis, quantification, and management of multiple medical conditions. Nevertheless, the ever-growing availability of AI tools in radiology highlights an increasing need to critically evaluate claims for its utility and to differentiate safe product offerings from potentially harmful, or fundamentally unhelpful ones. This multi-society paper, presenting the views of Radiology Societies in the USA, Canada, Europe, Australia, and New Zealand, defines the potential practical problems and ethical issues surrounding the incorporation of AI into radiological practice. In addition to delineating the main points of concern that developers, regulators, and purchasers of AI tools should consider prior to their introduction into clinical practice, this statement also suggests methods to monitor their stability and safety in clinical use, and their suitability for possible autonomous function. This statement is intended to serve as a useful summary of the practical issues which should be considered by all parties involved in the development of radiology AI resources, and their implementation as clinical tools. This article is simultaneously published in Insights into Imaging (DOI 10.1186/s13244-023-01541-3), Journal of Medical Imaging and Radiation Oncology (DOI 10.1111/1754-9485.13612), Canadian Association of Radiologists Journal (DOI 10.1177/08465371231222229), Journal of the American College of Radiology (DOI 10.1016/j.jacr.2023.12.005), and Radiology: Artificial Intelligence (DOI 10.1148/ryai.230513). Keywords: Artificial Intelligence, Radiology, Automation, Machine Learning Published under a CC BY 4.0 license. ©The Author(s) 2024. Editor's Note: The RSNA Board of Directors has endorsed this article. It has not undergone review or editing by this journal.
Affiliation(s)
- Bibb Allen
- Department of Radiology, Grandview Medical Center, Birmingham, AL, USA
- American College of Radiology Data Science Institute, Reston, VA, USA
- Jaron Chong
- Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Elmar Kotter
- Department of Diagnostic and Interventional Radiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nina Kottler
- Radiology Partners, El Segundo, CA, USA
- Stanford Center for Artificial Intelligence in Medicine & Imaging, Palo Alto, CA, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, USA
- Lauren Oakden-Rayner
- Australian Institute for Machine Learning, University of Adelaide, Adelaide, Australia
- Daniel Pinto dos Santos
- Department of Radiology, University Hospital of Cologne, Cologne, Germany
- Department of Radiology, University Hospital of Frankfurt, Frankfurt, Germany
- An Tang
- Department of Radiology, Radiation Oncology, and Nuclear Medicine, Université de Montréal, Montréal, Québec, Canada
- Christoph Wald
- Department of Radiology, Lahey Hospital & Medical Center, Burlington, MA, USA
- Tufts University Medical School, Boston, MA, USA
- Commission On Informatics, and Member, Board of Chancellors, American College of Radiology, Virginia, USA
- John Slavotinek
- South Australia Medical Imaging, Flinders Medical Centre Adelaide, Adelaide, Australia
- College of Medicine and Public Health, Flinders University, Adelaide, Australia
22
Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard NE, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D. How AI May Transform Musculoskeletal Imaging. Radiology 2024; 310:e230764. [PMID: 38165245 PMCID: PMC10831478 DOI: 10.1148/radiol.230764] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2023] [Revised: 06/18/2023] [Accepted: 07/11/2023] [Indexed: 01/03/2024]
Abstract
While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? For AI to be the solution, the wide implementation of AI-supported data acquisition methods in clinical practice requires establishing trusted and reliable results. This implementation will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing the reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.
Affiliation(s)
- Ali Guermazi
- From the Department of Radiology, Boston University School of Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.); Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu Hospital and University Paris Cité, Paris, France (M.T.); Department of Radiology, New York University Grossman School of Medicine, New York, NY (J.F., R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.); Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell Medicine, New York, NY (J.C.); Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.); Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and Radiology (F.W.R.), Universitätsklinikum Erlangen & Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany (F.K.); School of Medicine & Computation, Information and Technology Klinikum rechts der Isar, Technical University Munich, München, Germany (D.R.); Department of Computing, Imperial College London, London, England (D.R.); and Department of Radiology, Tufts Medical Center, Tufts University School of Medicine, Boston, Mass (D.H.)
- Patrick Omoumi, Mickael Tordjman, Jan Fritz, Richard Kijowski, Nor-Eddine Regnard, John Carrino, Charles E. Kahn, Florian Knoll, Daniel Rueckert, Frank W. Roemer, and Daichi Hayashi share the combined affiliation listing given under Ali Guermazi.
23
Ueda D, Kakinuma T, Fujita S, Kamagata K, Fushimi Y, Ito R, Matsui Y, Nozaki T, Nakaura T, Fujima N, Tatsugami F, Yanagawa M, Hirata K, Yamada A, Tsuboyama T, Kawamura M, Fujioka T, Naganawa S. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol 2024; 42:3-15. [PMID: 37540463 PMCID: PMC10764412 DOI: 10.1007/s11604-023-01474-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 07/17/2023] [Indexed: 08/05/2023]
Abstract
In this review, we address the issue of fairness in the clinical integration of artificial intelligence (AI) in the medical field. As the clinical adoption of deep learning algorithms, a subfield of AI, progresses, concerns have arisen regarding the impact of AI biases and discrimination on patient health. This review aims to provide a comprehensive overview of concerns associated with AI fairness; discuss strategies to mitigate AI biases; and emphasize the need for cooperation among physicians, AI researchers, AI developers, policymakers, and patients to ensure equitable AI integration. First, we define and introduce the concept of fairness in AI applications in healthcare and radiology, emphasizing the benefits and challenges of incorporating AI into clinical practice. Next, we delve into concerns regarding fairness in healthcare, addressing the various causes of biases in AI and potential concerns such as misdiagnosis, unequal access to treatment, and ethical considerations. We then outline strategies for addressing fairness, such as the importance of diverse and representative data and algorithm audits. Additionally, we discuss ethical and legal considerations such as data privacy, responsibility, accountability, transparency, and explainability in AI. Finally, we present the Fairness of Artificial Intelligence Recommendations in healthcare (FAIR) statement to offer best practices. Through these efforts, we aim to provide a foundation for discussing the responsible and equitable implementation and deployment of AI in healthcare.
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-Machi, Abeno-ku, Osaka, 545-8585, Japan.
- Shohei Fujita
- Department of Radiology, University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyoku, Kyoto, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-ku, Okayama, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Shinjuku-ku, Tokyo, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-ku, Kumamoto, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Minami-ku, Hiroshima, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita-ku, Sapporo, Hokkaido, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita City, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo-ku, Tokyo, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
24
Yasaka K, Sato C, Hirakawa H, Fujita N, Kurokawa M, Watanabe Y, Kubo T, Abe O. Impact of deep learning on radiologists and radiology residents in detecting breast cancer on CT: a cross-vendor test study. Clin Radiol 2024; 79:e41-e47. [PMID: 37872026] [DOI: 10.1016/j.crad.2023.09.022]
Abstract
AIM To investigate the effect of deep learning on the diagnostic performance of radiologists and radiology residents in detecting breast cancer on computed tomography (CT). MATERIALS AND METHODS In this retrospective study, patients undergoing contrast-enhanced chest CT between January 2010 and December 2020 on equipment from two vendors were included. Patients with confirmed breast cancer were divided into training (n=201), validation (n=26), and test (n=30) groups using processed CT images from either vendor. The trained deep-learning model was applied to test-group patients with (30 females; mean age = 59.2 ± 15.8 years) and without (19 males, 21 females; mean age = 64 ± 15.9 years) breast cancer. Image-based diagnostic performance of the deep-learning model was evaluated with the area under the receiver operating characteristic curve (AUC). Two radiologists and three radiology residents were asked to detect malignant lesions by recording a four-point diagnostic confidence score before and after referring to the deep-learning model's output, and their diagnostic performance was evaluated using jackknife alternative free-response receiver operating characteristic analysis by calculating the figure of merit (FOM). RESULTS The AUCs of the trained deep-learning model on the validation and test data were 0.976 and 0.967, respectively. After referring to the deep-learning model's output, the readers' FOMs improved significantly (readers 1/2/3/4/5: from 0.933/0.962/0.883/0.944/0.867 to 0.958/0.968/0.917/0.947/0.900; p=0.038). CONCLUSION Deep learning can help radiologists and radiology residents detect breast cancer on CT.
Affiliation(s)
- K Yasaka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
- C Sato
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- H Hirakawa
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- N Fujita
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- M Kurokawa
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Y Watanabe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- T Kubo
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- O Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
25
Yan S, Li J, Wu W. Artificial intelligence in breast cancer: application and future perspectives. J Cancer Res Clin Oncol 2023; 149:16179-16190. [PMID: 37656245] [DOI: 10.1007/s00432-023-05337-2]
Abstract
Breast cancer is one of the most common cancers and one of the leading causes of cancer-related death in women worldwide. Early diagnosis and treatment are key to a favorable prognosis. Artificial intelligence is being applied ever more widely in medicine, including image analysis, automated diagnosis, intelligent pharmaceutical systems, and personalized treatment. AI-based breast cancer imaging, pathology, and adjuvant-therapy technologies can not only reduce the workload of clinicians but also continuously improve the accuracy and sensitivity of breast cancer diagnosis and treatment. This paper reviews the application of AI in breast cancer, looks ahead, and identifies challenges for the future development of AI in breast cancer detection and therapy, so as to provide ideas for future research.
Affiliation(s)
- Shuixin Yan
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Jiadi Li
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Weizhu Wu
- The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
26
Van Den Berghe T, Babin D, Chen M, Callens M, Brack D, Maes H, Lievens J, Lammens M, Van Sumere M, Morbée L, Hautekeete S, Schatteman S, Jacobs T, Thooft WJ, Herregods N, Huysse W, Jaremko JL, Lambert R, Maksymowych W, Laloo F, Baraliakos X, De Craemer AS, Carron P, Van den Bosch F, Elewaut D, Jans L. Neural network algorithm for detection of erosions and ankylosis on CT of the sacroiliac joints: multicentre development and validation of diagnostic accuracy. Eur Radiol 2023; 33:8310-8323. [PMID: 37219619] [DOI: 10.1007/s00330-023-09704-y]
Abstract
OBJECTIVES To evaluate the feasibility and diagnostic accuracy of a deep learning network for detection of structural lesions of sacroiliitis on multicentre pelvic CT scans. METHODS Pelvic CT scans of 145 patients (81 female, 121 Ghent University/24 Alberta University, 18-87 years old, mean 40 ± 13 years, 2005-2021) with a clinical suspicion of sacroiliitis were retrospectively included. After manual sacroiliac joint (SIJ) segmentation and structural lesion annotation, a U-Net for SIJ segmentation and two separate convolutional neural networks (CNN) for erosion and ankylosis detection were trained. In-training validation and tenfold validation testing (U-Net-n = 10 × 58; CNN-n = 10 × 29) on a test dataset were performed to assess performance on a slice-by-slice and patient level (dice coefficient/accuracy/sensitivity/specificity/positive and negative predictive value/ROC AUC). Patient-level optimisation was applied to increase the performance regarding predefined statistical metrics. Gradient-weighted class activation mapping (Grad-CAM++) heatmap explainability analysis highlighted image parts with statistically important regions for algorithmic decisions. RESULTS Regarding SIJ segmentation, a dice coefficient of 0.75 was obtained in the test dataset. For slice-by-slice structural lesion detection, a sensitivity/specificity/ROC AUC of 95%/89%/0.92 and 93%/91%/0.91 were obtained in the test dataset for erosion and ankylosis detection, respectively. For patient-level lesion detection after pipeline optimisation for predefined statistical metrics, a sensitivity/specificity of 95%/85% and 82%/97% were obtained for erosion and ankylosis detection, respectively. Grad-CAM++ explainability analysis highlighted cortical edges as focus for pipeline decisions. 
CONCLUSIONS An optimised deep learning pipeline, including an explainability analysis, detects structural lesions of sacroiliitis on pelvic CT scans with excellent statistical performance on a slice-by-slice and patient level. CLINICAL RELEVANCE STATEMENT An optimised deep learning pipeline, including a robust explainability analysis, detects structural lesions of sacroiliitis on pelvic CT scans with excellent statistical metrics on a slice-by-slice and patient level. KEY POINTS • Structural lesions of sacroiliitis can be detected automatically in pelvic CT scans. • Both automatic segmentation and disease detection yield excellent statistical outcome metrics. • The algorithm takes decisions based on cortical edges, rendering an explainable solution.
Affiliation(s)
- Thomas Van Den Berghe
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium.
- Danilo Babin
- Department of Telecommunication and Information Processing - Image Processing and Interpretation (TELIN-IPI), Faculty of Engineering and Architecture, Ghent University - IMEC, Sint-Pietersnieuwstraat 41, 9000, Ghent, Belgium
- Min Chen
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, 518036, China
- Martijn Callens
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Denim Brack
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Helena Maes
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Jan Lievens
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Marie Lammens
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Maxime Van Sumere
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Lieve Morbée
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Simon Hautekeete
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Stijn Schatteman
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Tom Jacobs
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Willem-Jan Thooft
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Nele Herregods
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Wouter Huysse
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Jacob L Jaremko
- Department of Radiology and Diagnostic Imaging and Rheumatology, University of Alberta, 8440 122 Street NW, Edmonton, Alberta, T6G 2B7, Canada
- Robert Lambert
- Department of Radiology and Diagnostic Imaging and Rheumatology, University of Alberta, 8440 122 Street NW, Edmonton, Alberta, T6G 2B7, Canada
- Walter Maksymowych
- Department of Radiology and Diagnostic Imaging and Rheumatology, University of Alberta, 8440 122 Street NW, Edmonton, Alberta, T6G 2B7, Canada
- Frederiek Laloo
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Xenofon Baraliakos
- Rheumazentrum Ruhrgebiet Herne, Ruhr-University Bochum, Claudiusstraße 45, 44649, Herne, Germany
- Ann-Sophie De Craemer
- Department of Rheumatology, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Vlaams Instituut voor Biotechnologie (VIB) Centre for Inflammation Research (IRC), Ghent University, Technologiepark 927, 9052, Ghent, Belgium
- Philippe Carron
- Department of Rheumatology, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Vlaams Instituut voor Biotechnologie (VIB) Centre for Inflammation Research (IRC), Ghent University, Technologiepark 927, 9052, Ghent, Belgium
- Filip Van den Bosch
- Department of Rheumatology, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Vlaams Instituut voor Biotechnologie (VIB) Centre for Inflammation Research (IRC), Ghent University, Technologiepark 927, 9052, Ghent, Belgium
- Dirk Elewaut
- Department of Rheumatology, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
- Vlaams Instituut voor Biotechnologie (VIB) Centre for Inflammation Research (IRC), Ghent University, Technologiepark 927, 9052, Ghent, Belgium
- Lennart Jans
- Department of Radiology and Medical Imaging, Ghent University Hospital, Corneel Heymanslaan 10, 9000, Ghent, Belgium
27
Krag CH, Müller FC, Gandrup KL, Raaschou H, Andersen MB, Brejnebøl MW, Sagar MV, Bojsen JA, Rasmussen BS, Graumann O, Nielsen M, Kruuse C, Boesen M. Diagnostic test accuracy study of a commercially available deep learning algorithm for ischemic lesion detection on brain MRIs in suspected stroke patients from a non-comprehensive stroke center. Eur J Radiol 2023; 168:111126. [PMID: 37804650] [DOI: 10.1016/j.ejrad.2023.111126]
Abstract
PURPOSE To estimate the ability of a commercially available artificial intelligence (AI) tool to detect acute brain ischemia on magnetic resonance imaging (MRI), compared to an experienced neuroradiologist. METHODS We retrospectively included 1030 patients with brain MRI, suspected of stroke, from 6 January 2020 to 1 April 2022, based on these criteria: age ≥ 18 years and symptoms within four weeks before the scan. The neuroradiologist reinterpreted the MRI scans and subclassified ischemic lesions for reference. We excluded scans with interpretation difficulties due to artifacts or missing sequences. Four MRI scanner models from the same vendor were used. The first 800 patients were included consecutively; the remainder were enriched for less frequent lesions. The index test was a CE-approved AI tool (Apollo version 2.1.1 by Cerebriu). RESULTS The final analysis cohort comprised 995 patients (mean age 69 years, 53 % female). A case-based analysis for detecting acute ischemic lesions showed a sensitivity of 89 % (95 % CI: 85 %-91 %) and specificity of 90 % (95 % CI: 87 %-92 %). We found no significant difference in sensitivity or specificity based on sex, age, or comorbidities. Specificity was reduced in cases with DWI artifacts. Multivariate analysis showed that increasing ischemic lesion size and fragmented lesions were independently associated with higher sensitivity, while non-acute lesion ages lowered sensitivity. CONCLUSIONS The AI tool exhibits high sensitivity and specificity in detecting acute ischemic lesions on MRI compared to an experienced neuroradiologist. While sensitivity depends on the ischemic lesions' characteristics, specificity depends on the image quality.
Affiliation(s)
- Christian H Krag
- Department of Radiology, Herlev and Gentofte Hospital, Herlev, Denmark; Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark.
- Felix C Müller
- Department of Radiology, Herlev and Gentofte Hospital, Herlev, Denmark
- Karen L Gandrup
- Department of Radiology, Herlev and Gentofte Hospital, Herlev, Denmark
- Michael B Andersen
- Department of Radiology, Herlev and Gentofte Hospital, Herlev, Denmark; Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark
- Mathias W Brejnebøl
- Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark; Department of Radiology, Bispebjerg and Frederiksberg Hospital, Frederiksberg, Denmark
- Malini V Sagar
- Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark; Department of Neurology, Herlev and Gentofte Hospital, Herlev, Denmark
- Jonas A Bojsen
- Department of Radiology, Odense University Hospital, Odense, Denmark
- Ole Graumann
- Department of Radiology, Odense University Hospital, Odense, Denmark
- Mads Nielsen
- Department of Computer Science, University of Copenhagen, Denmark
- Christina Kruuse
- Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark; Department of Neurology, Herlev and Gentofte Hospital, Herlev, Denmark
- Mikael Boesen
- Faculty of Health Sciences, University of Copenhagen, Copenhagen, Denmark; Department of Radiology, Bispebjerg and Frederiksberg Hospital, Frederiksberg, Denmark
28
Peisen F, Gerken A, Hering A, Dahm I, Nikolaou K, Gatidis S, Eigentler TK, Amaral T, Moltz JH, Othman AE. Can Whole-Body Baseline CT Radiomics Add Information to the Prediction of Best Response, Progression-Free Survival, and Overall Survival of Stage IV Melanoma Patients Receiving First-Line Targeted Therapy: A Retrospective Register Study. Diagnostics (Basel) 2023; 13:3210. [PMID: 37892030] [PMCID: PMC10605712] [DOI: 10.3390/diagnostics13203210]
Abstract
BACKGROUND The aim of this study was to investigate whether the combination of radiomics and clinical parameters in a machine-learning model offers additive information compared with the use of only clinical parameters in predicting the best response, progression-free survival after six months, as well as overall survival after six and twelve months in patients with stage IV malignant melanoma undergoing first-line targeted therapy. METHODS A baseline machine-learning model using clinical variables (demographic parameters and tumor markers) was compared with an extended model using clinical variables and radiomic features of the whole tumor burden, utilizing repeated five-fold cross-validation. Baseline CTs of 91 stage IV malignant melanoma patients, all treated in the same university hospital, were identified in the Central Malignant Melanoma Registry and all metastases were volumetrically segmented (n = 4727). RESULTS Compared with the baseline model, the extended radiomics model did not add significantly more information to the best-response prediction (AUC [95% CI] 0.548 (0.188, 0.808) vs. 0.487 (0.139, 0.743)), the prediction of PFS after six months (AUC [95% CI] 0.699 (0.436, 0.958) vs. 0.604 (0.373, 0.867)), or the overall survival prediction after six and twelve months (AUC [95% CI] 0.685 (0.188, 0.967) vs. 0.766 (0.433, 1.000) and AUC [95% CI] 0.554 (0.163, 0.781) vs. 0.616 (0.271, 1.000), respectively). CONCLUSIONS The results showed no additional value of baseline whole-body CT radiomics for best-response prediction, progression-free survival prediction for six months, or six-month and twelve-month overall survival prediction for stage IV melanoma patients receiving first-line targeted therapy. These results need to be validated in a larger cohort.
Affiliation(s)
- Felix Peisen
- Department of Diagnostic and Interventional Radiology, Tuebingen University Hospital, Eberhard Karls University, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany; (I.D.); (K.N.); (S.G.); (A.E.O.)
- Annika Gerken
- Fraunhofer MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany; (A.G.); (A.H.); (J.H.M.)
- Alessa Hering
- Fraunhofer MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany; (A.G.); (A.H.); (J.H.M.)
- Diagnostic Image Analysis Group, Radboud University Medical Center (Radboudumc), Geert Grooteplein Zuid 10, 6525 GA Nijmegen, The Netherlands
- Isabel Dahm
- Department of Diagnostic and Interventional Radiology, Tuebingen University Hospital, Eberhard Karls University, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany; (I.D.); (K.N.); (S.G.); (A.E.O.)
- Konstantin Nikolaou
- Department of Diagnostic and Interventional Radiology, Tuebingen University Hospital, Eberhard Karls University, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany; (I.D.); (K.N.); (S.G.); (A.E.O.)
- Image-Guided and Functionally Instructed Tumor Therapies (iFIT), The Cluster of Excellence (EXC 2180), 72076 Tuebingen, Germany
- Sergios Gatidis
- Department of Diagnostic and Interventional Radiology, Tuebingen University Hospital, Eberhard Karls University, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany; (I.D.); (K.N.); (S.G.); (A.E.O.)
- Max Planck Institute for Intelligent Systems, Max-Planck-Ring 4, 72076 Tuebingen, Germany
- Thomas K. Eigentler
- Center of Dermato-Oncology, Department of Dermatology, Tuebingen University Hospital, Eberhard Karls University, Liebermeisterstraße 25, 72076 Tuebingen, Germany; (T.K.E.); (T.A.)
- Department of Dermatology, Venereology and Allergology, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humbolt-Universität zu Berlin, Luisenstraße 2, 10117 Berlin, Germany
- Teresa Amaral
- Center of Dermato-Oncology, Department of Dermatology, Tuebingen University Hospital, Eberhard Karls University, Liebermeisterstraße 25, 72076 Tuebingen, Germany; (T.K.E.); (T.A.)
- Jan H. Moltz
- Fraunhofer MEVIS, Max-von-Laue-Straße 2, 28359 Bremen, Germany; (A.G.); (A.H.); (J.H.M.)
- Ahmed E. Othman
- Department of Diagnostic and Interventional Radiology, Tuebingen University Hospital, Eberhard Karls University, Hoppe-Seyler-Straße 3, 72076 Tuebingen, Germany; (I.D.); (K.N.); (S.G.); (A.E.O.)
- Institute of Neuroradiology, Johannes Gutenberg University Hospital Mainz, Langenbeckstraße 1, 55131 Mainz, Germany
29
Noordman CR, Yakar D, Bosma J, Simonis FFJ, Huisman H. Complexities of deep learning-based undersampled MR image reconstruction. Eur Radiol Exp 2023; 7:58. [PMID: 37789241] [PMCID: PMC10547669] [DOI: 10.1186/s41747-023-00372-7]
Abstract
Artificial intelligence has opened a new path of innovation in magnetic resonance (MR) image reconstruction of undersampled k-space acquisitions. This review offers readers an analysis of the current deep learning-based MR image reconstruction methods. The literature in this field shows exponential growth, both in volume and complexity, as the capabilities of machine learning in solving inverse problems such as image reconstruction are explored. We review the latest developments, aiming to assist researchers and radiologists who are developing new methods or seeking to provide valuable feedback. We shed light on key concepts by exploring the technical intricacies of MR image reconstruction, highlighting the importance of raw datasets and the difficulty of evaluating diagnostic value using standard metrics. Relevance statement: Increasingly complex algorithms output reconstructed images that are difficult to assess for robustness and diagnostic quality, necessitating high-quality datasets and collaboration with radiologists. Key points: • Deep learning-based image reconstruction algorithms are increasing both in complexity and performance. • The evaluation of reconstructed images may mistake perceived image quality for diagnostic value. • Collaboration with radiologists is crucial for advancing deep learning technology.
Affiliation(s)
- Constant Richard Noordman
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands.
- Derya Yakar
- Medical Imaging Center, Departments of Radiology, Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen, 9700 RB, The Netherlands
- Joeran Bosma
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Henkjan Huisman
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, 7030, Norway
30
Bandyopadhyay A, Bae C, Cheng H, Chiang A, Deak M, Seixas A, Singh J. Smart sleep: what to consider when adopting AI-enabled solutions in clinical practice of sleep medicine. J Clin Sleep Med 2023; 19:1823-1833. [PMID: 37394867] [PMCID: PMC10545999] [DOI: 10.5664/jcsm.10702]
Abstract
Since the American Academy of Sleep Medicine published its 2020 position statement on artificial intelligence (AI) in sleep medicine, there has been a tremendous expansion of AI-related software and hardware options for sleep clinicians. To help clinicians understand the current state of AI in sleep medicine, and to further enable these solutions to be adopted into clinical practice, a discussion panel was conducted on June 7, 2022, at the Associated Professional Sleep Societies Sleep Conference in Charlotte, North Carolina. This article summarizes key discussion points from that session, including considerations for clinicians evaluating AI-enabled solutions: steps the Food and Drug Administration and clinicians might take to protect patients, logistical issues, technical challenges, billing and compliance considerations, education and training considerations, and other challenges unique to AI-enabled solutions. Our summary of this session is meant to support clinicians in their efforts to care for patients with sleep disorders utilizing AI-enabled solutions. CITATION Bandyopadhyay A, Bae C, Cheng H, et al. Smart sleep: what to consider when adopting AI-enabled solutions in clinical practice of sleep medicine. J Clin Sleep Med. 2023;19(10):1823-1833.
Affiliation(s)
- Anuja Bandyopadhyay
- Department of Pediatrics, Indiana University School of Medicine, Indianapolis, Indiana
- Charles Bae
- Division of Sleep Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
- Hao Cheng
- Department of Pulmonary and Sleep Medicine, Miami VA Healthcare System, Miami, Florida
- Ambrose Chiang
- Louis Stokes Cleveland VA Medical Center, Case Western Reserve University, Cleveland, Ohio
- Azizi Seixas
- Department of Informatics and Health Data Science, University of Miami Miller School of Medicine, Coral Gables, Florida
- Jaspal Singh
- Atrium Health Department of Medicine, Wake Forest School of Medicine, Charlotte, North Carolina
31
Najafi A, Cazzato RL, Meyer BC, Pereira PL, Alberich A, López A, Ronot M, Fritz J, Maas M, Benson S, Haage P, Gomez Munoz F. CIRSE Position Paper on Artificial Intelligence in Interventional Radiology. Cardiovasc Intervent Radiol 2023; 46:1303-1307. [PMID: 37668690] [DOI: 10.1007/s00270-023-03521-y]
Abstract
Artificial intelligence (AI) has made tremendous advances in recent years and will presumably have a major impact on health care. These advancements are expected to affect different aspects of clinical medicine, leading to improvement of delivered care and optimization of available resources. As a modern specialty that extensively relies on imaging, interventional radiology (IR) is primed to be at the forefront of this development. This is especially relevant since IR is a highly advanced specialty that heavily relies on technology and is thus naturally susceptible to disruption by new technological developments. Disruption always means opportunity, and interventionalists must therefore understand AI and be a central part of decision-making when such systems are developed, trained, and implemented. Furthermore, interventional radiologists must not only embrace but lead the change that AI technology will enable. This CIRSE position paper discusses the status quo as well as current developments and challenges.
Affiliation(s)
- Arash Najafi
- Department of Radiology and Nuclear Medicine, Institut für Radiologie und Nuklearmedizin, Kantonsspital Winterthur, Brauerstrasse 15, 8401, Winterthur, Switzerland.
- Roberto Luigi Cazzato
- Department of Interventional Radiology, University Hospital of Strasbourg, Strasbourg, France
- Bernhard C Meyer
- Department of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany
- Philippe L Pereira
- Center of Radiology, Minimally Invasive Therapies and Nuclear Medicine, SLK-Kliniken GmbH, Academic Hospital of Ruprecht-Karls-University, Heidelberg, Germany
- APL Prof. Faculty of Eberhards-Karls-University, Tübingen, Germany
- Faculty of Danube Private University, Krems, Austria
- Angel Alberich
- Quantitative Imaging Biomarkers in Medicine, Quibim SL, Valencia, Spain
- Antonio López
- Medical Informatics and Radiology Department, Hospital Clinic de Barcelona, Barcelona, Spain
- Maxime Ronot
- Université Paris Cité, CRI, Paris, France
- Service de Radiologie, Hôpital Beaujon APHP Nord, Clichy, France
- Jan Fritz
- Department of Radiology, NYU Grossman School of Medicine, New York, USA
- Monique Maas
- Antoni van Leeuwenhoek-Netherlands Cancer Institute, Amsterdam, The Netherlands
- Sean Benson
- Antoni van Leeuwenhoek-Netherlands Cancer Institute, Amsterdam, The Netherlands
- Patrick Haage
- Zentrum für Radiologie, HELIOS Universitätsklinikum Wuppertal, Wuppertal, Germany
- Fernando Gomez Munoz
- Antoni van Leeuwenhoek-Netherlands Cancer Institute, Amsterdam, The Netherlands
- Hospital Universitari i Politecnic La Fe, Valencia, Spain

32
Kocak B, Yardimci AH, Yuzkan S, Keles A, Altun O, Bulut E, Bayrak ON, Okumus AA. Transparency in Artificial Intelligence Research: a Systematic Review of Availability Items Related to Open Science in Radiology and Nuclear Medicine. Acad Radiol 2023; 30:2254-2266. [PMID: 36526532 DOI: 10.1016/j.acra.2022.11.030] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 11/21/2022] [Accepted: 11/22/2022] [Indexed: 12/15/2022]
Abstract
RATIONALE AND OBJECTIVES Reproducibility of artificial intelligence (AI) research has become a growing concern. One of the fundamental reasons is a lack of transparency in data, code, and models. In this work, we aimed to systematically review radiology and nuclear medicine papers on AI in terms of transparency and open science. MATERIALS AND METHODS A systematic literature search was performed in PubMed to identify original research studies on AI. The search was restricted to studies published in Q1 and Q2 journals that are also indexed on the Web of Science. A random sample of the literature was drawn. Besides six baseline study characteristics, a total of five availability items were evaluated. Two groups of independent readers, eight readers in total, participated in the study. Inter-rater agreement was analyzed, and disagreements were resolved by consensus. RESULTS Following the eligibility criteria, we included a final set of 194 papers. Raw data were available in about one-fifth of the papers (34/194; 18%). However, authors made their private data available in only one paper (1/161; 1%). Roughly one-tenth of the papers made their pre-modeling (25/194; 13%), modeling (28/194; 14%), or post-modeling files (15/194; 8%) available. Most papers (189/194; 97%) did not attempt to create a ready-to-use system for real-world usage. Data origin, use of deep learning, and external validation had statistically significantly different distributions. The use of private data alone was negatively associated with the availability of at least one item (p<0.001). CONCLUSION Overall availability rates were poor, leaving room for substantial improvement.
Affiliation(s)
- Burak Kocak
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey.
- Aytul Hande Yardimci
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
- Sabahattin Yuzkan
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
- Ali Keles
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
- Omer Altun
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
- Elif Bulut
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
- Osman Nuri Bayrak
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
- Ahmet Arda Okumus
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey

33
Armato SG, Drukker K, Hadjiiski L. AI in medical imaging grand challenges: translation from competition to research benefit and patient care. Br J Radiol 2023; 96:20221152. [PMID: 37698542 PMCID: PMC10546459 DOI: 10.1259/bjr.20221152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 05/24/2023] [Accepted: 07/11/2023] [Indexed: 09/13/2023] Open
Abstract
Artificial intelligence (AI), in one form or another, has been a part of medical imaging for decades. The recent evolution of AI into approaches such as deep learning has dramatically accelerated the application of AI across a wide range of radiologic settings. Despite the promises of AI, developers and users of AI technology must be fully aware of its potential biases and pitfalls, and this knowledge must be incorporated throughout the AI system development pipeline of training, validation, and testing. Grand challenges offer an opportunity to advance the development of AI methods for targeted applications and provide a mechanism for both directing and facilitating the development of AI systems. In the process, a grand challenge centralizes with the challenge organizers the burden of providing a valid benchmark test set to assess the performance and generalizability of participants' models, as well as the collection and curation of image metadata, clinical/demographic information, and the required reference standard. The most relevant grand challenges are those designed to maximize the open-science nature of the competition, with code and trained models deposited for future public access. The ultimate goal of AI grand challenges is to foster the translation of AI systems from competition to research benefit and patient care. Rather than catalogue the many medical imaging grand challenges that have been organized by groups such as MICCAI, RSNA, AAPM, and grand-challenge.org, this review assesses the role of grand challenges in promoting AI technologies for research advancement and for eventual clinical implementation, including their promises and limitations.
Affiliation(s)
- Samuel G Armato
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Karen Drukker
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA

34
Nurmaini S, Sapitri AI, Tutuko B, Rachmatullah MN, Rini DP, Darmawahyuni A, Firdaus F, Mandala S, Nova R, Bernolian N. Automatic echocardiographic anomalies interpretation using a stacked residual-dense network model. BMC Bioinformatics 2023; 24:365. [PMID: 37759158 PMCID: PMC10536702 DOI: 10.1186/s12859-023-05493-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 09/21/2023] [Indexed: 09/29/2023] Open
Abstract
Echocardiographic interpretation during the prenatal or postnatal period is important for diagnosing cardiac septal abnormalities. However, manual interpretation can be time-consuming and subject to human error. Automatic segmentation of echocardiograms can support cardiologists in making an initial interpretation, but such a process does not always provide straightforward information for a complete interpretation. Segmentation only identifies the region of a cardiac septal abnormality, whereas a complete interpretation must also account for the position of the defect. In this study, we propose a stacked residual-dense network model that segments the entire cardiac region and classifies defect positions to generate an automatic echocardiographic interpretation. The model is designed to generalize across two modalities: prenatal and postnatal echocardiography. To further evaluate its effectiveness, its performance was verified by five cardiologists. We developed a pipeline using 1345 echocardiograms as training data and 181 echocardiograms as unseen data from prospective patients, acquired during standard clinical practice at Mohammad Hoesin General Hospital in Indonesia. The proposed model achieved 58.17% intersection over union (IoU), 75.75% dice similarity coefficient (DSC), and 76.36% mean average precision (mAP) on the validation data. On the unseen data, it achieved 42.39% IoU, 55.72% DSC, and 51.04% mAP. Further, classification of defect positions on the unseen data reached approximately 92.27% accuracy, 94.33% specificity, and 92.05% sensitivity. Finally, the proposed model was validated against human experts, with varying kappa values. On average, these results hold promise for clinical practice as a supporting diagnostic tool for establishing the diagnosis.
Affiliation(s)
- Siti Nurmaini
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang, 30139, Indonesia.
- Ade Iriani Sapitri
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang, 30139, Indonesia
- Doctoral Program, Faculty of Engineering, Universitas Sriwijaya, Palembang, Indonesia
- Bambang Tutuko
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang, 30139, Indonesia
- Muhammad Naufal Rachmatullah
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang, 30139, Indonesia
- Dian Palupi Rini
- Department of Informatic Engineering, Faculty of Computer Science, Universitas Sriwijaya, Palembang, Indonesia
- Annisa Darmawahyuni
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang, 30139, Indonesia
- Firdaus Firdaus
- Intelligent System Research Group, Faculty of Computer Science, Universitas Sriwijaya, Palembang, 30139, Indonesia
- Satria Mandala
- Human Centric Engineering, School of Computing, Telkom University, Bandung, Indonesia
- Ria Nova
- Division of Pediatric Cardiology, Department of Child Health, Mohammad Hoesin General Hospital, Palembang, Indonesia
- Nuswil Bernolian
- Division of Fetomaternal, Department of Obstetrics and Gynaecology, Mohammad Hoesin General Hospital, Palembang, Indonesia

35
Kuwabara M, Ikawa F, Sakamoto S, Okazaki T, Ishii D, Hosogai M, Maeda Y, Chiku M, Kitamura N, Choppin A, Takamiya D, Shimahara Y, Nakayama T, Kurisu K, Horie N. Effectiveness of tuning an artificial intelligence algorithm for cerebral aneurysm diagnosis: a study of 10,000 consecutive cases. Sci Rep 2023; 13:16202. [PMID: 37758849 PMCID: PMC10533861 DOI: 10.1038/s41598-023-43418-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Accepted: 09/23/2023] [Indexed: 09/29/2023] Open
Abstract
Diagnostic image analysis for unruptured cerebral aneurysms using artificial intelligence has very high sensitivity. However, further improvement is needed because of a relatively high number of false positives. This study aimed to confirm the clinical utility of tuning an artificial intelligence algorithm for cerebral aneurysm diagnosis. We extracted 10,000 magnetic resonance imaging scans of participants who underwent brain screening using the "Brain Dock" system. The sensitivity and false positives per case for aneurysm detection were compared before and after tuning the algorithm. The initial diagnosis included only cases for which feedback to the algorithm was provided. In the primary analysis, the sensitivity of aneurysm diagnosis decreased from 96.5% to 90.0% while the false positives per case improved from 2.06 to 0.99 after tuning the algorithm (P < 0.001). In the secondary analysis, the sensitivity decreased from 98.8% to 94.6% while the false positives per case improved from 1.99 to 1.03 after tuning (P < 0.001). False positives per case were thus reduced without a significant decrease in sensitivity. Using large clinical datasets, we demonstrated that tuning the algorithm can significantly reduce false positives with a minimal decline in sensitivity.
Affiliation(s)
- Masashi Kuwabara
- Department of Neurosurgery, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Hiroshima, 734-8551, Japan
- Fusao Ikawa
- Department of Neurosurgery, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Hiroshima, 734-8551, Japan.
- Department of Neurosurgery, Shimane Prefectural Central Hospital, 4-1-1 Himebara, Izumo, Shimane, 693-8555, Japan.
- Shigeyuki Sakamoto
- Department of Neurosurgery, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Hiroshima, 734-8551, Japan
- Takahito Okazaki
- Department of Neurosurgery, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Hiroshima, 734-8551, Japan
- Daizo Ishii
- Department of Neurosurgery, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Hiroshima, 734-8551, Japan
- Masahiro Hosogai
- Department of Neurosurgery, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Hiroshima, 734-8551, Japan
- Yuyo Maeda
- Department of Neurosurgery, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Hiroshima, 734-8551, Japan
- Masaaki Chiku
- Department of Neurosurgery, Medical Check Studio, Tokyo Ginza Clinic, 1-2-4 Ginza, Chuo-ku, Tokyo, 104-0061, Japan
- Naoyuki Kitamura
- Department of Diagnostic Radiology, Kasumi Clinic, 1-2-27 Shinonomehommachi, Minami-ku, Hiroshima, Hiroshima, 734-0023, Japan
- Antoine Choppin
- LPIXEL Inc., 1-6-1 Otemachi, Chiyoda-ku, Tokyo, 100-0004, Japan
- Yuki Shimahara
- LPIXEL Inc., 1-6-1 Otemachi, Chiyoda-ku, Tokyo, 100-0004, Japan
- Takeo Nakayama
- Department of Health Informatics, School of Public Health, Graduate School of Medicine, Kyoto University, Yoshida-Konoe, Sakyo-ku, Kyoto, Kyoto, 606-8501, Japan
- Kaoru Kurisu
- Chugoku Rosai Hospital, 1-5-1 Hirotagaya, Kure, Hiroshima, 737-0193, Japan
- Nobutaka Horie
- Department of Neurosurgery, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, Hiroshima, 734-8551, Japan

36
Reeve K, On BI, Havla J, Burns J, Gosteli-Peter MA, Alabsawi A, Alayash Z, Götschi A, Seibold H, Mansmann U, Held U. Prognostic models for predicting clinical disease progression, worsening and activity in people with multiple sclerosis. Cochrane Database Syst Rev 2023; 9:CD013606. [PMID: 37681561 PMCID: PMC10486189 DOI: 10.1002/14651858.cd013606.pub2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 09/09/2023]
Abstract
BACKGROUND Multiple sclerosis (MS) is a chronic inflammatory disease of the central nervous system that affects millions of people worldwide. The disease course varies greatly across individuals and many disease-modifying treatments with different safety and efficacy profiles have been developed recently. Prognostic models evaluated and shown to be valid in different settings have the potential to support people with MS and their physicians during the decision-making process for treatment or disease/life management, allow stratified and more precise interpretation of interventional trials, and provide insights into disease mechanisms. Many researchers have turned to prognostic models to help predict clinical outcomes in people with MS; however, to our knowledge, no widely accepted prognostic model for MS is being used in clinical practice yet. OBJECTIVES To identify and summarise multivariable prognostic models, and their validation studies for quantifying the risk of clinical disease progression, worsening, and activity in adults with MS. SEARCH METHODS We searched MEDLINE, Embase, and the Cochrane Database of Systematic Reviews from January 1996 until July 2021. We also screened the reference lists of included studies and relevant reviews, and references citing the included studies. SELECTION CRITERIA We included all statistically developed multivariable prognostic models aiming to predict clinical disease progression, worsening, and activity, as measured by disability, relapse, conversion to definite MS, conversion to progressive MS, or a composite of these in adult individuals with MS. We also included any studies evaluating the performance of (i.e. validating) these models. There were no restrictions based on language, data source, timing of prognostication, or timing of outcome. 
DATA COLLECTION AND ANALYSIS Pairs of review authors independently screened titles/abstracts and full texts, extracted data using a piloted form based on the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS), assessed risk of bias using the Prediction Model Risk Of Bias Assessment Tool (PROBAST), and assessed reporting deficiencies based on the checklist items in Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD). The characteristics of the included models and their validations are described narratively. We planned to meta-analyse the discrimination and calibration of models with at least three external validations outside the model development study but no model met this criterion. We summarised between-study heterogeneity narratively but again could not perform the planned meta-regression. MAIN RESULTS We included 57 studies, from which we identified 75 model developments, 15 external validations corresponding to only 12 (16%) of the models, and six author-reported validations. Only two models were externally validated multiple times. None of the identified external validations were performed by researchers independent of those that developed the model. The outcome was related to disease progression in 39 (41%), relapses in 8 (8%), conversion to definite MS in 17 (18%), and conversion to progressive MS in 27 (28%) of the 96 models or validations. The disease and treatment-related characteristics of included participants, and definitions of considered predictors and outcome, were highly heterogeneous amongst the studies. Based on the publication year, we observed an increase in the percent of participants on treatment, diversification of the diagnostic criteria used, an increase in consideration of biomarkers or treatment as predictors, and increased use of machine learning methods over time. 
Usability and reproducibility: All identified models contained at least one predictor requiring the skills of a medical specialist for measurement or assessment. Most of the models (44; 59%) contained predictors that require specialist equipment likely to be absent from primary care or standard hospital settings. Over half (52%) of the developed models were not accompanied by model coefficients, tools, or instructions, which hinders their application, independent validation, or reproduction. The data used in model developments were made publicly available or reported to be available on request in only a few studies (two and six, respectively).
Risk of bias: We rated all but one of the model developments or validations as having high overall risk of bias. The main reason for this was the statistical methods used for the development or evaluation of prognostic models; we rated all but two of the included model developments or validations as having high risk of bias in the analysis domain. None of the externally validated model developments, nor their external validations, had low risk of bias. There were concerns related to applicability of the models to our research question in over one-third (38%) of the models or their validations.
Reporting deficiencies: Reporting was poor overall and there was no observable increase in the quality of reporting over time. The items that were unclearly reported or not reported at all for most of the included models or validations related to sample size justification, blinding of outcome assessors, details of the full model or how to obtain predictions from it, amount of missing data, and treatments received by the participants. Reporting of the preferred model performance measures of discrimination and calibration was suboptimal.
AUTHORS' CONCLUSIONS The current evidence is not sufficient for recommending the use of any of the published prognostic prediction models for people with MS in clinical routine today due to lack of independent external validations. The MS prognostic research community should adhere to the current reporting and methodological guidelines and conduct many more state-of-the-art external validation studies for the existing or newly developed models.
Affiliation(s)
- Kelly Reeve
- Epidemiology, Biostatistics and Prevention Institute, University of Zürich, Zurich, Switzerland
- Begum Irmak On
- Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-Universität München, Munich, Germany
- Joachim Havla
- Institute of Clinical Neuroimmunology, Ludwig-Maximilians-Universität München, Munich, Germany
- Jacob Burns
- Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-Universität München, Munich, Germany
- Pettenkofer School of Public Health, Munich, Germany
- Albraa Alabsawi
- Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-Universität München, Munich, Germany
- Zoheir Alayash
- Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-Universität München, Munich, Germany
- Institute of Health Services Research in Dentistry, University of Münster, Muenster, Germany
- Andrea Götschi
- Epidemiology, Biostatistics and Prevention Institute, University of Zürich, Zurich, Switzerland
- Ulrich Mansmann
- Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-Universität München, Munich, Germany
- Pettenkofer School of Public Health, Munich, Germany
- Ulrike Held
- Epidemiology, Biostatistics and Prevention Institute, University of Zürich, Zurich, Switzerland

37
Mitsuyama Y, Matsumoto T, Tatekawa H, Walston SL, Kimura T, Yamamoto A, Watanabe T, Miki Y, Ueda D. Chest radiography as a biomarker of ageing: artificial intelligence-based, multi-institutional model development and validation in Japan. THE LANCET. HEALTHY LONGEVITY 2023; 4:e478-e486. [PMID: 37597530 DOI: 10.1016/s2666-7568(23)00133-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2023] [Revised: 07/04/2023] [Accepted: 07/05/2023] [Indexed: 08/21/2023] Open
Abstract
BACKGROUND Chest radiographs are widely available and cost-effective; however, their usefulness as a biomarker of ageing using multi-institutional data remains underexplored. The aim of this study was to develop a biomarker of ageing from chest radiography and examine the correlation between the biomarker and diseases. METHODS In this retrospective, multi-institutional study, we trained, tuned, and externally tested an artificial intelligence (AI) model to estimate the age of healthy individuals using chest radiographs as a biomarker. For the biomarker modelling phase of the study, we used healthy chest radiographs consecutively collected between May 22, 2008, and Dec 28, 2021, from three institutions in Japan. Data from two institutions were used for training, tuning, and internal testing, and data from the third institution were used for external testing. To evaluate the performance of the AI model in estimating ages, we calculated the correlation coefficient, mean square error, root mean square error, and mean absolute error. The correlation investigation phase of the study included chest radiographs from individuals with a known disease that were consecutively collected between Jan 1, 2018, and Dec 31, 2021, from an additional two institutions in Japan. We investigated the odds ratios (ORs) for various diseases given the difference between the AI-estimated age and chronological age (ie, the difference-age). FINDINGS We included 101 296 chest radiographs from 70 248 participants across five institutions. In the biomarker modelling phase, the external test dataset from 3467 healthy participants included 8046 radiographs. Between the AI-estimated age and chronological age, the correlation coefficient was 0·95 (99% CI 0·95-0·95), the mean square error was 15·0 years (99% CI 14·0-15·0), the root mean square error was 3·8 years (99% CI 3·8-3·9), and the mean absolute error was 3·0 years (99% CI 3·0-3·1). 
In the correlation investigation phase, the external test datasets from 34 197 participants with a known disease included 34 197 radiographs. The ORs for difference-age were as follows: 1·04 (99% CI 1·04-1·05) for hypertension; 1·02 (1·01-1·03) for hyperuricaemia; 1·05 (1·03-1·06) for chronic obstructive pulmonary disease; 1·08 (1·06-1·09) for interstitial lung disease; 1·05 (1·03-1·06) for chronic renal failure; 1·04 (1·03-1·06) for atrial fibrillation; 1·03 (1·02-1·04) for osteoporosis; and 1·05 (1·03-1·06) for liver cirrhosis. INTERPRETATION The AI-estimated age using chest radiographs showed a strong correlation with chronological age in the healthy cohorts. Furthermore, in cohorts of individuals with known diseases, the difference between estimated age and chronological age correlated with various chronic diseases. The use of this biomarker might pave the way for enhanced risk stratification methodologies, individualised therapeutic interventions, and innovative early diagnostic and preventive approaches towards age-associated pathologies. FUNDING None. TRANSLATION For the Japanese translation of the abstract see Supplementary Materials section.
Affiliation(s)
- Yasuhito Mitsuyama
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
- Hiroyuki Tatekawa
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Tatsuo Kimura
- Department of Premier Preventive Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Toshio Watanabe
- Department of Premier Preventive Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan.

38
Murad M, Tamimi F. Artificial intelligence: is it more accurate than endodontists in root canal therapy? Evid Based Dent 2023; 24:106-107. [PMID: 37221364 DOI: 10.1038/s41432-023-00901-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 04/06/2023] [Indexed: 05/25/2023]
Abstract
DATA SOURCES The following databases were searched electronically (up to 20 March 2022): PubMed, Scopus, Google Scholar, and the Cochrane Library. This was followed by hand-searching the reference lists of the included articles. The search was restricted to articles published in English. The aim of this study was to evaluate the effectiveness of artificial intelligence in identifying, analyzing, and interpreting radiographic features related to endodontic therapy. STUDY SELECTION Selection was limited to trials evaluating the effectiveness of artificial intelligence in identifying, analyzing, and interpreting radiographic features related to endodontic therapy. TYPES OF STUDIES Clinical, ex-vivo, and in-vitro trials. TYPES OF RADIOGRAPHIC IMAGES Two-dimensional intra-oral imaging (bitewings and/or periapicals), panoramic radiographs (PRs), and cone beam computed tomography (CBCT). EXCLUSION CRITERIA 1) Case reports, letters, and commentaries; 2) Reviews, conferences, and books; 3) Inaccessible reports. DATA EXTRACTION AND SYNTHESIS Titles and abstracts of the search results were screened by two authors against the inclusion criteria. The full text of any potentially relevant article was retrieved for more comprehensive assessment. Risk of bias was assessed initially by two examiners and then by two authors. Any discrepancies were resolved through discussion and consensus. RESULTS Of the 1131 articles identified in the initial search, 30 were considered relevant and 24 were eventually included; the six excluded articles lacked appropriate clinical or radiological data. Meta-analysis was not performed due to high heterogeneity. Various degrees of bias were detected in more than 58% of the included studies.
CONCLUSIONS Although most of the included studies were biased, the authors concluded that artificial intelligence can be an effective alternative in identifying, analyzing, and interpreting radiographic features related to root canal therapy.
Affiliation(s)
- Mohammed Murad
- Clinical MSc in Endodontics, University of Manchester, Manchester, UK.
- Clinical MSc in Prosthetic Dentistry, University of Bristol, Bristol, UK.
- Division of Clinical Dentistry, The Primary Health Care Corporation, P.O. Box: 26555, Doha, Qatar.
- Faleh Tamimi
- College of Dental Medicine, QU Health, Qatar University, Doha, Qatar

39
Park SH, Sul AR, Ko Y, Jang HY, Lee JG. Radiologist's Guide to Evaluating Publications of Clinical Research on AI: How We Do It. Radiology 2023; 308:e230288. [PMID: 37750772 DOI: 10.1148/radiol.230288]
Abstract
Literacy in research studies of artificial intelligence (AI) has become an important skill for radiologists. It is required to make a proper assessment of the validity, reproducibility, and clinical applicability of AI studies. However, AI studies are generally perceived to be more difficult for clinician readers to evaluate than traditional clinical research studies. This special report-as an effective, concise guide for readers-aims to assist clinical radiologists in critically evaluating different types of clinical research articles involving AI. It does not intend to be a comprehensive checklist or methodological summary for complete clinical evaluation of AI or a reporting guideline. Ten key items for readers to check are described, regarding study purpose, function and clinical context of AI, training data, data preprocessing, AI modeling techniques, test data, AI performance, helpfulness and value of AI, interpretability of AI, and code sharing. The important aspects of each item are explained for readers to consider when reading publications on AI clinical research. Evaluating each item can help radiologists assess the validity, reproducibility, and clinical applicability of clinical research articles involving AI.
Affiliation(s)
- Seong Ho Park, Ah-Ram Sul, Yousun Ko, Hye Young Jang, June-Goo Lee
- From the Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, South Korea (S.H.P., Y.K., H.Y.J.); Division of Healthcare Research Outcomes Research, National Evidence-based Healthcare Collaborating Agency, Seoul, South Korea (A.R.S.); and Biomedical Engineering Research Center, Asan Institute for Life Sciences, University of Ulsan College of Medicine, Seoul, South Korea (J.G.L.)

40
Altukroni A, Alsaeedi A, Gonzalez-Losada C, Lee JH, Alabudh M, Mirah M, El-Amri S, Ezz El-Deen O. Detection of the pathological exposure of pulp using an artificial intelligence tool: a multicentric study over periapical radiographs. BMC Oral Health 2023; 23:553. [PMID: 37563659 PMCID: PMC10416487 DOI: 10.1186/s12903-023-03251-0]
Abstract
BACKGROUND Introducing artificial intelligence (AI) into the medical field has proved beneficial in automating tasks and streamlining practitioners' workflows. This study was conducted to design and evaluate an AI tool, the Make Sure Caries Detector and Classifier (MSc), for detecting pathological exposure of the pulp on digital periapical radiographs and to compare its performance with that of dentists. METHODS This was a diagnostic, multicentric study with 3461 digital periapical radiographs from three countries and seven centers. MSc was built using the YOLOv5-x model and was used to detect exposed and unexposed pulp. The dataset was split into training, validation, and test sets at an 8:1:1 ratio to prevent overfitting; 345 images with 752 labels were randomly allocated to test MSc. The performance metrics used to test MSc included mean average precision (mAP), precision, recall, F1 score, and area under the receiver operating characteristic curve (AUC). The metrics used to compare its performance with that of 10 certified dentists were: right diagnosis exposed (RDE), right diagnosis not exposed (RDNE), false diagnosis exposed (FDE), false diagnosis not exposed (FDNE), missed diagnosis (MD), and overdiagnosis (OD). RESULTS MSc achieved a performance of more than 90% on all metrics examined: an average precision of 0.928, recall of 0.918, F1 score of 0.922, and AUC of 0.956 (P < .05). The results showed a higher mean of 1.94 for all right (correct) diagnosis parameters in the MSc group, and a higher mean of 0.64 for all wrong diagnosis parameters in the dentists group (P < .05). CONCLUSIONS The MSc tool proved reliable in detecting and differentiating between exposed and unexposed pulp in the internally validated model. It also showed better performance in detecting exposed and unexposed pulp than the consensus of the 10 dentists.
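For readers less familiar with the evaluation vocabulary used in this abstract, the sketch below shows how an 8:1:1 split and the precision/recall/F1 detection metrics are typically computed. The function names and example counts are illustrative only, not taken from the MSc study.

```python
import random

def split_dataset(items, seed=42):
    """Shuffle and split a list of items into train/validation/test
    sets at the 8:1:1 ratio described in the abstract."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_train = int(len(items) * 0.8)
    n_val = int(len(items) * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 score from true-positive,
    false-positive, and false-negative detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

With 3461 radiographs, this split yields roughly 2768/346/347 images; per-image detections would then be tallied into the counts that feed `detection_metrics`.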
Affiliation(s)
- A Alsaeedi
- Department of Computer Science, College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- C Gonzalez-Losada
- School of Dentistry, Complutense University of Madrid, Madrid, Spain
- J H Lee
- Department of Periodontology, College of Dentistry and Institute of Oral Bioscience, Jeonbuk National University, Jeonju, Korea
- M Alabudh
- Ministry of Health, Medina, Saudi Arabia
- M Mirah
- Department of Dental Materials, Taibah University, Medina, Saudi Arabia

41
de Andrade JMC, Olescki G, Escuissato DL, Oliveira LF, Basso ACN, Salvador GL. Pixel-level annotated dataset of computed tomography angiography images of acute pulmonary embolism. Sci Data 2023; 10:518. [PMID: 37542053 PMCID: PMC10403591 DOI: 10.1038/s41597-023-02374-x]
Abstract
Pulmonary embolism (PE) has a high incidence and mortality, especially if undiagnosed. The examination of choice for diagnosing the disease is computed tomography pulmonary angiography. Because many factors can lead to misinterpretations and diagnostic errors, different groups are using deep learning methods to help improve this process. The diagnostic accuracy of these methods tends to increase as the training dataset is augmented, and deep learning methods can potentially benefit from images acquired with devices from different vendors. To the best of our knowledge, we have developed the first public dataset annotated at both the pixel and image levels, and the first pixel-level annotated dataset to contain examinations performed with equipment from both Toshiba and GE. The dataset includes 40 examinations, half performed with each piece of equipment, representing samples from two medical services. We also included measurements related to the cardiac and circulatory consequences of pulmonary embolism. We encourage the use of this dataset to develop, evaluate, and compare the performance of new AI algorithms designed to diagnose PE.
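Models trained on pixel-level annotations such as these are usually scored with overlap metrics. A minimal sketch, assuming boolean NumPy masks (this is generic evaluation code, not the dataset authors' tooling):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice coefficient and intersection-over-union between a
    predicted segmentation mask and a ground-truth mask."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: define perfect agreement
        return 1.0, 1.0
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return float(dice), float(iou)
```

For a PE dataset, `gt` would be one annotated embolus mask per CT slice and `pred` the corresponding model output after thresholding.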
Affiliation(s)
- Gabriel Olescki
- Department of Informatics, Federal University of Paraná, Curitiba, Brazil
- Dante Luiz Escuissato
- Department of Radiology and Image Diagnosis, Hospital de Clínicas, Federal University of Paraná, Curitiba, Brazil
- Ana Carolina Nicolleti Basso
- Department of Radiology and Image Diagnosis, Hospital de Clínicas, Federal University of Paraná, Curitiba, Brazil
- Gabriel Lucca Salvador
- Department of Radiology and Image Diagnosis, Hospital de Clínicas, Federal University of Paraná, Curitiba, Brazil

42
Hathaway QA, Hogg JP, Lakhani DA. Need for Medical Student Education in Emerging Technologies and Artificial Intelligence: Fostering Enthusiasm, Rather Than Flight, From Specialties Most Affected by Emerging Technologies. Acad Radiol 2023; 30:1770-1771. [PMID: 36464546 DOI: 10.1016/j.acra.2022.11.018]
Affiliation(s)
- Quincy A Hathaway
- School of Medicine, West Virginia University, 1 Medical Center Drive, Morgantown, WV, USA
- Jeffery P Hogg
- School of Medicine, West Virginia University, 1 Medical Center Drive, Morgantown, WV, USA; Department of Radiology, West Virginia University, 1 Medical Center Drive, Morgantown, WV, USA
- Dhairya A Lakhani
- Department of Radiology, West Virginia University, 1 Medical Center Drive, Morgantown, WV, USA.

43
Ueda D, Matsumoto T, Ehara S, Yamamoto A, Walston SL, Ito A, Shimono T, Shiba M, Takeshita T, Fukuda D, Miki Y. Artificial intelligence-based model to classify cardiac functions from chest radiographs: a multi-institutional, retrospective model development and validation study. Lancet Digit Health 2023:S2589-7500(23)00107-3. [PMID: 37422342 DOI: 10.1016/s2589-7500(23)00107-3]
Abstract
BACKGROUND Chest radiography is a common and widely available examination. Although cardiovascular structures-such as cardiac shadows and vessels-are visible on chest radiographs, the ability of these radiographs to estimate cardiac function and valvular disease is poorly understood. Using datasets from multiple institutions, we aimed to develop and validate a deep-learning model to simultaneously detect valvular disease and cardiac functions from chest radiographs. METHODS In this model development and validation study, we trained, validated, and externally tested a deep learning-based model to classify left ventricular ejection fraction, tricuspid regurgitant velocity, mitral regurgitation, aortic stenosis, aortic regurgitation, mitral stenosis, tricuspid regurgitation, pulmonary regurgitation, and inferior vena cava dilation from chest radiographs. The chest radiographs and associated echocardiograms were collected from four institutions between April 1, 2013, and Dec 31, 2021: we used data from three sites (Osaka Metropolitan University Hospital, Osaka, Japan; Habikino Medical Center, Habikino, Japan; and Morimoto Hospital, Osaka, Japan) for training, validation, and internal testing, and data from one site (Kashiwara Municipal Hospital, Kashiwara, Japan) for external testing. We evaluated the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. FINDINGS We included 22 551 radiographs associated with 22 551 echocardiograms obtained from 16 946 patients. The external test dataset featured 3311 radiographs from 2617 patients with a mean age of 72 years [SD 15], of whom 49·8% were male and 50·2% were female. 
The AUCs, accuracy, sensitivity, and specificity for this dataset were 0·92 (95% CI 0·90-0·95), 86% (85-87), 82% (75-87), and 86% (85-88) for classifying the left ventricular ejection fraction at a 40% cutoff, 0·85 (0·83-0·87), 75% (73-76), 83% (80-87), and 73% (71-75) for classifying the tricuspid regurgitant velocity at a 2·8 m/s cutoff, 0·89 (0·86-0·92), 85% (84-86), 82% (76-87), and 85% (84-86) for classifying mitral regurgitation at the none-mild versus moderate-severe cutoff, 0·83 (0·78-0·88), 73% (71-74), 79% (69-87), and 72% (71-74) for classifying aortic stenosis, 0·83 (0·79-0·87), 68% (67-70), 88% (81-92), and 67% (66-69) for classifying aortic regurgitation, 0·86 (0·67-1·00), 90% (89-91), 83% (36-100), and 90% (89-91) for classifying mitral stenosis, 0·92 (0·89-0·94), 83% (82-85), 87% (83-91), and 83% (82-84) for classifying tricuspid regurgitation, 0·86 (0·82-0·90), 69% (68-71), 91% (84-95), and 68% (67-70) for classifying pulmonary regurgitation, and 0·85 (0·81-0·89), 86% (85-88), 73% (65-81), and 87% (86-88) for classifying inferior vena cava dilation. INTERPRETATION The deep learning-based model can accurately classify cardiac functions and valvular heart diseases using information from digital chest radiographs. This model can classify values typically obtained from echocardiography in a fraction of the time, with low system requirements and the potential to be continuously available in areas where echocardiography specialists are scarce or absent. FUNDING None.
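The AUC, sensitivity, and specificity figures above come from dichotomizing a continuous model output at a clinical cutoff (e.g., ejection fraction 40%). A small illustration of how such numbers are obtained, using a rank-based AUC and hypothetical scores rather than the study's data:

```python
import numpy as np

def auc_sens_spec(scores, labels, threshold):
    """Rank-based (Mann-Whitney) AUC plus sensitivity and specificity
    at a fixed cutoff. Labels use 1 for disease present."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Probability that a random positive outranks a random negative
    auc = np.mean([(p > neg).mean() + 0.5 * (p == neg).mean() for p in pos])
    preds = scores >= threshold
    sens = preds[labels == 1].mean()      # true-positive rate
    spec = (~preds)[labels == 0].mean()   # true-negative rate
    return float(auc), float(sens), float(spec)
```

In the study's setting, `scores` would be the model's per-radiograph probability for a given condition and `labels` the echocardiography-derived ground truth.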
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan.
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
- Shoichi Ehara
- Department of Intensive Care Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Asahiro Ito
- Department of Cardiovascular Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Taro Shimono
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Masatsugu Shiba
- Department of Biofunctional Analysis, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Smart Life Science Lab, Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
- Tohru Takeshita
- Department of Radiology, Osaka Habikino Medical Center, Habikino, Japan
- Daiju Fukuda
- Department of Cardiovascular Medicine, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan

44
Calimano-Ramirez LF, Virarkar MK, Hernandez M, Ozdemir S, Kumar S, Gopireddy DR, Lall C, Balaji KC, Mete M, Gumus KZ. MRI-based nomograms and radiomics in presurgical prediction of extraprostatic extension in prostate cancer: a systematic review. Abdom Radiol (NY) 2023; 48:2379-2400. [PMID: 37142824 DOI: 10.1007/s00261-023-03924-y]
Abstract
PURPOSE Prediction of extraprostatic extension (EPE) is essential for accurate surgical planning in prostate cancer (PCa). Radiomics based on magnetic resonance imaging (MRI) has shown potential to predict EPE. We aimed to evaluate studies proposing MRI-based nomograms and radiomics for EPE prediction and to assess the quality of the current radiomics literature. METHODS We searched the PubMed, EMBASE, and SCOPUS databases for related articles using synonyms for MRI radiomics and nomograms to predict EPE. Two co-authors scored the quality of the radiomics literature using the Radiomics Quality Score (RQS). Inter-rater agreement was measured using the intraclass correlation coefficient (ICC) computed from total RQS scores. We analyzed the characteristics of the studies and used ANOVAs to associate the area under the curve (AUC) with sample size, clinical and imaging variables, and RQS scores. RESULTS We identified 33 studies: 22 nomograms and 11 radiomics analyses. The mean AUC for nomogram articles was 0.783, and no significant associations were found between AUC and sample size, clinical variables, or number of imaging variables. For radiomics articles, there was a significant association between the number of lesions and AUC (p < 0.013). The average total RQS was 15.91/36 (44%). Within the radiomics workflow, region-of-interest segmentation, feature selection, and model building produced the broadest range of results. The items the studies most often lacked were phantom tests for scanner variability, assessment of temporal variability, external validation datasets, prospective designs, cost-effectiveness analyses, and open science. CONCLUSION Utilizing MRI-based radiomics to predict EPE in PCa patients demonstrates promising outcomes. However, quality improvement and standardization of the radiomics workflow are needed.
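The inter-rater agreement step described above can be reproduced with a short function. This implements one common form, ICC(2,1) (two-way random effects, single rater, absolute agreement), as an illustration; the review does not state which ICC variant was used, so treat the choice as an assumption.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) for an n_subjects x n_raters matrix of scores
    (e.g., total RQS per study from two raters)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                  # between-subjects mean square
    msc = ss_cols / (k - 1)                  # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Identical ratings give an ICC of 1.0; a constant offset between the two raters lowers it, because ICC(2,1) penalizes absolute disagreement.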
Affiliation(s)
- Luis F Calimano-Ramirez, Mayur K Virarkar, Mauricio Hernandez, Savas Ozdemir, Sindhu Kumar, Dheeraj R Gopireddy, Chandana Lall, Kazim Z Gumus
- Department of Radiology, University of Florida College of Medicine Jacksonville, Jacksonville, FL, 32209, USA
- K C Balaji
- Department of Urology, University of Florida College of Medicine, Jacksonville, FL, 32209, USA
- Mutlu Mete
- Department of Computer Science and Information System, Texas A&M University-Commerce, Commerce, TX, 75428, USA

45
Kazmierski M, Welch M, Kim S, McIntosh C, Rey-McIntyre K, Huang SH, Patel T, Tadic T, Milosevic M, Liu FF, Ryczkowski A, Kazmierska J, Ye Z, Plana D, Aerts HJ, Kann BH, Bratman SV, Hope AJ, Haibe-Kains B. Multi-institutional Prognostic Modeling in Head and Neck Cancer: Evaluating Impact and Generalizability of Deep Learning and Radiomics. Cancer Res Commun 2023; 3:1140-1151. [PMID: 37397861 PMCID: PMC10309070 DOI: 10.1158/2767-9764.crc-22-0152]
Abstract
Artificial intelligence (AI) and machine learning (ML) are becoming critical in developing and deploying personalized medicine and targeted clinical trials. Recent advances in ML have enabled the integration of wider ranges of data including both medical records and imaging (radiomics). However, the development of prognostic models is complex as no modeling strategy is universally superior to others and validation of developed models requires large and diverse datasets to demonstrate that prognostic models developed (regardless of method) from one dataset are applicable to other datasets both internally and externally. Using a retrospective dataset of 2,552 patients from a single institution and a strict evaluation framework that included external validation on three external patient cohorts (873 patients), we crowdsourced the development of ML models to predict overall survival in head and neck cancer (HNC) using electronic medical records (EMR) and pretreatment radiological images. To assess the relative contributions of radiomics in predicting HNC prognosis, we compared 12 different models using imaging and/or EMR data. The model with the highest accuracy used multitask learning on clinical data and tumor volume, achieving high prognostic accuracy for 2-year and lifetime survival prediction, outperforming models relying on clinical data only, engineered radiomics, or complex deep neural network architecture. However, when we attempted to extend the best performing models from this large training dataset to other institutions, we observed significant reductions in the performance of the model in those datasets, highlighting the importance of detailed population-based reporting for AI/ML model utility and stronger validation frameworks. 
We have developed highly prognostic models for overall survival in HNC using EMRs and pretreatment radiological images, based on a large, retrospective dataset of 2,552 patients from our institution. Diverse ML approaches were used by independent investigators. The model with the highest accuracy used multitask learning on clinical data and tumor volume. External validation of the top three performing models on three datasets (873 patients) with significant differences in the distributions of clinical and demographic variables demonstrated significant decreases in model performance. SIGNIFICANCE ML combined with simple prognostic factors outperformed multiple advanced CT radiomics and deep learning methods. ML models provided diverse solutions for prognosis of patients with HNC, but their prognostic value is affected by differences in patient populations, and the models require extensive validation.
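The generalizability failure described above, a model tuned on one institution losing accuracy under a site-specific distribution shift, can be demonstrated on synthetic data. Everything below (the single feature, the shift magnitude, the threshold model) is invented for illustration and has no connection to the study's actual cohorts.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cohort(n, shift):
    """Synthetic one-feature cohort: feature ~ N(label + shift, 0.5).
    `shift` mimics a site-specific difference (e.g., scanner calibration)."""
    labels = rng.integers(0, 2, n)
    feats = labels + shift + rng.normal(0.0, 0.5, n)
    return feats, labels

def fit_threshold(feats, labels):
    """Pick the cutoff that maximizes training accuracy (a toy model)."""
    cands = np.sort(feats)
    accs = [((feats >= t) == labels).mean() for t in cands]
    return cands[int(np.argmax(accs))]

def accuracy(feats, labels, t):
    return float(((feats >= t) == labels).mean())

xa, ya = make_cohort(2000, shift=0.0)  # development institution
xb, yb = make_cohort(873, shift=0.8)   # external institution, shifted
t = fit_threshold(xa, ya)
acc_internal = accuracy(xa, ya, t)
acc_external = accuracy(xb, yb, t)
```

The threshold learned on the development cohort sits in the wrong place for the shifted cohort, so `acc_external` drops well below `acc_internal`, mirroring the external-validation decreases the authors report.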
Affiliation(s)
- Michal Kazmierski
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Mattea Welch
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- TECHNA Institute, Toronto, Ontario, Canada
- Sejin Kim
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Chris McIntosh
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- TECHNA Institute, Toronto, Ontario, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Katrina Rey-McIntyre
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Shao Hui Huang
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Tirth Patel
- TECHNA Institute, Toronto, Ontario, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Tony Tadic
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Michael Milosevic
- TECHNA Institute, Toronto, Ontario, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Fei-Fei Liu
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Adam Ryczkowski
- Department of Medical Physics, Greater Poland Cancer Centre, Poznan, Poland
- Department of Electroradiology, University of Medical Sciences, Poznan, Poland
- Joanna Kazmierska
- Department of Electroradiology, University of Medical Sciences, Poznan, Poland
- Department of Radiotherapy II, Greater Poland Cancer Centre, Poznan, Poland
- Zezhong Ye
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, Massachusetts
- Department of Radiation Oncology, Dana-Farber Cancer Institute / Brigham and Women's Hospital, Boston, Massachusetts
- Deborah Plana
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, Massachusetts
- Department of Radiation Oncology, Dana-Farber Cancer Institute / Brigham and Women's Hospital, Boston, Massachusetts
- Hugo J.W.L. Aerts
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, Massachusetts
- Department of Radiation Oncology, Dana-Farber Cancer Institute / Brigham and Women's Hospital, Boston, Massachusetts
- Radiology and Nuclear Medicine, CARIM and GROW, Maastricht University, Maastricht, the Netherlands
- Benjamin H. Kann
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, Massachusetts
- Department of Radiation Oncology, Dana-Farber Cancer Institute / Brigham and Women's Hospital, Boston, Massachusetts
- Scott V. Bratman
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Andrew J. Hope
- Radiation Medicine Program, Princess Margaret Cancer Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Ontario, Canada
- Benjamin Haibe-Kains
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, Toronto, Ontario, Canada

46
Agrawal A, Khatri GD, Khurana B, Sodickson AD, Liang Y, Dreizin D. A survey of ASER members on artificial intelligence in emergency radiology: trends, perceptions, and expectations. Emerg Radiol 2023; 30:267-277. [PMID: 36913061 PMCID: PMC10362990 DOI: 10.1007/s10140-023-02121-0]
Abstract
PURPOSE There is a growing body of diagnostic performance studies for emergency radiology-related artificial intelligence/machine learning (AI/ML) tools; however, little is known about user preferences, concerns, experiences, expectations, and the degree of penetration of AI tools in emergency radiology. Our aim is to conduct a survey of the current trends, perceptions, and expectations regarding AI among American Society of Emergency Radiology (ASER) members. METHODS An anonymous and voluntary online survey questionnaire was e-mailed to all ASER members, followed by two reminder e-mails. A descriptive analysis of the data was conducted, and results summarized. RESULTS A total of 113 members responded (response rate 12%). The majority were attending radiologists (90%) with greater than 10 years' experience (80%) and from an academic practice (65%). Most (55%) reported use of commercial AI CAD tools in their practice. Workflow prioritization based on pathology detection, injury or disease severity grading and classification, quantitative visualization, and auto-population of structured reports were identified as high-value tasks. Respondents overwhelmingly indicated a need for explainable and verifiable tools (87%) and the need for transparency in the development process (80%). Most respondents did not feel that AI would reduce the need for emergency radiologists in the next two decades (72%) or diminish interest in fellowship programs (58%). Negative perceptions pertained to potential for automation bias (23%), over-diagnosis (16%), poor generalizability (15%), negative impact on training (11%), and impediments to workflow (10%). CONCLUSION ASER member respondents are in general optimistic about the impact of AI in the practice of emergency radiology and its impact on the popularity of emergency radiology as a subspecialty. The majority expect to see transparent and explainable AI models with the radiologist as the decision-maker.
Affiliation(s)
- Anjali Agrawal
- New Delhi operations, Teleradiology Solutions, Delhi, India
- Garvit D Khatri
- Nuclear Medicine, Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
- Bharti Khurana
- Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Aaron D Sodickson
- Emergency Radiology, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Yuanyuan Liang
- Epidemiology & Public Health, University of Maryland School of Medicine, Baltimore, MD, USA
- David Dreizin
- Trauma and Emergency Radiology, Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, MD, USA.

47
Kalra S, Wen J, Cresswell JC, Volkovs M, Tizhoosh HR. Decentralized federated learning through proxy model sharing. Nat Commun 2023; 14:2899. [PMID: 37217476 DOI: 10.1038/s41467-023-38569-4]
Abstract
Institutions in highly regulated domains such as finance and healthcare often have restrictive rules around data sharing. Federated learning is a distributed learning framework that enables multi-institutional collaboration on decentralized data with improved protection for each collaborator's data privacy. In this paper, we propose a communication-efficient scheme for decentralized federated learning called ProxyFL, or proxy-based federated learning. Each participant in ProxyFL maintains two models: a private model and a publicly shared proxy model designed to protect the participant's privacy. Proxy models allow efficient information exchange among participants without the need for a centralized server. The proposed method eliminates a significant limitation of canonical federated learning by allowing model heterogeneity: each participant can have a private model with any architecture. Furthermore, our protocol for communication by proxy leads to stronger privacy guarantees, as shown by differential privacy analysis. Experiments on popular image datasets, and on a cancer diagnostic problem using high-quality gigapixel histology whole-slide images, show that ProxyFL can outperform existing alternatives with much less communication overhead and stronger privacy.
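As a rough illustration of two ingredients this abstract mentions, peer-to-peer exchange of proxy models and differential-privacy-style protection of what is shared, here is a NumPy sketch. This is a simplified toy, not the authors' ProxyFL protocol; the clipping bound, noise scale, and mixing rule are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def privatize(update, clip_norm=1.0, sigma=0.1):
    """Gaussian-mechanism-style step: bound the update's L2 norm,
    then add noise scaled to that bound before the proxy is shared."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def exchange_round(proxies, clip_norm=1.0, sigma=0.1):
    """One decentralized round: every participant broadcasts a privatized
    proxy, then mixes the peer average into its own proxy. No central
    server holds any state."""
    shared = [privatize(p, clip_norm, sigma) for p in proxies]
    mean_proxy = np.mean(shared, axis=0)
    return [0.5 * (p + mean_proxy) for p in proxies]
```

Because only the noised, norm-bounded proxies ever leave a participant, each private model (which may have any architecture) stays local, which is the heterogeneity point the abstract makes.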
Affiliation(s)
- Shivam Kalra
- Layer 6 AI, Toronto, ON, Canada
- Kimia Lab, University of Waterloo, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
- Junfeng Wen
- Carleton University, School of Computer Science, Ottawa, ON, Canada
- H R Tizhoosh
- Kimia Lab, University of Waterloo, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
- Rhazes Lab, Dept. of AI & Informatics, Mayo Clinic, Rochester, MN, USA
48
Krzywicki T, Brona P, Zbrzezny AM, Grzybowski AE. A Global Review of Publicly Available Datasets Containing Fundus Images: Characteristics, Barriers to Access, Usability, and Generalizability. J Clin Med 2023; 12:3587. [PMID: 37240693] [DOI: 10.3390/jcm12103587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/06/2023] [Revised: 04/29/2023] [Accepted: 05/17/2023] [Indexed: 05/28/2023]
Abstract
This article provides a comprehensive and up-to-date overview of the repositories that contain color fundus images. We analyzed their availability and legality, presented each dataset's characteristics, and identified labeled and unlabeled image sets. The aim of this study was to compile all publicly available color fundus image datasets into a central catalog.
Affiliation(s)
- Tomasz Krzywicki
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Piotr Brona
- Department of Ophthalmology, Poznan City Hospital, 61-285 Poznań, Poland
- Agnieszka M Zbrzezny
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
- Andrzej E Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznań, Poland
49
Khosravi P, Schweitzer M. Artificial intelligence in neuroradiology: a scoping review of some ethical challenges. Front Radiol 2023; 3:1149461. [PMID: 37492387] [PMCID: PMC10365008] [DOI: 10.3389/fradi.2023.1149461] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Received: 01/22/2023] [Accepted: 04/27/2023] [Indexed: 07/27/2023]
Abstract
Artificial intelligence (AI) has great potential to increase accuracy and efficiency in many aspects of neuroradiology. It provides substantial opportunities for insights into brain pathophysiology, for developing models to guide treatment decisions, and for improving current prognostication as well as diagnostic algorithms. Concurrently, the autonomous use of AI models introduces ethical challenges regarding the scope of informed consent, risks associated with data privacy and protection, potential database biases, and questions of responsibility and liability that might arise. In this manuscript, we first provide a brief overview of AI methods used in neuroradiology and then segue into key methodological and ethical challenges. Specifically, we discuss the ethical principles affected by AI approaches to human neuroscience and the provisions that might be imposed in this domain to ensure that the benefits of AI frameworks remain aligned with ethics in research and healthcare in the future.
Affiliation(s)
- Pegah Khosravi
- Department of Biological Sciences, New York City College of Technology, CUNY, New York City, NY, United States
- Mark Schweitzer
- Office of the Vice President for Health Affairs, Wayne State University, Detroit, MI, United States
50
Yamada A, Kamagata K, Hirata K, Ito R, Nakaura T, Ueda D, Fujita S, Fushimi Y, Fujima N, Matsui Y, Tatsugami F, Nozaki T, Fujioka T, Yanagawa M, Tsuboyama T, Kawamura M, Naganawa S. Clinical applications of artificial intelligence in liver imaging. Radiol Med 2023. [PMID: 37165151] [DOI: 10.1007/s11547-023-01638-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/30/2023] [Accepted: 04/21/2023] [Indexed: 05/12/2023]
Abstract
This review outlines the current status and challenges of the clinical applications of artificial intelligence in liver imaging using computed tomography or magnetic resonance imaging, based on a topic analysis of PubMed search results using latent Dirichlet allocation (LDA). LDA revealed that "segmentation," "hepatocellular carcinoma and radiomics," "metastasis," "fibrosis," and "reconstruction" were the current main topic keywords. Automatic liver segmentation using deep learning is beginning to assume new clinical significance as part of whole-body composition analysis. It has also been applied to the screening of large populations and the acquisition of training data for machine learning models, and has led to imaging biomarkers that bear on important clinical issues, such as the estimation of liver fibrosis and the recurrence and prognosis of malignant tumors. Deep learning reconstruction is expanding as a new clinical application of artificial intelligence and has shown results in reducing contrast and radiation doses. However, much evidence is still missing, such as external validation of machine learning models and evaluation of diagnostic performance for specific diseases using deep learning reconstruction, suggesting that the clinical application of these technologies is still in development.
Affiliation(s)
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-Ku, Tokyo, Japan
- Kenji Hirata
- Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-Ku, Kumamoto, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Abeno-Ku, Osaka, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyoku, Kyoto, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-Ku, Okayama, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Minami-Ku, Hiroshima City, Hiroshima, Japan
- Taiki Nozaki
- Department of Radiology, St. Luke's International Hospital, Tokyo, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita-City, Osaka, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita-City, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan