1. Savage N. AI's keen diagnostic eye. Nature 2024. doi:10.1038/d41586-024-01132-2. PMID: 38637706.
2. Bhayana R, Biswas S, Cook TS, Kim W, Kitamura FC, Gichoya J, Yi PH. From Bench to Bedside With Large Language Models: AJR Expert Panel Narrative Review. AJR Am J Roentgenol 2024. doi:10.2214/ajr.24.30928. PMID: 38598354.
Abstract
Large language models (LLMs) hold immense potential to revolutionize radiology. However, their integration into practice requires careful consideration. Artificial intelligence (AI) chatbots and general-purpose LLMs have potential pitfalls related to privacy, transparency, and accuracy, limiting their current clinical readiness. Thus, LLM-based tools must be optimized for radiology practice to overcome these limitations. While research and validation for radiology applications remain in their infancy, commercial products incorporating LLMs are becoming available alongside promises of transforming practice. To help radiologists navigate this landscape, this AJR Expert Panel Narrative Review provides a multidimensional perspective on LLMs, encompassing considerations from bench (development and optimization) to bedside (use in practice). At present, LLMs are not autonomous entities that can replace expert decision-making, and radiologists remain responsible for the content of their reports. Patient-facing tools, particularly medical AI chatbots, require additional guardrails to ensure safety and prevent misuse. Still, if responsibly implemented, LLMs are well-positioned to transform efficiency and quality in radiology. Radiologists must be well-informed and proactively involved in guiding the implementation of LLMs in practice to mitigate risks and maximize benefits to patient care.
Affiliation(s)
- Rajesh Bhayana: University Medical Imaging Toronto, Joint Department of Medical Imaging, University Health Network, University of Toronto, Toronto, ON, Canada
- Som Biswas: Department of Radiology, Le Bonheur Children's Hospital, University of Tennessee Health Science Center, Memphis, TN, USA
- Tessa S Cook: Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Woojin Kim: Department of Radiology, Palo Alto VA Medical Center, Palo Alto, CA, USA
- Felipe C Kitamura: Department of Diagnostic Imaging, Universidade Federal de São Paulo, São Paulo, Brazil; Dasa, São Paulo, Brazil
- Judy Gichoya: Department of Radiology, Emory University School of Medicine, Georgia, USA
- Paul H Yi: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
3. Lin J, Yang J, Yin M, Tang Y, Chen L, Xu C, Zhu S, Gao J, Liu L, Liu X, Gu C, Huang Z, Wei Y, Zhu J. Development and Validation of Multimodal Models to Predict the 30-Day Mortality of ICU Patients Based on Clinical Parameters and Chest X-Rays. J Imaging Inform Med 2024. doi:10.1007/s10278-024-01066-1. PMID: 38448758.
Abstract
We aimed to develop and validate multimodal ICU patient prognosis models that combine clinical parameter data and chest X-ray (CXR) images. A total of 3798 subjects with clinical parameters and CXR images were extracted from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database and an external hospital (the test set). The primary outcome was 30-day mortality after ICU admission. Automated machine learning (AutoML) and convolutional neural networks (CNNs) were used to construct single-modal models based on clinical parameters and CXR images, respectively. An early fusion approach was used to integrate both modalities into a multimodal model named PrismICU. In the validation set, PrismICU (AUC = 0.95, F1-score = 0.95) outperformed the single-modal models, i.e., the clinical parameter model (AUC = 0.80, F1-score = 0.43) and the CXR model (AUC = 0.76, F1-score = 0.45), as well as the APACHE II scoring system (AUC = 0.83, F1-score = 0.77), in predicting 30-day mortality. In the test set, PrismICU (AUC = 0.82, F1-score = 0.61) was also better than the clinical parameter model (AUC = 0.72, F1-score = 0.50), the CXR model (AUC = 0.71, F1-score = 0.36), and APACHE II (AUC = 0.62, F1-score = 0.50). PrismICU, which integrates clinical parameter data and CXR images, performed better than the single-modal models and the existing scoring system, supporting the potential of multimodal models based on structured data and imaging in clinical management.
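The early-fusion approach described in the abstract can be sketched in a few lines: the clinical-parameter vector and an image embedding are concatenated into one joint vector per patient before a single prediction head sees them. Everything below is a toy stand-in (random values, assumed feature counts of 12 and 64, an untrained linear head), not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two modalities: a clinical-parameter vector per patient
# and a CXR embedding such as a CNN would produce. The feature counts and
# values are illustrative, not taken from the paper.
n_patients = 8
clinical = rng.normal(size=(n_patients, 12))
cxr_embedding = rng.normal(size=(n_patients, 64))

# Early fusion: concatenate the modalities into one joint feature vector
# per patient, then train a single prediction head on the fused features.
fused = np.concatenate([clinical, cxr_embedding], axis=1)

# A toy linear head on the fused representation (weights untrained here).
w = rng.normal(size=fused.shape[1])
risk = 1.0 / (1.0 + np.exp(-(fused @ w)))  # sigmoid -> per-patient risk score

print(fused.shape)  # one 76-dimensional joint vector per patient
print(risk.shape)
```

The design point of early fusion is that the classifier can learn interactions between modalities (e.g. a lab value modifying the meaning of an imaging feature), which separate single-modal heads cannot.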
Affiliation(s)
- Jiaxi Lin: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Jin Yang: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou 215006, Jiangsu, China
- Minyue Yin: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Yuxiu Tang: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, Suzhou, China
- Liquan Chen: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, Suzhou, China
- Chang Xu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Shiqi Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Jingwen Gao: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Lu Liu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Xiaolin Liu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
- Chenqi Gu: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Zhou Huang: Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Yao Wei: Department of Critical Care Medicine, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jinzhou Zhu: Department of Gastroenterology, The First Affiliated Hospital of Soochow University, Suzhou, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, China
4. Tayebi Arasteh S, Misera L, Kather JN, Truhn D, Nebelung S. Enhancing diagnostic deep learning via self-supervised pretraining on large-scale, unlabeled non-medical images. Eur Radiol Exp 2024;8:10. doi:10.1186/s41747-023-00411-3. PMID: 38326501. PMCID: PMC10850044. Open access.
Abstract
BACKGROUND Pretraining on labeled datasets, such as ImageNet, has become a technical standard in advanced medical image analysis. However, the emergence of self-supervised learning (SSL), which leverages unlabeled data to learn robust features, presents an opportunity to bypass the intensive labeling process. In this study, we explored whether SSL pretraining on non-medical images can be applied to chest radiographs and how it compares to supervised pretraining on non-medical images and on medical images. METHODS We utilized a vision transformer and initialized its weights based on the following: (i) SSL pretraining on non-medical images (DINOv2), (ii) supervised learning (SL) pretraining on non-medical images (ImageNet dataset), and (iii) SL pretraining on chest radiographs from the MIMIC-CXR database, the largest labeled public dataset of chest radiographs to date. We tested our approach on over 800,000 chest radiographs from six large global datasets, diagnosing more than 20 different imaging findings. Performance was quantified using the area under the receiver operating characteristic curve and evaluated for statistical significance using bootstrapping. RESULTS SSL pretraining on non-medical images not only outperformed ImageNet-based pretraining (p < 0.001 for all datasets) but, in certain cases, also exceeded SL on the MIMIC-CXR dataset. Our findings suggest that selecting the right pretraining strategy, especially with SSL, can be pivotal for improving the diagnostic accuracy of artificial intelligence in medical imaging. CONCLUSIONS By demonstrating the promise of SSL in chest radiograph analysis, we underline a transformative shift towards more efficient and accurate AI models in medical imaging. RELEVANCE STATEMENT Self-supervised learning highlights a paradigm shift towards enhanced AI-driven accuracy and efficiency in medical imaging. Given its promise, the broader application of self-supervised learning in medical imaging calls for deeper exploration, particularly in contexts where comprehensive annotated datasets are limited.
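The evaluation recipe in the abstract (AUROC per pretraining strategy, significance assessed by bootstrapping) can be sketched on toy data. The rank-based AUROC formula and the paired bootstrap below are standard techniques; the labels and scores are simulated, not the study's data.

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the rank (Mann-Whitney) formulation: the fraction of
    positive/negative pairs that the scores order correctly."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    return (pos[:, None] > neg[None, :]).mean()

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)               # simulated binary labels
score_a = y + rng.normal(scale=1.0, size=200)  # "pretraining strategy A"
score_b = y + rng.normal(scale=1.5, size=200)  # noisier "strategy B"

# Paired bootstrap: resample cases with replacement and compare AUROCs on
# each resample; the fraction of resamples where A is not better than B
# gives a one-sided p-value for the observed difference.
diffs = []
for _ in range(300):
    idx = rng.integers(0, len(y), size=len(y))
    if y[idx].min() == y[idx].max():
        continue  # a valid AUROC needs both classes in the resample
    diffs.append(auroc(y[idx], score_a[idx]) - auroc(y[idx], score_b[idx]))
p_one_sided = float(np.mean(np.asarray(diffs) <= 0))
print(p_one_sided)
```

Pairing the resamples (same patient indices for both models) is what makes the comparison sensitive: per-case difficulty cancels out of the difference.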
Affiliation(s)
- Soroosh Tayebi Arasteh: Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Leo Misera: Institute and Polyclinic for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Carl Gustav Carus Dresden, Technische Universität Dresden, Dresden, Germany; Else Kröner Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany
- Jakob Nikolas Kather: Else Kröner Fresenius Center for Digital Health, Technische Universität Dresden, Dresden, Germany; Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Daniel Truhn: Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
- Sven Nebelung: Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Aachen, Germany
5. Takahashi K, Usuzaki T, Inamori R. Transformer Unlocks the Gateway to Advanced Research: Predicting Diseases on Chest Radiographs Using Multimodal Data. Radiology 2024;310:e232760. doi:10.1148/radiol.232760. PMID: 38349242.
Affiliation(s)
- Kengo Takahashi: Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8573, Japan
- Takuma Usuzaki: Department of Diagnostic Radiology, Tohoku University Hospital, Miyagi, Japan
- Ryusei Inamori: Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8573, Japan
6. Park SH. Noteworthy Developments in the Korean Journal of Radiology in 2023 and for 2024. Korean J Radiol 2024;25:1-5. doi:10.3348/kjr.2023.1172. PMID: 38184762. PMCID: PMC10788598. Open access.
Affiliation(s)
- Seong Ho Park: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
7. Bhayana R. Chatbots and Large Language Models in Radiology: A Practical Primer for Clinical and Research Applications. Radiology 2024;310:e232756. doi:10.1148/radiol.232756. PMID: 38226883.
Abstract
Although chatbots have existed for decades, the emergence of transformer-based large language models (LLMs) has captivated the world through the most recent wave of artificial intelligence chatbots, including ChatGPT. Transformers are a type of neural network architecture that enables better contextual understanding of language and efficient training on massive amounts of unlabeled data, such as unstructured text from the internet. As LLMs have increased in size, their improved performance and emergent abilities have revolutionized natural language processing. Since language is integral to human thought, applications based on LLMs have transformative potential in many industries. In fact, LLM-based chatbots have demonstrated human-level performance on many professional benchmarks, including in radiology. LLMs offer numerous clinical and research applications in radiology, several of which have been explored in the literature with encouraging results. Multimodal LLMs can simultaneously interpret text and images to generate reports, closely mimicking current diagnostic pathways in radiology. Thus, from requisition to report, LLMs have the opportunity to positively impact nearly every step of the radiology journey. Yet, these impressive models are not without limitations. This article reviews the limitations of LLMs and mitigation strategies, as well as potential uses of LLMs, including multimodal models. Also reviewed are existing LLM-based applications that can enhance efficiency in supervised settings.
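The "contextual understanding" this primer attributes to transformers is carried by self-attention. A minimal numpy sketch of the core operation follows, with the learned query/key/value projection matrices of a real transformer omitted for brevity; the sequence length and embedding size are toy values.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention (projections omitted): each token's
    output is a context-weighted average of all token vectors, with weights
    derived from pairwise similarity between tokens."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                  # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over tokens
    return weights @ x

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))  # a toy 4-token sequence of 8-dim vectors
contextualized = self_attention(tokens)
print(contextualized.shape)  # same shape: one context-aware vector per token
```

Because every token attends to every other token in one step, the representation of each word is conditioned on the whole sequence, which is what enables the contextual language understanding described above.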
Affiliation(s)
- Rajesh Bhayana: University Medical Imaging Toronto, Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital, and Women's College Hospital, University of Toronto, Toronto General Hospital, 200 Elizabeth St, Peter Munk Bldg, 1st Fl, Toronto, ON, Canada M5G 2C4
8. Gefter WB, Prokop M, Seo JB, Raoof S, Langlotz CP, Hatabu H. Human-AI Symbiosis: A Path Forward to Improve Chest Radiography and the Role of Radiologists in Patient Care. Radiology 2024;310:e232778. doi:10.1148/radiol.232778. PMID: 38259206. PMCID: PMC10831473.
Affiliation(s)
- Warren B. Gefter: Department of Radiology, Penn Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Mathias Prokop: Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Joon Beom Seo: Department of Radiology, Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Suhail Raoof: Department of Medicine and Radiology, Zucker School of Medicine, Hofstra/Northwell and Lung Institute, Lenox Hill Hospital, New York, NY, USA
- Curtis P. Langlotz: Department of Radiology and Biomedical Informatics and Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Palo Alto, CA, USA
- Hiroto Hatabu: Center for Pulmonary Functional Imaging, Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis St, Boston, MA 02215, USA
9. Truhn D, Weber CD, Braun BJ, Bressem K, Kather JN, Kuhl C, Nebelung S. A pilot study on the efficacy of GPT-4 in providing orthopedic treatment recommendations from MRI reports. Sci Rep 2023;13:20159. doi:10.1038/s41598-023-47500-2. PMID: 37978240. PMCID: PMC10656559. Open access.
Abstract
Large language models (LLMs) have shown potential in various applications, including clinical practice. However, their accuracy and utility in providing treatment recommendations for orthopedic conditions remain to be investigated. Thus, this pilot study aims to evaluate the validity of treatment recommendations generated by GPT-4 for common knee and shoulder orthopedic conditions using anonymized clinical MRI reports. A retrospective analysis was conducted using 20 anonymized clinical MRI reports, with varying severity and complexity. Treatment recommendations were elicited from GPT-4 and evaluated by two board-certified specialty-trained senior orthopedic surgeons. Their evaluation focused on semiquantitative gradings of accuracy and clinical utility and potential limitations of the LLM-generated recommendations. GPT-4 provided treatment recommendations for 20 patients (mean age, 50 years ± 19 [standard deviation]; 12 men) with acute and chronic knee and shoulder conditions. The LLM produced largely accurate and clinically useful recommendations. However, limited awareness of a patient's overall situation, a tendency to incorrectly appreciate treatment urgency, and largely schematic and unspecific treatment recommendations were observed and may reduce its clinical usefulness. In conclusion, LLM-based treatment recommendations are largely adequate and not prone to 'hallucinations', yet inadequate in particular situations. Critical guidance by healthcare professionals is obligatory, and independent use by patients is discouraged, given the dependency on precise data input.
Grants
- ODELIA, 101057091: European Union's Horizon Europe programme
- COMFORT, 101079894: European Union's Horizon Europe programme
- TR 1700/7-1: Deutsche Forschungsgemeinschaft
- NE 2136/3-1: Deutsche Forschungsgemeinschaft
- DEEP LIVER, ZMVI1-2520DAT111: Bundesministerium für Gesundheit
- #70113864: Max-Eder-Programme of the German Cancer Aid
- PEARL, 01KD2104C: German Federal Ministry of Education and Research
- CAMINO, 01EO2101: German Federal Ministry of Education and Research
- SWAG, 01KD2215A: German Federal Ministry of Education and Research
- TRANSFORM LIVER, 031L0312A: German Federal Ministry of Education and Research
- TANGERINE, 01KT2302 (through ERA-NET Transcan): German Federal Ministry of Education and Research
- SECAI, 57616814: Deutscher Akademischer Austauschdienst
- Transplant.KI, 01VSF21048: German Federal Joint Committee
- GENIAL, 101096312: European Union's Horizon Europe and innovation programme
- NIHR, NIHR213331: National Institute for Health and Care Research
- RWTH Aachen University (3131)
Affiliation(s)
- Daniel Truhn: Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Pauwels Street 30, 52074 Aachen, Germany
- Christian D Weber: Department of Orthopaedics and Trauma Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Benedikt J Braun: University Hospital Tuebingen on behalf of the Eberhard-Karls-University Tuebingen, BG Hospital, Schnarrenbergstr. 95, Tübingen, Germany
- Keno Bressem: Department of Radiology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany
- Jakob N Kather: Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany; Department of Medicine I, University Hospital Dresden, Dresden, Germany; Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Christiane Kuhl: Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Pauwels Street 30, 52074 Aachen, Germany
- Sven Nebelung: Department of Diagnostic and Interventional Radiology, University Hospital RWTH Aachen, Pauwels Street 30, 52074 Aachen, Germany
10. Kitamura FC, Topol EJ. The Initial Steps of Multimodal AI in Radiology. Radiology 2023;309:e232372. doi:10.1148/radiol.232372. PMID: 37787677. PMCID: PMC10623182.
Affiliation(s)
- Felipe C. Kitamura: Department of Applied Innovation and AI, Dasa, Av Das Nações Unidas, 7815 Pinheiros, São Paulo, SP 05425-070, Brazil; Department of Diagnostic Imaging, Universidade Federal de São Paulo, Unifesp, São Paulo, Brazil
- Eric J. Topol: Scripps Research, La Jolla, CA, USA