1
Grossman S, Zerilli T, Nathan JP. Appropriateness of ChatGPT as a resource for medication-related questions. Br J Clin Pharmacol 2024. [PMID: 39096130] [DOI: 10.1111/bcp.16212]
Abstract
Given ChatGPT's increasing popularity, healthcare professionals and patients may use it to obtain medication-related information. This study was conducted to assess ChatGPT's ability to provide satisfactory responses (i.e., direct, accurate, complete, and relevant) to medication-related questions posed to an academic drug information service. ChatGPT responses were compared to responses generated by the investigators through the use of traditional resources, and references were evaluated. Thirty-nine questions were entered into ChatGPT; the three most common categories were therapeutics (8; 21%), compounding/formulation (6; 15%) and dosage (5; 13%). Ten (26%) questions were answered satisfactorily by ChatGPT. Of the 29 (74%) questions that were not answered satisfactorily, deficiencies included lack of a direct response (11; 38%), lack of accuracy (11; 38%) and/or lack of completeness (12; 41%). References were included with eight (29%) responses, all of which included fabricated references. Presently, healthcare professionals and consumers should be cautioned against using ChatGPT for medication-related information.
Affiliation(s)
- Sara Grossman
- LIU Pharmacy, Arnold & Marie Schwartz College of Pharmacy and Health Sciences, Brooklyn, New York, USA
- Tina Zerilli
- LIU Pharmacy, Arnold & Marie Schwartz College of Pharmacy and Health Sciences, Brooklyn, New York, USA
- Joseph P Nathan
- LIU Pharmacy, Arnold & Marie Schwartz College of Pharmacy and Health Sciences, Brooklyn, New York, USA
2
Hoppe JM, Auer MK, Strüven A, Massberg S, Stremmel C. ChatGPT With GPT-4 Outperforms Emergency Department Physicians in Diagnostic Accuracy: Retrospective Analysis. J Med Internet Res 2024; 26:e56110. [PMID: 38976865] [PMCID: PMC11263899] [DOI: 10.2196/56110]
Abstract
BACKGROUND OpenAI's ChatGPT is a pioneering artificial intelligence (AI) in the field of natural language processing, and it holds significant potential in medicine for providing treatment advice. Additionally, recent studies have demonstrated promising results using ChatGPT for emergency medicine triage. However, its diagnostic accuracy in the emergency department (ED) has not yet been evaluated. OBJECTIVE This study compares the diagnostic accuracy of ChatGPT with GPT-3.5 and GPT-4 and primary treating resident physicians in an ED setting. METHODS Among 100 adults admitted to our ED in January 2023 with internal medicine issues, the diagnostic accuracy was assessed by comparing the diagnoses made by ED resident physicians and those made by ChatGPT with GPT-3.5 or GPT-4 against the final hospital discharge diagnosis, using a point system for grading accuracy. RESULTS The study enrolled 100 patients with a median age of 72 (IQR 58.5-82.0) years who were admitted to our internal medicine ED primarily for cardiovascular, endocrine, gastrointestinal, or infectious diseases. GPT-4 outperformed both GPT-3.5 (P<.001) and ED resident physicians (P=.01) in diagnostic accuracy for internal medicine emergencies. Furthermore, across various disease subgroups, GPT-4 consistently outperformed GPT-3.5 and resident physicians. It demonstrated significant superiority in cardiovascular (GPT-4 vs ED physicians: P=.03) and endocrine or gastrointestinal diseases (GPT-4 vs GPT-3.5: P=.01). However, in other categories, the differences were not statistically significant. CONCLUSIONS In this study, which compared the diagnostic accuracy of GPT-3.5, GPT-4, and ED resident physicians against a discharge diagnosis gold standard, GPT-4 outperformed both the resident physicians and its predecessor, GPT-3.5. Despite the retrospective design of the study and its limited sample size, the results underscore the potential of AI as a supportive diagnostic tool in ED settings.
Affiliation(s)
- Matthias K Auer
- Department of Medicine IV, LMU University Hospital, Munich, Germany
- Anna Strüven
- Department of Medicine I, LMU University Hospital, Munich, Germany
- Munich Heart Alliance Partner Site, Deutsches Zentrum für Herz-Kreislaufforschung (German Centre for Cardiovascular Research), LMU University Hospital, Munich, Germany
- Steffen Massberg
- Department of Medicine I, LMU University Hospital, Munich, Germany
- Munich Heart Alliance Partner Site, Deutsches Zentrum für Herz-Kreislaufforschung (German Centre for Cardiovascular Research), LMU University Hospital, Munich, Germany
- Christopher Stremmel
- Department of Medicine I, LMU University Hospital, Munich, Germany
- Munich Heart Alliance Partner Site, Deutsches Zentrum für Herz-Kreislaufforschung (German Centre for Cardiovascular Research), LMU University Hospital, Munich, Germany
3
Kassab J, Hadi El Hajjar A, Wardrop RM, Brateanu A. Accuracy of Online Artificial Intelligence Models in Primary Care Settings. Am J Prev Med 2024; 66:1054-1059. [PMID: 38354991] [DOI: 10.1016/j.amepre.2024.02.006]
Abstract
INTRODUCTION The importance of preventive medicine and primary care in the sphere of public health is expanding, yet a gap exists in the utilization of recommended medical services. As patients increasingly turn to online resources for supplementary advice, the role of artificial intelligence (AI) in providing accurate and reliable information has emerged. The present study aimed to assess ChatGPT-4's and Google Bard's capacity to deliver accurate recommendations in preventive medicine and primary care. METHODS Fifty-six questions were formulated and presented to ChatGPT-4 in June 2023 and Google Bard in October 2023, and the responses were independently reviewed by two physicians, with each answer classified as "accurate," "inaccurate," or "accurate with missing information." Disagreements were resolved by a third physician. RESULTS Initial inter-reviewer agreement on grading was substantial (Cohen's kappa 0.76, 95% CI 0.61-0.90 for ChatGPT-4 and 0.89, 95% CI 0.79-0.99 for Bard). After consensus was reached, 28.6% of ChatGPT-4-generated answers were deemed accurate, 28.6% inaccurate, and 42.8% accurate with missing information. In comparison, 53.6% of Bard-generated answers were deemed accurate, 17.8% inaccurate, and 28.6% accurate with missing information. Responses to CDC and immunization-related questions showed notable inaccuracies (80%) in both models. CONCLUSIONS ChatGPT-4 and Bard demonstrated potential in offering accurate information in preventive care. The study also highlighted the critical need for regular updates, particularly in rapidly evolving areas of medicine. A significant proportion of the AI models' responses were deemed "accurate with missing information," emphasizing the importance of viewing AI tools as complementary resources when seeking medical information.
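The inter-reviewer agreement this abstract reports (Cohen's kappa of 0.76 and 0.89) is straightforward to reproduce from two reviewers' label sequences. The sketch below computes the statistic from scratch; the label sequences are hypothetical illustrations, not the study's data.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# kappa = (observed agreement - expected chance agreement) / (1 - expected)
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items labeled identically by both raters.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grades from two reviewers using the study's three categories.
reviewer_1 = ["accurate", "inaccurate", "accurate", "missing", "accurate", "inaccurate"]
reviewer_2 = ["accurate", "inaccurate", "accurate", "accurate", "accurate", "inaccurate"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # → 0.7
```

On the commonly used Landis-Koch scale, kappa values of 0.61-0.80 indicate "substantial" and 0.81-1.00 "almost perfect" agreement, which is why values like 0.76 are typically described as substantial.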
Affiliation(s)
- Joseph Kassab
- Research Institute, Cleveland Clinic Foundation, Cleveland, Ohio
- Richard M Wardrop
- Department of Internal Medicine, Cleveland Clinic Foundation, Cleveland, Ohio
- Andrei Brateanu
- Department of Internal Medicine, Cleveland Clinic Foundation, Cleveland, Ohio
4
Igarashi Y, Nakahara K, Norii T, Miyake N, Tagami T, Yokobori S. Performance of a Large Language Model on Japanese Emergency Medicine Board Certification Examinations. J Nippon Med Sch 2024; 91:155-161. [PMID: 38432929] [DOI: 10.1272/jnms.jnms.2024_91-205]
Abstract
BACKGROUND Emergency physicians need a broad range of knowledge and skills to address critical medical, traumatic, and environmental conditions. Artificial intelligence (AI), including large language models (LLMs), has potential applications in healthcare settings; however, the performance of LLMs in emergency medicine remains unclear. METHODS To evaluate the reliability of information provided by ChatGPT, an LLM was given the questions set by the Japanese Association of Acute Medicine in its board certification examinations over a period of 5 years (2018-2022) and programmed to answer them twice. Statistical analysis was used to assess agreement of the two responses. RESULTS The LLM successfully answered 465 of the 475 text-based questions, achieving an overall correct response rate of 62.3%. For questions without images, the rate of correct answers was 65.9%. For questions with images that were not explained to the LLM, the rate of correct answers was only 52.0%. The annual rates of correct answers to questions without images ranged from 56.3% to 78.8%. Accuracy was better for scenario-based questions (69.1%) than for stand-alone questions (62.1%). Agreement between the two responses was substantial (kappa = 0.70). Factual error accounted for 82% of the incorrectly answered questions. CONCLUSION An LLM performed satisfactorily on an emergency medicine board certification examination in Japanese and without images. However, factual errors in the responses highlight the need for physician oversight when using LLMs.
Affiliation(s)
- Yutaka Igarashi
- Department of Emergency and Critical Care Medicine, Nippon Medical School
- Kyoichi Nakahara
- Department of Emergency and Critical Care Medicine, Nippon Medical School
- Tatsuya Norii
- Department of Emergency Medicine, University of New Mexico, NM, United States of America
- Nodoka Miyake
- Department of Emergency and Critical Care Medicine, Nippon Medical School
- Takashi Tagami
- Department of Emergency and Critical Care Medicine, Nippon Medical School Musashi Kosugi Hospital
- Shoji Yokobori
- Department of Emergency and Critical Care Medicine, Nippon Medical School
5
Preiksaitis C, Ashenburg N, Bunney G, Chu A, Kabeer R, Riley F, Ribeira R, Rose C. The Role of Large Language Models in Transforming Emergency Medicine: Scoping Review. JMIR Med Inform 2024; 12:e53787. [PMID: 38728687] [PMCID: PMC11127144] [DOI: 10.2196/53787]
Abstract
BACKGROUND Artificial intelligence (AI), more specifically large language models (LLMs), holds significant potential to revolutionize emergency care delivery by optimizing clinical workflows and enhancing the quality of decision-making. Although enthusiasm for integrating LLMs into emergency medicine (EM) is growing, the existing literature is characterized by a disparate collection of individual studies, conceptual analyses, and preliminary implementations. Given these complexities and gaps in understanding, a cohesive framework is needed to comprehend the existing body of knowledge on the application of LLMs in EM. OBJECTIVE Given the absence of a comprehensive framework for exploring the roles of LLMs in EM, this scoping review aims to systematically map the existing literature on LLMs' potential applications within EM and identify directions for future research. Addressing this gap will allow for informed advancements in the field. METHODS Using PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) criteria, we searched Ovid MEDLINE, Embase, Web of Science, and Google Scholar for papers published between January 2018 and August 2023 that discussed LLMs' use in EM. We excluded other forms of AI. A total of 1994 unique titles and abstracts were screened, and each full-text paper was independently reviewed by 2 authors. Data were abstracted independently, and 5 authors performed a collaborative quantitative and qualitative synthesis of the data. RESULTS A total of 43 papers were included. Studies were predominantly from 2022 to 2023 and conducted in the United States and China.
We uncovered four major themes: (1) clinical decision-making and support was highlighted as a pivotal area, with LLMs playing a substantial role in enhancing patient care, notably through their application in real-time triage, allowing early recognition of patient urgency; (2) efficiency, workflow, and information management demonstrated the capacity of LLMs to significantly boost operational efficiency, particularly through the automation of patient record synthesis, which could reduce administrative burden and enhance patient-centric care; (3) risks, ethics, and transparency were identified as areas of concern, especially regarding the reliability of LLMs' outputs, and specific studies highlighted the challenges of ensuring unbiased decision-making amidst potentially flawed training data sets, stressing the importance of thorough validation and ethical oversight; and (4) education and communication possibilities included LLMs' capacity to enrich medical training, such as through using simulated patient interactions that enhance communication skills. CONCLUSIONS LLMs have the potential to fundamentally transform EM, enhancing clinical decision-making, optimizing workflows, and improving patient outcomes. This review sets the stage for future advancements by identifying key research areas: prospective validation of LLM applications, establishing standards for responsible use, understanding provider and patient perceptions, and improving physicians' AI literacy. Effective integration of LLMs into EM will require collaborative efforts and thorough evaluation to ensure these technologies can be safely and effectively applied.
Affiliation(s)
- Carl Preiksaitis
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
- Nicholas Ashenburg
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
- Gabrielle Bunney
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
- Andrew Chu
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
- Rana Kabeer
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
- Fran Riley
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
- Ryan Ribeira
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
- Christian Rose
- Department of Emergency Medicine, Stanford University School of Medicine, Palo Alto, CA, United States
6
Yang J, Ardavanis KS, Slack KE, Fernando ND, Della Valle CJ, Hernandez NM. Chat Generative Pretrained Transformer (ChatGPT) and Bard: Artificial Intelligence Does Not Yet Provide Clinically Supported Answers for Hip and Knee Osteoarthritis. J Arthroplasty 2024; 39:1184-1190. [PMID: 38237878] [DOI: 10.1016/j.arth.2024.01.029]
Abstract
BACKGROUND Advancements in artificial intelligence (AI) have led to the creation of large language models (LLMs), such as Chat Generative Pretrained Transformer (ChatGPT) and Bard, that analyze online resources to synthesize responses to user queries. Despite their popularity, the accuracy of LLM responses to medical questions remains unknown. This study aimed to compare the responses of ChatGPT and Bard regarding treatments for hip and knee osteoarthritis with the American Academy of Orthopaedic Surgeons (AAOS) Evidence-Based Clinical Practice Guidelines (CPGs) recommendations. METHODS Both ChatGPT (OpenAI) and Bard (Google) were queried regarding 20 treatments (10 for hip and 10 for knee osteoarthritis) from the AAOS CPGs. Responses were classified by 2 reviewers as being in "Concordance," "Discordance," or "No Concordance" with AAOS CPGs. Cohen's kappa coefficient was used to assess inter-rater reliability, and chi-squared analyses were used to compare responses between LLMs. RESULTS Overall, ChatGPT and Bard provided responses that were concordant with the AAOS CPGs for 16 (80%) and 12 (60%) treatments, respectively. Notably, ChatGPT and Bard encouraged the use of non-recommended treatments in 30% and 60% of queries, respectively. There were no differences in performance when evaluating by joint or by recommended versus non-recommended treatments. Studies were referenced in 6 (30%) of the Bard responses and none (0%) of the ChatGPT responses. Of the 6 Bard responses, studies could only be identified for 1 (16.7%). Of the remaining responses, 2 (33.3%) cited studies in journals that did not exist, 2 (33.3%) cited studies that could not be found with the information given, and 1 (16.7%) provided links to unrelated studies. CONCLUSIONS Both ChatGPT and Bard do not consistently provide responses that align with the AAOS CPGs. Consequently, physicians and patients should temper expectations on the guidance AI platforms can currently provide.
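The chi-squared comparison described in this abstract's methods can be illustrated with its own counts (ChatGPT concordant on 16 of 20 treatments, Bard on 12 of 20). The following sketch computes the Pearson chi-squared statistic for that 2×2 table from scratch; it is an illustrative reconstruction, not the authors' analysis code.

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Expected cell count = (row total * column total) / grand total.
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip([a, b, c, d], expected))

# ChatGPT: 16 concordant, 4 not; Bard: 12 concordant, 8 not.
stat = chi_squared_2x2(16, 4, 12, 8)
print(round(stat, 2))  # → 1.9
```

With 1 degree of freedom the 5% critical value is about 3.84, so a statistic of roughly 1.9 on these overall counts would be consistent with the abstract's finding of no significant performance differences between the two models.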
Affiliation(s)
- JaeWon Yang
- Department of Orthopaedic Surgery, University of Washington, Seattle, Washington
- Kyle S Ardavanis
- Department of Orthopaedic Surgery, Madigan Medical Center, Tacoma, Washington
- Katherine E Slack
- Elson S. Floyd College of Medicine, Washington State University, Spokane, Washington
- Navin D Fernando
- Department of Orthopaedic Surgery, University of Washington, Seattle, Washington
- Craig J Della Valle
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, Illinois
- Nicholas M Hernandez
- Department of Orthopaedic Surgery, University of Washington, Seattle, Washington
7
Cittée T. L'IA, une innovation incontournable dans le monde de la santé ? [AI, an essential innovation in healthcare?] Soins 2024; 69:1. [PMID: 38296412] [DOI: 10.1016/j.soin.2023.12.001]
Affiliation(s)
- Teddy Cittée
- Équipe mobile plaies et cicatrisation, médecine vasculaire maladies rares, hôpital européen Georges-Pompidou, AP-HP, 20-40 rue Leblanc, 75908 Paris cedex 15, France.
8
Birkun AA, Gautam A. Large Language Model-based Chatbot as a Source of Advice on First Aid in Heart Attack. Curr Probl Cardiol 2024; 49:102048. [PMID: 37640177] [DOI: 10.1016/j.cpcardiol.2023.102048]
Abstract
The ability of cutting-edge large language model-powered chatbots to generate human-like answers to user questions could hypothetically be utilized to provide real-time first aid advice to witnesses of cardiovascular emergencies. This study aimed to evaluate the quality of chatbot responses to inquiries about help in heart attack. The study simulated interrogation of the new Bing chatbot (Microsoft Corporation, USA) with the "heart attack what to do" prompt coming from 3 countries: the Gambia, India, and the USA. The chatbot responses (20 per country) were evaluated for congruence with the International First Aid, Resuscitation, and Education Guidelines 2020 using a checklist. For all user inquiries, the chatbot provided answers containing some guidance on first aid. However, the responses commonly omitted potentially life-saving instructions, for instance to encourage the person to stop physical activity, to take antianginal medication, or to start cardiopulmonary resuscitation for an unresponsive, abnormally breathing person. The mean percentage of responses fully congruent with the checklist criteria varied from 7.3% for India to 16.8% for the USA. A quarter of the responses for the Gambia and the USA, and 45.0% for India, contained superfluous, guidelines-inconsistent directives. The chatbot's advice on help in heart attack contains omissions, inaccuracies, and misleading instructions, and the chatbot therefore cannot be recommended as a credible source of first aid information. Active research and organizational efforts are needed to mitigate the risk of uncontrolled misinformation and to establish measures guaranteeing the trustworthiness of chatbot-mediated counseling.
Affiliation(s)
- Alexei A Birkun
- Department of General Surgery, Anaesthesiology, Resuscitation and Emergency Medicine, Medical Academy named after S.I. Georgievsky of V.I. Vernadsky Crimean Federal University, Simferopol, Russian Federation
- Adhish Gautam
- Regional Government Hospital, Una, Himachal Pradesh, India
9
Birkun AA, Gautam A. Large Language Model (LLM)-Powered Chatbots Fail to Generate Guideline-Consistent Content on Resuscitation and May Provide Potentially Harmful Advice. Prehosp Disaster Med 2023; 38:757-763. [PMID: 37927093] [DOI: 10.1017/s1049023x23006568]
Abstract
INTRODUCTION Innovative large language model (LLM)-powered chatbots, which are extremely popular nowadays, represent potential sources of information on resuscitation for the general public. For instance, chatbot-generated advice could be used for community resuscitation education or for just-in-time informational support of untrained lay rescuers in a real-life emergency. STUDY OBJECTIVE This study assessed the performance of two prominent LLM-based chatbots, particularly the quality of the chatbot-generated advice on how to help a non-breathing victim. METHODS In May 2023, the new Bing (Microsoft Corporation, USA) and Bard (Google LLC, USA) chatbots were queried (n = 20 each): "What to do if someone is not breathing?" The content of the chatbots' responses was evaluated for compliance with the 2021 Resuscitation Council United Kingdom guidelines using a pre-developed checklist. RESULTS Both chatbots provided context-dependent textual responses to the query. However, coverage of the guideline-consistent instructions on helping a non-breathing victim was poor: the mean percentage of responses completely satisfying the checklist criteria was 9.5% for Bing and 11.4% for Bard (P >.05). Essential elements of bystander action, including early initiation and uninterrupted performance of chest compressions with adequate depth, rate, and chest recoil, as well as requesting and using an automated external defibrillator (AED), were missing as a rule. Moreover, 55.0% of Bard's responses contained plausible-sounding but nonsensical guidance, called artificial hallucinations, which creates a risk of inadequate care and harm to a victim. CONCLUSION The LLM-powered chatbots' advice on helping a non-breathing victim omits essential details of resuscitation technique and occasionally contains deceptive, potentially harmful directives. Further research and regulatory measures are required to mitigate the risks of chatbot-generated public misinformation on resuscitation.
Affiliation(s)
- Alexei A Birkun
- Department of General Surgery, Anaesthesiology, Resuscitation and Emergency Medicine, Medical Academy named after S.I. Georgievsky of V.I. Vernadsky Crimean Federal University, Simferopol, 295051, Russian Federation
- Adhish Gautam
- Regional Government Hospital, Una (H.P.), 174303, India
10
Luo X, Estill J, Chen Y. The use of ChatGPT in medical research: do we need a reporting guideline? Int J Surg 2023; 109:3750-3751. [PMID: 37707517] [PMCID: PMC10720843] [DOI: 10.1097/js9.0000000000000737]
Affiliation(s)
- Xufei Luo
- Evidence-Based Medicine Center
- Research Unit of Evidence-Based Evaluation and Guidelines, Chinese Academy of Medical Sciences (2021RU017)
- WHO Collaborating Centre for Guideline Implementation and Knowledge Translation
- Institute of Health Data Science, Lanzhou University, Lanzhou, People's Republic of China
- Janne Estill
- Institute of Global Health, University of Geneva, Geneva, Switzerland
- Yaolong Chen
- Evidence-Based Medicine Center
- Research Unit of Evidence-Based Evaluation and Guidelines, Chinese Academy of Medical Sciences (2021RU017)
- WHO Collaborating Centre for Guideline Implementation and Knowledge Translation
- Institute of Health Data Science, Lanzhou University, Lanzhou, People's Republic of China
11
Kassab J, Kapadia V, Massad C, Sarraju A, Ramchand J, Kapadia SR, Harb SC. Comparative Analysis of Chat-Based Artificial Intelligence Models in Addressing Common and Challenging Valvular Heart Disease Clinical Scenarios. J Am Heart Assoc 2023; 12:e031787. [PMID: 37982246] [PMCID: PMC10727287] [DOI: 10.1161/jaha.123.031787]
Affiliation(s)
- Joseph Kassab
- Heart, Vascular and Thoracic Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Vishwum Kapadia
- Heart, Vascular and Thoracic Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Christopher Massad
- Heart, Vascular and Thoracic Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Ashish Sarraju
- Heart, Vascular and Thoracic Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Jay Ramchand
- Heart, Vascular and Thoracic Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Samir R. Kapadia
- Heart, Vascular and Thoracic Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Serge C. Harb
- Heart, Vascular and Thoracic Institute, Cleveland Clinic Foundation, Cleveland, OH, USA