1. Duan S, Liu C, Rong T, Zhao Y, Liu B. Integrating AI in medical education: a comprehensive study of medical students' attitudes, concerns, and behavioral intentions. BMC Medical Education 2025;25:599. PMID: 40269824; PMCID: PMC12020173; DOI: 10.1186/s12909-025-07177-9.
Abstract
BACKGROUND To analyze medical students' perceptions, trust, and attitudes toward artificial intelligence (AI) in medical education, and explore their willingness to integrate AI in learning and teaching practices. METHODS This cross-sectional study was performed with undergraduate and postgraduate medical students from two medical universities in Beijing. Data were collected between October and early November 2024 via a self-designed questionnaire that covered seven main domains: Awareness of AI, Expectations and concerns about AI, Importance of AI in education, Potential challenges and risks of AI in education and learning, The role and potential of AI in education, Perceptions of generative AI, and Behavioral intentions and plans for AI use in medical education. RESULTS A total of 586 students participated in the survey, and 553 valid responses were collected, giving an effective response rate of 94.4%. The majority of participants reported familiarity with AI concepts, whereas only 43.5% had an understanding of AI applications specific to medical education. Postgraduate students exhibited significantly higher levels of awareness of AI tools in medical contexts compared with undergraduate students (p < 0.001). Gender differences were also observed, with male students showing more enthusiasm and higher engagement with AI technologies than female students (p < 0.001). Female students expressed greater concerns regarding privacy, data security, and potential ethical issues related to AI in medical education than male students (p < 0.05). Male students and postgraduate students showed stronger behavioral intentions to integrate AI tools into their future learning and teaching practices. CONCLUSIONS Medical students exhibit optimistic yet cautious attitudes toward the application of AI in medical education. They acknowledge the potential of AI to enhance educational efficiency but remain mindful of the associated privacy and ethical risks. Strengthening AI education and training and balancing technological advancements with ethical considerations will be crucial in facilitating the deep integration of AI in medical education. TRIAL REGISTRATION Not a clinical trial.
Affiliation(s)
- Shuo Duan: Beijing Tiantan Hospital, Capital Medical University, No. 119 South 4th Ring West Road, Fengtai District, Beijing, 100070, China
- Chunyu Liu: Peking University People's Hospital, Beijing, 100044, China
- Tianhua Rong: Beijing Tiantan Hospital, Capital Medical University, No. 119 South 4th Ring West Road, Fengtai District, Beijing, 100070, China
- Yixin Zhao: Peking University People's Hospital, Beijing, 100044, China
- Baoge Liu: Beijing Tiantan Hospital, Capital Medical University, No. 119 South 4th Ring West Road, Fengtai District, Beijing, 100070, China
2. Maino AP, Klikowski J, Strong B, Ghaffari W, Woźniak M, Bourcier T, Grzybowski A. Artificial Intelligence vs. Human Cognition: A Comparative Analysis of ChatGPT and Candidates Sitting the European Board of Ophthalmology Diploma Examination. Vision (Basel) 2025;9:31. PMID: 40265399; PMCID: PMC12015923; DOI: 10.3390/vision9020031.
Abstract
BACKGROUND/OBJECTIVES This paper aims to assess ChatGPT's performance in answering European Board of Ophthalmology Diploma (EBOD) examination papers and to compare these results to pass benchmarks and candidate results. METHODS This cross-sectional study used a sample of past exam papers from the 2012, 2013, and 2020-2023 EBOD examinations. This study analyzed ChatGPT's responses to 440 multiple choice questions (MCQs), each containing five true/false statements (2200 statements in total), and 48 single best answer (SBA) questions. RESULTS For MCQs, ChatGPT scored an average of 64.39%. ChatGPT's strongest metric performance for MCQs was precision (68.76%). ChatGPT performed best at answering pathology MCQs (Grubbs test p < 0.05). Optics and refraction had the lowest-scoring MCQ performance across all metrics. ChatGPT-3.5 Turbo performed worse than human candidates and ChatGPT-4o on easy questions (75% vs. 100% accuracy) but outperformed humans and ChatGPT-4o on challenging questions (50% vs. 28% accuracy). ChatGPT's SBA performance averaged 28.43%, with the highest score and strongest performance in precision (29.36%). Pathology SBA questions were consistently the lowest-scoring topic across most metrics. ChatGPT demonstrated a nonsignificant tendency to select option 1 more frequently (p = 0.19). When answering SBAs, human candidates scored higher than ChatGPT in all metric areas measured. CONCLUSIONS ChatGPT performed better on true/false questions, scoring a pass mark in most instances. Performance was poorer on SBA questions, suggesting that ChatGPT is better at information retrieval than at knowledge integration. ChatGPT could become a valuable tool in ophthalmic education, allowing exam boards to test their exam papers to ensure they are pitched at the right level, marking open-ended questions, and providing detailed feedback.
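The accuracy and precision figures reported above are standard binary-classification metrics applied to ChatGPT's true/false judgments on each statement. The snippet below is not the authors' code; it is a minimal sketch, with invented toy labels, of how such metrics are typically computed (here with scikit-learn).

```python
# Not the authors' code: a minimal sketch of accuracy and precision for
# true/false grading, using invented toy labels (1 = "true", 0 = "false").
from sklearn.metrics import accuracy_score, precision_score

answer_key   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # official key (toy data)
model_answer = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # model responses (toy data)

print(f"accuracy:  {accuracy_score(answer_key, model_answer):.2%}")
print(f"precision: {precision_score(answer_key, model_answer):.2%}")
```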
Affiliation(s)
- Anna P. Maino: Manchester Royal Eye Hospital, Manchester M13 9WL, UK
- Jakub Klikowski: Department of Systems and Computer Networks, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
- Brendan Strong: European Board of Ophthalmology Examination Headquarters, RP56PT10 Kilcullen, Ireland
- Wahid Ghaffari: Department of Medical Education, Stockport NHS Foundation Trust, Stockport SK2 7JE, UK
- Michał Woźniak: Department of Systems and Computer Networks, Wrocław University of Science and Technology, 50-370 Wrocław, Poland
- Tristan Bourcier: Department of Ophthalmology, University of Strasbourg, 67081 Strasbourg, France
3. Ali M. The Role of AI in Reshaping Medical Education: Opportunities and Challenges. Clinical Teacher 2025;22:e70040. PMID: 39956546; DOI: 10.1111/tct.70040.
Abstract
Artificial intelligence (AI) is redefining medical education, bringing new dimensions of personalized learning, enhanced visualization and simulation-based clinical training to the forefront. Additionally, AI-powered simulations offer realistic, immersive training opportunities, preparing students for complex clinical situations and fostering interprofessional collaboration skills essential for modern healthcare. However, the integration of AI into medical education presents challenges, particularly around ethical considerations, skill atrophy due to overreliance and the exacerbation of the digital divide among educational institutions. Addressing these challenges demands a balanced approach that includes blended learning models, digital literacy and faculty development to ensure AI serves as a supplement to, rather than a replacement for, core medical competencies. As medical education evolves alongside AI, institutions must prioritize strategies that preserve human-centred skills while advancing technological innovation to prepare future healthcare professionals for an AI-enhanced landscape.
Affiliation(s)
- Majid Ali: Department of Basic Sciences, College of Medicine, Sulaiman Al-Rajhi University, Al-Bukayriyah, Saudi Arabia; Department of Clinical Pharmacy and Pharmacy Practice, Faculty of Pharmacy, Universiti Malaya, Kuala Lumpur, Malaysia
4. Almarzouki AF, Alem A, Shrourou F, Kaki S, Khushi M, Mutawakkil A, Bamabad M, Fakharani N, Alshehri M, Binibrahim M. Assessing the disconnect between student interest and education in artificial intelligence in medicine in Saudi Arabia. BMC Medical Education 2025;25:150. PMID: 39881303; PMCID: PMC11780997; DOI: 10.1186/s12909-024-06446-3.
Abstract
BACKGROUND Although artificial intelligence (AI) has gained increasing attention for its potential future impact on clinical practice, medical education has struggled to stay ahead of the developing technology. The question of whether medical education is fully preparing trainees to adapt to potential changes from AI technology in clinical practice remains unanswered, and the influence of AI on medical students' career preferences remains unclear. Understanding the gap between students' interest in and knowledge of AI may help inform the medical curriculum structure. METHODS A total of 354 medical students were surveyed to investigate their knowledge of, exposure to, and interest in the role of AI in health care. Students were questioned about the anticipated impact of AI on medical specialties and their career preferences. RESULTS Most students (65%) were interested in the role of AI in medicine, but only 23% had received formal education in AI based on reliable scientific resources. Despite their interest and willingness to learn, only 20.1% of students reported that their school offered resources enabling them to explore the use of AI in medicine. They relied mainly on informal information sources, including social media, and few students understood fundamental AI concepts or could cite clinically relevant AI research. Students who cited more scientific primary sources (rather than online media) exhibited significantly higher self-reported understanding of AI concepts in the context of medicine. Interestingly, students who had received more exposure to AI courses reported higher levels of skepticism regarding AI and were less eager to learn more about it. Radiology and pathology were perceived to be the fields most strongly affected by AI. Students reported that their overall choice of specialty was not impacted by AI. CONCLUSION Formal AI education seems inadequate despite students' enthusiasm concerning the application of such technology in clinical practice. Medical curricula should evolve to promote structured, evidence-based AI literacy to enable students to understand the potential applications of AI in health care.
Affiliation(s)
- Abeer F Almarzouki: Clinical Physiology Department, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Alwaleed Alem: Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Faris Shrourou: Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Suhail Kaki: Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Mohammed Khushi: Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Motasem Bamabad: Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Nawaf Fakharani: Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Mohammed Alshehri: Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
5. Lermann Henestrosa A, Kimmerle J. Understanding and Perception of Automated Text Generation among the Public: Two Surveys with Representative Samples in Germany. Behav Sci (Basel) 2024;14:353. PMID: 38785844; PMCID: PMC11118015; DOI: 10.3390/bs14050353.
Abstract
Automated text generation (ATG) technology has evolved rapidly in the last several years, enabling the spread of content produced by artificial intelligence (AI). In addition, with the release of ChatGPT, virtually everyone can now create naturally sounding text on any topic. To optimize future use and understand how humans interact with these technologies, it is essential to capture people's attitudes and beliefs. However, research on ATG perception is lacking. Based on two representative surveys (March 2022: n1 = 1028; July 2023: n2 = 1013), we aimed to examine the German population's concepts of and attitudes toward AI authorship. The results revealed a preference for human authorship across a wide range of topics and a lack of knowledge concerning the function, data sources, and responsibilities of ATG. Using multiple regression analysis with k-fold cross-validation, we identified people's attitude toward using ATG, performance expectancy, general attitudes toward AI, and lay attitude toward ChatGPT and ATG as significant predictors of the intention to read AI-written texts in the future. Despite the release of ChatGPT, we observed stability across most variables and minor differences between the two survey points regarding concepts about ATG. We discuss the findings against the backdrop of the ever-increasing availability of automated content and the need for an intensive societal debate about its chances and limitations.
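The central analysis described above is a multiple regression with k-fold cross-validation that predicts the intention to read AI-written texts from several attitude measures. The sketch below illustrates that general procedure on simulated data; the predictor names follow the abstract, but the sample, effect sizes, and resulting fit are invented and do not reproduce the study's results.

```python
# Illustration only (simulated data, invented effect sizes): a multiple linear
# regression with k-fold cross-validation, the kind of analysis the abstract
# describes for predicting the intention to read AI-written texts.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 1013  # roughly the size of the second survey sample

# Hypothetical predictors named after the abstract's significant predictors.
X = rng.normal(size=(n, 4))  # attitude toward using ATG, performance expectancy,
                             # general attitude toward AI, lay attitude toward ChatGPT/ATG
y = X @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(scale=1.0, size=n)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
r2 = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")
print("cross-validated R^2 per fold:", np.round(r2, 3))
```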
Affiliation(s)
- Joachim Kimmerle: Knowledge Construction Lab, Leibniz-Institut für Wissensmedien, 72076 Tübingen, Germany; Department of Psychology, Eberhard Karls University, 72076 Tübingen, Germany
6. Laupichler MC, Aster A, Meyerheim M, Raupach T, Mergen M. Medical students' AI literacy and attitudes towards AI: a cross-sectional two-center study using pre-validated assessment instruments. BMC Medical Education 2024;24:401. PMID: 38600457; PMCID: PMC11007897; DOI: 10.1186/s12909-024-05400-7.
Abstract
BACKGROUND Artificial intelligence (AI) is becoming increasingly important in healthcare. It is therefore crucial that today's medical students have certain basic AI skills that enable them to use AI applications successfully. These basic skills are often referred to as "AI literacy". Previous research projects that aimed to investigate medical students' AI literacy and attitudes towards AI have not used reliable and validated assessment instruments. METHODS We used two validated self-assessment scales to measure AI literacy (31 Likert-type items) and attitudes towards AI (5 Likert-type items) at two German medical schools. The scales were distributed to the medical students through an online questionnaire. The final sample consisted of a total of 377 medical students. We conducted a confirmatory factor analysis and calculated the internal consistency of the scales to check whether the scales were sufficiently reliable to be used in our sample. In addition, we calculated t-tests to determine group differences and Pearson's and Kendall's correlation coefficients to examine associations between individual variables. RESULTS The model fit and internal consistency of the scales were satisfactory. Within the concept of AI literacy, we found that medical students at both medical schools rated their technical understanding of AI significantly lower (M_MS1 = 2.85 and M_MS2 = 2.50) than their ability to critically appraise (M_MS1 = 4.99 and M_MS2 = 4.83) or practically use AI (M_MS1 = 4.52 and M_MS2 = 4.32), which reveals a discrepancy in skills. In addition, female medical students rated their overall AI literacy significantly lower than male medical students, t(217.96) = -3.65, p < .001. Students in both samples seemed to be more accepting of AI than fearful of the technology, t(745.42) = 11.72, p < .001. Furthermore, we discovered a strong positive correlation between AI literacy and positive attitudes towards AI and a weak negative correlation between AI literacy and negative attitudes. Finally, we found that prior AI education and interest in AI are positively correlated with medical students' AI literacy. CONCLUSIONS Courses to increase the AI literacy of medical students should focus more on technical aspects. There also appears to be a correlation between AI literacy and attitudes towards AI, which should be considered when planning AI courses.
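Two of the analysis steps named in the abstract lend themselves to a compact illustration: internal consistency of a Likert scale (commonly Cronbach's alpha) and a t-test between groups (the non-integer degrees of freedom reported are consistent with a Welch-type test, which is assumed here). The sketch below runs both on simulated data; it is not the study's code, and the confirmatory factor analysis step is omitted.

```python
# Illustrative sketch (simulated data, not the study's code): internal consistency
# of a Likert scale via Cronbach's alpha, and a Welch t-test between two groups.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert ratings."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(377, 1))  # one underlying trait drives all items (toy model)
items = np.clip(np.round(4 + 1.5 * latent + rng.normal(size=(377, 31))), 1, 7)

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")

scores_a = items[:180].mean(axis=1)  # arbitrary toy split into two groups
scores_b = items[180:].mean(axis=1)
t, p = stats.ttest_ind(scores_a, scores_b, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```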
Affiliation(s)
- Matthias Carl Laupichler: Institute of Medical Education, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Alexandra Aster: Institute of Medical Education, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Marcel Meyerheim: Department of Pediatric Oncology and Hematology, Faculty of Medicine, Saarland University, Homburg, Germany
- Tobias Raupach: Institute of Medical Education, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Marvin Mergen: Department of Pediatric Oncology and Hematology, Faculty of Medicine, Saarland University, Homburg, Germany
7. Hudon A, Kiepura B, Pelletier M, Phan V. Using ChatGPT in Psychiatry to Design Script Concordance Tests in Undergraduate Medical Education: Mixed Methods Study. JMIR Medical Education 2024;10:e54067. PMID: 38596832; PMCID: PMC11007379; DOI: 10.2196/54067.
Abstract
Background Undergraduate medical studies offer a wide range of learning opportunities delivered through various teaching-learning modalities. A clinical scenario is frequently used as a modality, followed by multiple-choice and open-ended questions, among other learning and teaching methods. As such, script concordance tests (SCTs) can be used to promote a higher level of clinical reasoning. Recent technological developments have made generative artificial intelligence (AI)-based systems such as ChatGPT (OpenAI) available to assist clinician-educators in creating instructional materials. Objective The main objective of this project was to explore how SCTs generated by ChatGPT compare with SCTs produced by clinical experts on 3 major elements: the scenario (stem), clinical questions, and expert opinion. Methods This mixed methods study compared 3 ChatGPT-generated SCTs with 3 expert-created SCTs using a predefined framework. Clinician-educators as well as resident doctors in psychiatry involved in undergraduate medical education in Quebec, Canada, evaluated the 6 SCTs via a web-based survey on 3 criteria: the scenario, clinical questions, and expert opinion. They were also asked to describe the strengths and weaknesses of the SCTs. Results A total of 102 respondents assessed the SCTs. There were no significant distinctions between the 2 types of SCTs concerning the scenario (P=.84), clinical questions (P=.99), and expert opinion (P=.07), as interpreted by the respondents. Indeed, respondents struggled to differentiate between ChatGPT- and expert-generated SCTs. ChatGPT showcased promise in expediting SCT design, aligning well with Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition criteria, albeit with a tendency toward caricatured scenarios and simplistic content. Conclusions This study is the first to concentrate on the design of SCTs supported by AI at a time when medicine is changing swiftly and AI-generated technologies are expanding even faster. It suggests that ChatGPT can be a valuable tool for creating educational materials, but further validation is essential to ensure educational efficacy and accuracy.
Affiliation(s)
- Alexandre Hudon: Department of Psychiatry and Addictology, University of Montreal, Montreal, QC, Canada
- Barnabé Kiepura: Department of Psychiatry and Addictology, University of Montreal, Montreal, QC, Canada
- Véronique Phan: Department of Pediatrics, Université de Montréal, Montreal, QC, Canada
8. Heredia-Negrón F, Tosado-Rodríguez EL, Meléndez-Berrios J, Nieves B, Amaya-Ardila CP, Roche-Lima A. Assessing the Impact of AI Education on Hispanic Healthcare Professionals' Perceptions and Knowledge. Education Sciences 2024;14:339. PMID: 38818527; PMCID: PMC11138866; DOI: 10.3390/educsci14040339.
Abstract
This study investigates the awareness and perceptions of artificial intelligence (AI) among Hispanic healthcare-related professionals, focusing on integrating AI in healthcare. The study participants were recruited from an asynchronous course offered twice within a year at the University of Puerto Rico Medical Science Campus, titled "Artificial Intelligence and Machine Learning Applied to Health Disparities Research", which aimed to bridge the gaps in AI knowledge among participants. The participants were divided into Experimental (n = 32; data-illiterate) and Control (n = 18; data-literate) groups, and pre-test and post-test surveys were administered to assess knowledge and attitudes toward AI. Descriptive statistics, power analysis, and the Mann-Whitney U test were employed to determine the influence of the course on participants' comprehension and perspectives regarding AI. Results indicate significant improvements in knowledge and attitudes among participants, emphasizing the effectiveness of the course in enhancing understanding and fostering positive attitudes toward AI. Findings also reveal limited practical exposure to AI applications, highlighting the need for improved integration into education. This research highlights the significance of educating healthcare professionals about AI to enable its advantageous incorporation into healthcare procedures. The study provides valuable perspectives from a broad spectrum of healthcare workers, serving as a basis for future investigations and educational endeavors aimed at AI implementation in healthcare.
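The abstract names the Mann-Whitney U test as the group comparison between the experimental (n = 32) and control (n = 18) groups. The sketch below shows how such a comparison is typically run with SciPy; the scores are simulated, and only the group sizes follow the abstract.

```python
# Minimal sketch (simulated scores, not the authors' code) of a Mann-Whitney U
# test comparing post-test knowledge scores between two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
experimental = rng.normal(loc=75, scale=10, size=32)  # toy post-test scores
control      = rng.normal(loc=70, scale=10, size=18)

u, p = stats.mannwhitneyu(experimental, control, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")
```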
Affiliation(s)
- Frances Heredia-Negrón: CCRHD RCMI-Program, Medical Sciences Campus, University of Puerto Rico, San Juan, PR 00934, USA
- Joshua Meléndez-Berrios: CCRHD RCMI-Program, Medical Sciences Campus, University of Puerto Rico, San Juan, PR 00934, USA
- Brenda Nieves: CCRHD RCMI-Program, Medical Sciences Campus, University of Puerto Rico, San Juan, PR 00934, USA
- Claudia P. Amaya-Ardila: Department of Biostatistics and Epidemiology, Medical Science Campus, University of Puerto Rico, San Juan, PR 00934, USA
- Abiel Roche-Lima: CCRHD RCMI-Program, Medical Sciences Campus, University of Puerto Rico, San Juan, PR 00934, USA
9. Waikel RL, Othman AA, Patel T, Ledgister Hanchard S, Hu P, Tekendo-Ngongang C, Duong D, Solomon BD. Recognition of Genetic Conditions After Learning With Images Created Using Generative Artificial Intelligence. JAMA Netw Open 2024;7:e242609. PMID: 38488790; PMCID: PMC10943405; DOI: 10.1001/jamanetworkopen.2024.2609.
Abstract
Importance The lack of standardized genetics training in pediatrics residencies, along with a shortage of medical geneticists, necessitates innovative educational approaches. Objective To compare pediatric resident recognition of Kabuki syndrome (KS) and Noonan syndrome (NS) after 1 of 4 educational interventions, including generative artificial intelligence (AI) methods. Design, Setting, and Participants This comparative effectiveness study used generative AI to create images of children with KS and NS. From October 1, 2022, to February 28, 2023, US pediatric residents were provided images through a web-based survey to assess whether these images helped them recognize genetic conditions. Interventions Participants categorized 20 images after exposure to 1 of 4 educational interventions (text-only descriptions, real images, and 2 types of images created by generative AI). Main Outcomes and Measures Associations of educational interventions with accuracy and self-reported confidence. Results Of 2515 contacted pediatric residents, 106 and 102 completed the KS and NS surveys, respectively. For KS, the sensitivity of text description was 48.5% (128 of 264), which was not significantly different from random guessing (odds ratio [OR], 0.94; 95% CI, 0.69-1.29; P = .71). Sensitivity was thus compared for real images vs random guessing (60.3% [188 of 312]; OR, 1.52; 95% CI, 1.15-2.00; P = .003) and 2 types of generative AI images vs random guessing (57.0% [212 of 372]; OR, 1.32; 95% CI, 1.04-1.69; P = .02 and 59.6% [193 of 324]; OR, 1.47; 95% CI, 1.12-1.94; P = .006) (denominators differ according to survey responses). The sensitivity of the NS text-only description was 65.3% (196 of 300). Compared with text-only, the sensitivity of the real images was 74.3% (205 of 276; OR, 1.53; 95% CI, 1.08-2.18; P = .02), and the sensitivity of the 2 types of images created by generative AI was 68.0% (204 of 300; OR, 1.13; 95% CI, 0.77-1.66; P = .54) and 71.0% (247 of 328; OR, 1.30; 95% CI, 0.92-1.83; P = .14). For specificity, no intervention was statistically different from text only. After the interventions, the number of participants who reported being unsure about important diagnostic facial features decreased from 56 (52.8%) to 5 (7.6%) for KS (P < .001) and 25 (24.5%) to 4 (4.7%) for NS (P < .001). There was a significant association between confidence level and sensitivity for real and generated images. Conclusions and Relevance In this study, real and generated images helped participants recognize KS and NS; real images appeared most helpful. Generated images were noninferior to real images and could serve an adjunctive role, particularly for rare conditions.
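The sensitivities above are simple proportions of the reported response counts, and the KS odds ratios are described as comparisons against random guessing. The sketch below is only a rough illustration, not the authors' statistical model: it recomputes the real-image KS sensitivity (188 of 312) and approximates the corresponding odds ratio with a Wald-style interval; the point estimate lands near the reported 1.52, but the interval is a crude approximation and will not exactly match the published one.

```python
# Rough illustration only (not the published model): sensitivity from reported
# counts, and an approximate odds ratio vs. the 50% random-guessing baseline
# with a Wald-style 95% confidence interval.
import math

def sensitivity(correct: int, total: int) -> float:
    return correct / total

def odds_ratio_vs_chance(correct: int, total: int) -> tuple[float, float, float]:
    """Observed odds vs. 1:1 chance odds, with a crude Wald 95% CI."""
    a, b = correct, total - correct   # observed correct / incorrect
    c = d = total / 2                 # expected split under random guessing
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    half = 1.96 * se
    return or_, math.exp(math.log(or_) - half), math.exp(math.log(or_) + half)

# Real-image arm for Kabuki syndrome; counts taken from the abstract (188 of 312).
print(f"sensitivity: {sensitivity(188, 312):.1%}")
print("odds ratio vs. chance (approx.): %.2f (95%% CI %.2f-%.2f)" % odds_ratio_vs_chance(188, 312))
```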
Affiliation(s)
- Rebekah L. Waikel: Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Amna A. Othman: Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Tanviben Patel: Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Ping Hu: Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Dat Duong: Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland
- Benjamin D. Solomon: Medical Genetics Branch, National Human Genome Research Institute, Bethesda, Maryland