1.
Kulasegaram K, Archibald D, Bartman I, Chahine S, Kirpalani A, Wilson C, Ross B, Cameron E, Hogenbirk J, Barber C, Burgess R, Katsoulas E, Touchie C, Grierson L. Can all roads lead to competency? School levels effects in licensing examinations scores. Advances in Health Sciences Education: Theory and Practice 2025; 30:37-52. [PMID: 39636529] [DOI: 10.1007/s10459-024-10398-0]
Abstract
At the foundation of research on professional training is an assumed causal chain between the policies and practices of education and the eventual behaviours of those who graduate from these programs. In medicine, given the social accountability to ensure that teaching and learning produce a health human resource that is willing and able to provide the healthcare that patients and communities need, it is critically important to generate evidence regarding this causal relationship. One question that medical education scholars regularly ask is the degree to which the unique features of training programs and learning environments affect trainees' achievement of the intended learning outcomes. To date, this evidence has been difficult to generate because learner data are only rarely brought together systematically across institutions or periods of training. We describe new research that leverages an inter-institutional, data-driven approach to investigate the influence of school-level factors on the licensing outcomes of medical students. Specifically, we bring together sociodemographic, admissions, and in-training assessment variables for medical graduates of each of the six medical schools in Ontario, Canada, in multilevel stepwise regression models that estimate the degree of association between these variables and graduate performance on the Medical Council of Canada Qualifying Examinations (Part 1, n = 1097 observations; Part 2, n = 616 observations), established predictors of downstream physician performance. As part of this analysis, we include an anonymized school-level independent variable (School 1, School 2) in each of these models. Our results demonstrate that the variable most strongly associated with performance on both parts of the licensing examinations is prior academic achievement, notably clerkship performance. Ratings of biomedical knowledge were also significantly associated with the first examination, while clerkship OSCE scores and enrollment in a family medicine residency were significantly associated with Part 2. Small but significant school effects were observed in both models, accounting for 4% and 2% of the variance in the first and second examinations, respectively. These findings suggest that school enrollment plays a minor role relative to individual student performance in influencing examination outcomes.
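The multilevel approach described in this abstract can be sketched in a few lines. This is an illustrative example on synthetic data, not the authors' code: the variable names (gpa, clerkship, school) and all effect sizes are invented; the only idea carried over is a school-level grouping term alongside individual-level predictors.

```python
# Hypothetical sketch of a multilevel model of licensing-exam scores: a random
# intercept per school alongside individual-level predictors. Synthetic data;
# variable names and effect sizes are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "school": rng.integers(1, 7, n),     # six anonymized schools
    "gpa": rng.normal(3.6, 0.2, n),      # prior academic achievement
    "clerkship": rng.normal(75, 8, n),   # in-training clerkship score
})
# Simulate scores driven mostly by individual performance plus a small school effect
school_effect = dict(enumerate(rng.normal(0, 2, 7)))
df["exam"] = (200 + 40 * (df["gpa"] - 3.6) + 1.5 * (df["clerkship"] - 75)
              + df["school"].map(school_effect) + rng.normal(0, 10, n))

# A random intercept for school separates between-school from within-school variance
model = smf.mixedlm("exam ~ gpa + clerkship", df, groups=df["school"]).fit()
print(model.summary())
```

The variance of the fitted random intercept, relative to the residual variance, is what a "school effect of 4% of the variance" summarizes.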
Affiliation(s)
- Kulamakan Kulasegaram
- Wilson Centre, University of Toronto, Toronto, Canada.
- Department of Family & Community Medicine Temerty Faculty of Medicine, University of Toronto, Toronto, Canada.
- Douglas Archibald
- Department of Family Medicine, University of Ottawa, Ottawa, Canada
- Bruyère Research Institute, Ottawa, Canada
- Amrit Kirpalani
- Division of Paediatric Surgery, London Health Sciences Centre, London, Canada
- Department of Pediatrics, Schulich School of Medicine & Dentistry, Western University, London, ON, Canada
- Department of Pediatrics, London Health Sciences Centre, London, ON, Canada
- Claire Wilson
- Division of Paediatric Surgery, London Health Sciences Centre, London, Canada
- Brian Ross
- Northern Ontario School of Medicine University, Thunder Bay, Canada
- Erin Cameron
- Northern Ontario School of Medicine University, Thunder Bay, Canada
- Dr. Gilles Arcand Centre for Health Equity, NOSM University, Thunder Bay, Canada
- John Hogenbirk
- Centre for Rural and Northern Health Research, Laurentian University, Sudbury, Canada
- Cassandra Barber
- Department of Family Medicine, McMaster University, Hamilton, Canada
- School of Health Professions Education, Faculty of Health, Medicine & Life Sciences, Maastricht University, Maastricht, Netherlands
- Raquel Burgess
- Department of Family Medicine, McMaster University, Hamilton, Canada
- Department of Social and Behavioral Sciences, Yale School of Public Health, Yale University, New Haven, USA
- Claire Touchie
- Departments of Medicine and of Innovation in Medical Education, University of Ottawa, Ottawa, Canada
- Ottawa Hospital, Ottawa, Canada
- Lawrence Grierson
- Department of Family Medicine, McMaster University, Hamilton, Canada
- McMaster Education Research, Innovation, and Theory Program, McMaster University, 200 Elizabeth St, Toronto, ON, M5G 2C4, Canada
2.
Abstract
Surgical education has seen immense change recently. Increased demand for iterative evaluation of trainees, from medical school through to independent practice, has generated an overwhelming amount of data related to an individual's competency. Artificial intelligence has been proposed as a way to automate and standardize stakeholders' assessment of a surgical trainee's technical and nontechnical abilities. In both the simulation and clinical environments, evidence supports the use of machine learning algorithms to evaluate trainee skill and to provide real-time, automated feedback, shortening the learning curve for many key procedural skills while helping to ensure patient safety.
Affiliation(s)
- Mitchell G Goldenberg
- Catherine & Joseph Aresty Department of Urology, USC Institute of Urology, University of Southern California, 1441 Eastlake Avenue, Suite 7416, Los Angeles, CA 90033, USA.
3.
Jacobparayil A, Ali H, Pomeroy B, Baronia R, Chavez M, Ibrahim Y. Predictors of Performance on the United States Medical Licensing Examination Step 2 Clinical Knowledge: A Systematic Literature Review. Cureus 2022; 14:e22280. [PMID: 35350504] [PMCID: PMC8933259] [DOI: 10.7759/cureus.22280]
4.
Ellis R, Brennan PA, Scrimgeour DSG, Lee AJ, Cleland J. Does performance at the intercollegiate Membership of the Royal Colleges of Surgeons (MRCS) examination vary according to UK medical school and course type? A retrospective cohort study. BMJ Open 2022; 12:e054616. [PMID: 34987044] [PMCID: PMC8734024] [DOI: 10.1136/bmjopen-2021-054616]
Abstract
OBJECTIVES The knowledge, skills and behaviours required of new UK medical graduates are the same, but how these are achieved differs, given that medical schools vary in their missions, curricula and pedagogy. Medical school differences appear to influence performance on postgraduate assessments. To date, the relationship between medical school, course type and performance at the Membership of the Royal Colleges of Surgeons examination (MRCS) has not been investigated. Understanding this relationship is vital to achieving alignment of training, learning and assessment values across undergraduate and postgraduate education. DESIGN AND PARTICIPANTS A retrospective longitudinal cohort study of UK medical graduates who attempted MRCS Part A (n=9730) and MRCS Part B (n=4645) between 2007 and 2017, using individual-level linked sociodemographic and prior academic attainment data from the UK Medical Education Database. METHODS We studied MRCS performance across all UK medical schools and examined relationships between potential predictors and MRCS performance using χ2 analysis. Multivariate logistic regression models identified independent predictors of MRCS success at first attempt. RESULTS MRCS pass rates differed significantly between individual medical schools (p<0.001), but not after adjusting for prior A-Level performance. Candidates from courses other than those described as problem-based learning (PBL) were 53% more likely to pass MRCS Part A (OR 1.53, 95% CI 1.25 to 1.87) and 54% more likely to pass Part B (OR 1.54, 95% CI 1.05 to 2.25) at first attempt after adjusting for prior academic performance. Attending a Standard-Entry 5-year medicine programme, having no prior degree and attending a Russell Group university were independent predictors of MRCS success in regression models (p<0.05). CONCLUSIONS There are significant differences in MRCS performance between medical schools. However, this variation is largely due to individual factors such as academic ability rather than medical school factors. This study also highlights group-level attainment differences that warrant further investigation to ensure equity within medical training.
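The regression approach this abstract reports, logistic models of first-attempt success summarized as odds ratios with 95% CIs, can be illustrated as follows. This is a hedged sketch on synthetic data, not the study's code: the predictor names (pbl_course, prior_degree, a_level) and the simulated effects are invented.

```python
# Hypothetical sketch: logistic regression for pass-at-first-attempt, reported
# as odds ratios with 95% CIs. Synthetic data; names and effects are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "pbl_course": rng.integers(0, 2, n),    # 1 = problem-based learning course
    "prior_degree": rng.integers(0, 2, n),  # 1 = entered with a prior degree
    "a_level": rng.normal(0, 1, n),         # standardized prior attainment
})
# Simulate outcomes with a negative PBL effect and a strong attainment effect
logit_p = 0.5 - 0.4 * df["pbl_course"] - 0.2 * df["prior_degree"] + 0.8 * df["a_level"]
df["passed"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("passed ~ pbl_course + prior_degree + a_level", df).fit(disp=0)
odds_ratios = np.exp(fit.params)   # exponentiated coefficients are odds ratios
ci = np.exp(fit.conf_int())        # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```

An OR of 1.53 for non-PBL candidates, as in the abstract, corresponds to an OR below 1 when the indicator is coded for PBL, as here.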
Affiliation(s)
- Ricky Ellis
- University of Aberdeen Institute of Applied Health Sciences, Aberdeen, UK
- Urology Department, Nottingham University Hospitals NHS Trust, Nottingham, UK
- Peter A Brennan
- Department of Maxillo-Facial Surgery, Queen Alexandra Hospital, Portsmouth, UK
- Duncan S G Scrimgeour
- University of Aberdeen Institute of Applied Health Sciences, Aberdeen, UK
- Department of Colorectal Surgery, Aberdeen Royal Infirmary, Aberdeen, UK
- Amanda J Lee
- Medical Statistics Team, University of Aberdeen Institute of Applied Health Sciences, Aberdeen, UK
- Jennifer Cleland
- Medical Education Research and Scholarship Unit (MERSU), Lee Kong Chian School of Medicine, Singapore
5.
Burk-Rafel J, Reinstein I, Feng J, Kim MB, Miller LH, Cocks PM, Marin M, Aphinyanaphongs Y. Development and Validation of a Machine Learning-Based Decision Support Tool for Residency Applicant Screening and Review. Academic Medicine 2021; 96:S54-S61. [PMID: 34348383] [DOI: 10.1097/acm.0000000000004317]
Abstract
PURPOSE Residency programs face overwhelming numbers of applications, limiting holistic review. Artificial intelligence techniques have been proposed to address this challenge but have yet to be created. Here, a multidisciplinary team sought to develop and validate a machine learning (ML)-based decision support tool (DST) for residency applicant screening and review. METHOD Categorical applicant data from the 2018, 2019, and 2020 residency application cycles (n = 8,243 applicants) at one large internal medicine residency program were downloaded from the Electronic Residency Application Service and linked to the outcome measure: interview invitation by human reviewers (n = 1,235 invites). An ML model using gradient boosting was designed using training data (80% of applicants) with over 60 applicant features (e.g., demographics, experiences, academic metrics). Model performance was validated on held-out data (20% of applicants). A sensitivity analysis was conducted without United States Medical Licensing Examination (USMLE) scores. An interactive DST incorporating the ML model was designed and deployed, providing applicant- and cohort-level visualizations. RESULTS The areas under the receiver operating characteristic and precision-recall curves were 0.95 and 0.76, respectively; these changed to 0.94 and 0.72 with removal of USMLE scores. Applicants' medical school information was an important driver of predictions, which had face validity based on the local selection process, but numerous predictors contributed. Program directors used the DST in the 2021 application cycle to select for interview 20 applicants who had initially been screened out during human review. CONCLUSIONS The authors developed and validated an ML algorithm that predicts residency interview offers from numerous application elements with high performance, even when USMLE scores were removed. Model deployment in a DST highlighted its potential for screening candidates and helped quantify and mitigate biases in the existing selection process. Further work will incorporate unstructured textual data through natural language processing methods.
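The evaluation setup described here, gradient boosting on roughly 60 features with an 80/20 split scored by AUROC and AUPRC, can be sketched as below. The data are synthetic and the feature set is invented; only the split, model family, and metrics mirror the study's description.

```python
# Hypothetical sketch of the evaluation setup: gradient boosting on ~60
# applicant features with an 80/20 split, scored by AUROC and AUPRC.
# Data are synthetic; only the metrics and split mirror the study design.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# ~15% positive rate, loosely mirroring 1,235 invites among 8,243 applicants
X, y = make_classification(n_samples=4000, n_features=60, n_informative=20,
                           weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y,
                                          random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]         # predicted invite probability
auroc = roc_auc_score(y_te, scores)
auprc = average_precision_score(y_te, scores)  # precision-recall AUC
print(f"AUROC={auroc:.2f}  AUPRC={auprc:.2f}")
```

With imbalanced outcomes like interview invitations, AUPRC is the more demanding metric, which is why the study reports both.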
Affiliation(s)
- Jesse Burk-Rafel
- J. Burk-Rafel is assistant professor of medicine and assistant director of UME-GME innovation, Institute for Innovations in Medical Education, NYU Grossman School of Medicine, New York, New York. At the time this work was completed, he was an internal medicine resident at NYU Langone Health, New York, New York; ORCID: https://orcid.org/0000-0003-3785-2154
- Ilan Reinstein
- I. Reinstein is a research scientist, Institute for Innovations in Medical Education, NYU Grossman School of Medicine, New York, New York
- James Feng
- J. Feng is an orthopedic surgery resident, Beaumont Health, Royal Oak, Michigan. At the time this work was completed, he was a master's student in biomedical informatics, NYU Grossman School of Medicine Vilcek Institute of Graduate Biomedical Sciences, New York, New York
- Moosun Brad Kim
- M.B. Kim is a biostatistician at Aprogen, Seongnam, Republic of Korea. At the time this work was completed, he was a master's student in biomedical informatics, NYU Grossman School of Medicine Vilcek Institute of Graduate Biomedical Sciences, New York, New York
- Louis H Miller
- L.H. Miller is assistant professor of cardiology and assistant dean for career advisement, Zucker School of Medicine at Hofstra/Northwell, New York, New York
- Patrick M Cocks
- P.M. Cocks is the Abraham Sunshine Assistant Professor of Medicine, and program director of the internal medicine residency program, NYU Langone Health, New York, New York
- Marina Marin
- M. Marin is director of the division of educational analytics, Institute for Innovations in Medical Education, NYU Grossman School of Medicine, New York, New York
- Yindalon Aphinyanaphongs
- Y. Aphinyanaphongs is director of operational data science and machine learning, NYU Langone Health, New York, New York
6.
Ganjoo R, Schwartz L, Boss M, McHarg M, Dobrydneva Y. Predictors of success on the MCAT among post-baccalaureate pre-medicine students. Heliyon 2020; 6:e03778. [PMID: 32337381] [PMCID: PMC7177007] [DOI: 10.1016/j.heliyon.2020.e03778]
Abstract
Post-baccalaureate pre-medicine programs (PBPMP) provide prerequisite coursework for non-life-science majors who aspire to become physicians. Students entering these programs generally have no previous college-level exposure to the natural sciences. This pilot study was conducted to identify characteristics of scientifically naive, career-changing pre-medical students that could be used by PBPMP admissions committees. Statistical analyses were performed between Medical College Admission Test (MCAT) scores and student gender, Scholastic Aptitude Test (SAT) scores, undergraduate field of study, and undergraduate Grade Point Average (GPA). While relationships between certain SAT subscores and the MCAT were found, the data suggest that other, non-quantitative metrics should also be considered as predictors of performance among PBPMP students.
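The subscore-to-MCAT relationships this pilot study examines are simple bivariate correlations. As a hedged illustration, not the study's analysis or data, a Pearson correlation between one SAT subscore and MCAT total might be computed like this; the scales are realistic but the relationship is invented:

```python
# Hypothetical sketch of the correlational analysis: Pearson correlation
# between an SAT subscore and MCAT total. Synthetic, pilot-sized data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 80                                    # pilot-study-sized sample
sat_math = rng.normal(650, 60, n)         # SAT math subscore
mcat = 470 + 0.05 * sat_math + rng.normal(0, 3, n)  # MCAT total (472-528 scale)

r, p = pearsonr(sat_math, mcat)           # correlation and its p-value
print(f"SAT math vs MCAT: r={r:.2f}, p={p:.3g}")
```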
Affiliation(s)
- Rohini Ganjoo
- Department of Biomedical Laboratory Sciences, George Washington University School of Medicine and Health Sciences, Enterprise Hall, 44983 Knoll Square, Ashburn, Virginia, USA
- Department of Physician Assistant Sciences, George Washington University School of Medicine and Health Sciences, Enterprise Hall, 44983 Knoll Square, Ashburn, Virginia, USA
- Lisa Schwartz
- Department of Biomedical Laboratory Sciences, George Washington University School of Medicine and Health Sciences, Enterprise Hall, 44983 Knoll Square, Ashburn, Virginia, USA
- Mackenzie Boss
- Department of Physician Assistant Sciences, George Washington University School of Medicine and Health Sciences, Enterprise Hall, 44983 Knoll Square, Ashburn, Virginia, USA
- Matthew McHarg
- Department of Physician Assistant Sciences, George Washington University School of Medicine and Health Sciences, Enterprise Hall, 44983 Knoll Square, Ashburn, Virginia, USA
- Yuliya Dobrydneva
- Department of Biomedical Laboratory Sciences, George Washington University School of Medicine and Health Sciences, Enterprise Hall, 44983 Knoll Square, Ashburn, Virginia, USA