1
Rössler L, Herrmann M, Wiegand A, Kanzow P. Usage of Multiple-Choice Items in Summative Examinations: Questionnaire Survey Among German Undergraduate Dental Training Programmes. JMIR Medical Education 2024. [PMID: 38712378] [DOI: 10.2196/58126]
Abstract
BACKGROUND Multiple-choice examinations are frequently employed at German dental schools. However, details regarding the item types used and the scoring methods applied are lacking. OBJECTIVE We aimed to gain insight into the current usage of multiple-choice items (ie, questions) in summative examinations in German undergraduate dental training programmes. METHODS A paper-based 10-item questionnaire regarding the assessment methods, multiple-choice item types, and scoring methods employed was designed. The pilot-tested questionnaire was mailed to the Deans of Studies and the Heads of the Department of Operative/Restorative Dentistry at all 30 dental schools in Germany in February 2023. Statistical analysis was performed using Fisher's exact test (P<.05). RESULTS The response rate amounted to 90.0% (27/30 dental schools). All responding dental schools employed multiple-choice examinations for summative assessments. Examinations were delivered electronically by 70.4% (19/27) of the dental schools. Almost all dental schools used single-choice Type A items (88.9%), which accounted for the largest number of items at about half of the dental schools. Other item types (eg, conventional multiple-select items, Multiple-True-False, Pick-N) were used by fewer dental schools (≤66.7%, up to 18 out of 27 dental schools). For the multiple-select item types, the applied scoring methods varied considerably (ie, awarding [intermediate] partial credit, requirements for partial credit). Dental schools with the option of electronic examinations used multiple-select items more often (73.7%, 14/19 vs 50.0%, 4/8), but this difference was not statistically significant (P=.375). Items were used either individually or as key feature problems (55.6%, 15/27), ie, a clinical case scenario followed by a number of items focusing on critical treatment steps. Not a single school employed alternative testing methods (eg, answer-until-correct). A formal item review process was established at about half of the dental schools (55.6%, 15/27). CONCLUSIONS Summative assessment methods among German dental schools vary widely. In particular, a large variability regarding the use and scoring of multiple-select multiple-choice items was found.
Affiliation(s)
- Lena Rössler, Department of Preventive Dentistry, Periodontology and Cariology, University Medical Center Göttingen, Robert-Koch-Str 40, Göttingen, Germany
- Manfred Herrmann, Study Deanery, University Medical Center Göttingen, Göttingen, Germany
- Annette Wiegand, Department of Preventive Dentistry, Periodontology and Cariology, University Medical Center Göttingen, Robert-Koch-Str 40, Göttingen, Germany
- Philipp Kanzow, Department of Preventive Dentistry, Periodontology and Cariology, University Medical Center Göttingen, Robert-Koch-Str 40, Göttingen, Germany
2
Alamoush RA, Sartawi S, Salim NA, Sawair F, Haider J, Jamani K. Exam evaluation in prosthodontics across preclinical and clinical years from students' perspective: A cross-sectional study. European Journal of Dental Education 2024;28:663-672. [PMID: 38287150] [DOI: 10.1111/eje.12993]
Abstract
INTRODUCTION The purpose of this study was to explore students' perceptions and performance in a prosthodontics theory exam. METHODS A cross-sectional descriptive study was conducted on 560 (80.82%) students of different levels (third, fourth and fifth years) to explore their opinions and performance with regard to a number of issues on a prosthodontics theory exam (exam evaluation, exam preparation, exam material, exam timing). Demographic data were also collected. Descriptive statistics were generated, and the Chi-square test, independent-sample t-test, ANOVA and Pearson's correlation coefficient were used to examine the associations between variables. The significance level was set at p < .05. RESULTS Students' responses regarding exam evaluation were influenced by their gender, study level, high-school Grade Point Average (GPA) and undergraduate cumulative GPA. Perceived exam difficulty was significantly affected by gender (p = .03) and study level (p < .001), and negatively correlated with both high-school GPA (p < .001) and university GPA (p = .03). The vast majority (88.2%) depended on lecture hand-outs and lecture notes for study. Exam material and preparation were not significantly affected by any of the demographic variables, with most respondents (76.8%) thinking that blending lectures with prosthodontics laboratories/clinics would improve their understanding of the exam material. The suggested best time to conduct the exam was early afternoon (31.6%). Student performance was significantly affected by study level (p < .001), cumulative GPA (p < .001) and the amount of time students spent on exam preparation (p < .001), with a significant positive correlation between high-school GPA and the exam mark (r = .29, p < .001). Students who reported using textbooks to prepare for the exam achieved significantly higher marks (66.1 ± 8.7) than students who did not (62.8 ± 9.7) (p = .03). 
CONCLUSIONS Course level, GPA and gender were identified as the most influential factors in different aspects of exam evaluation and students' performance. Regular study and use of textbooks were demonstrated to improve academic performance. Additional orientation and guidance relating to the exam (especially for third-year students) would be welcomed, as would alternative teaching methods such as small-group discussions or study groups.
Affiliation(s)
- Rasha A Alamoush, Department of Prosthodontics, School of Dentistry, The University of Jordan Hospital, The University of Jordan, Amman, Jordan
- Samiha Sartawi, Department of Prosthodontics, School of Dentistry, The University of Jordan Hospital, The University of Jordan, Amman, Jordan
- Nesreen A Salim, Department of Prosthodontics, School of Dentistry, The University of Jordan Hospital, The University of Jordan, Amman, Jordan
- Faleh Sawair, Department of Oral and Maxillofacial Surgery, Oral Medicine and Periodontology, School of Dentistry, The University of Jordan Hospital, The University of Jordan, Amman, Jordan
- Julfikar Haider, Department of Engineering, Manchester Metropolitan University, Manchester, UK
- Kifah Jamani, Department of Prosthodontics, School of Dentistry, The University of Jordan Hospital, The University of Jordan, Amman, Jordan
3
Fink MC, Heitzmann N, Reitmeier V, Siebeck M, Fischer F, Fischer MR. Diagnosing virtual patients: the interplay between knowledge and diagnostic activities. Advances in Health Sciences Education 2023;28:1245-1264. [PMID: 37052740] [PMCID: PMC10099021] [DOI: 10.1007/s10459-023-10211-4]
Abstract
Clinical reasoning theories agree that knowledge and the diagnostic process are associated with diagnostic success. However, the exact contributions of these components of clinical reasoning to diagnostic success remain unclear. This is particularly the case when operationalizing the diagnostic process with diagnostic activities (i.e., teachable practices that generate knowledge). Therefore, we conducted a study investigating to what extent knowledge and diagnostic activities uniquely explain variance in diagnostic success with virtual patients among medical students. The sample consisted of N = 106 medical students in their third to fifth year of university studies in Germany (6-year curriculum). Participants completed professional knowledge tests before diagnosing virtual patients. Diagnostic success with the virtual patients was assessed with diagnostic accuracy as well as a comprehensive diagnostic score, answering the call for more extensive measurement of clinical reasoning outcomes. Three diagnostic activities were tracked: hypothesis generation, evidence generation, and evidence evaluation. Professional knowledge predicted performance in terms of the comprehensive diagnostic score and displayed a small association with diagnostic accuracy. Diagnostic activities predicted both the comprehensive diagnostic score and diagnostic accuracy. Hierarchical regressions showed that the diagnostic activities made a unique contribution to diagnostic success, even when knowledge was taken into account. Our results support the argument that the diagnostic process is more than an embodiment of knowledge and explains variance in diagnostic success over and above knowledge. We discuss possible mechanisms explaining this finding.
Affiliation(s)
- Maximilian C Fink, Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany; Department for Education, University of the Bundeswehr Munich, Institute of Education, Learning and Teaching with Media, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
- Nicole Heitzmann, Department of Psychology, LMU Munich, Munich, Germany; Munich Center of the Learning Sciences (MCLS), LMU Munich, Munich, Germany
- Victoria Reitmeier, Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany
- Matthias Siebeck, Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany; Munich Center of the Learning Sciences (MCLS), LMU Munich, Munich, Germany
- Frank Fischer, Department of Psychology, LMU Munich, Munich, Germany; Munich Center of the Learning Sciences (MCLS), LMU Munich, Munich, Germany
- Martin R Fischer, Institute of Medical Education, University Hospital, LMU Munich, Munich, Germany; Munich Center of the Learning Sciences (MCLS), LMU Munich, Munich, Germany
4
Stoehr F, Kämpgen B, Müller L, Zufiría LO, Junquero V, Merino C, Mildenberger P, Kloeckner R. Natural language processing for automatic evaluation of free-text answers - a feasibility study based on the European Diploma in Radiology examination. Insights Imaging 2023;14:150. [PMID: 37726485] [PMCID: PMC10509084] [DOI: 10.1186/s13244-023-01507-5]
Abstract
BACKGROUND Written medical examinations consist of multiple-choice questions and/or free-text answers. The latter require manual evaluation and rating, which is time-consuming and potentially error-prone. We tested whether natural language processing (NLP) can be used to automatically analyze free-text answers to support the review process. METHODS The European Board of Radiology of the European Society of Radiology provided representative datasets comprising sample questions, answer keys, participant answers, and reviewer markings from European Diploma in Radiology (EDiR) examinations. The three free-text questions with the highest number of corresponding answers were selected: questions 1 and 2 were "unstructured" and required a typical free-text answer, whereas question 3 was "structured" and offered a selection of predefined wordings/phrases for participants to use in their free-text answer. The NLP engine was designed using word lists, rule-based synonyms, and decision tree learning based on the answer keys, and its performance was tested against the gold standard of reviewer markings. RESULTS After implementing the NLP approach in Python, F1 scores were calculated as a measure of NLP performance: 0.26 (unstructured question 1, n = 96), 0.33 (unstructured question 2, n = 327), and 0.5 (structured question, n = 111). The respective precision/recall values were 0.26/0.27, 0.4/0.32, and 0.62/0.55. CONCLUSION This study showed the successful design of an NLP-based approach for the automatic evaluation of free-text answers in the EDiR examination. As a future field of application, NLP could serve as a decision-support system for reviewers and inform the design of examinations adjusted to the requirements of an automated, NLP-based review process. CLINICAL RELEVANCE STATEMENT Natural language processing can be successfully used to automatically evaluate free-text answers, performing better with more structured question-answer formats. Furthermore, this study provides a baseline for further work applying, for example, more elaborate NLP approaches or large language models. KEY POINTS
• Free-text answers require manual evaluation, which is time-consuming and potentially error-prone.
• We developed a simple NLP-based approach - requiring only minimal effort/modeling - to automatically analyze and mark free-text answers.
• Our NLP engine has the potential to support the manual evaluation process.
• NLP performance is better on a more structured question-answer format.
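The F1 scores above are the harmonic mean of precision and recall. As a quick sanity check of that relationship (a minimal sketch, not the authors' evaluation pipeline; the function name is illustrative):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Precision/recall of 0.26/0.27 (unstructured question 1) gives F1 of about 0.26.
print(round(f1_score(0.26, 0.27), 2))
```

The reported F1 for the other two questions does not follow from this direct pairwise formula, which suggests the study may have aggregated F1 over answers rather than computing it from the overall precision/recall.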
Affiliation(s)
- Fabian Stoehr, Department of Diagnostic and Interventional Radiology, University Medical Center, Johannes Gutenberg-University Mainz, Langenbeckstr. 1, 55131 Mainz, Germany
- Benedikt Kämpgen, Empolis Information Management GmbH, Leightonstraße 2, 97074 Würzburg, Germany
- Lukas Müller, Department of Diagnostic and Interventional Radiology, University Medical Center, Johannes Gutenberg-University Mainz, Langenbeckstr. 1, 55131 Mainz, Germany
- Laura Oleaga Zufiría, Department of Radiology, Hospital Clínic de Barcelona, C. de Villarroel 170, 08036 Barcelona, Spain
- Peter Mildenberger, Department of Diagnostic and Interventional Radiology, University Medical Center, Johannes Gutenberg-University Mainz, Langenbeckstr. 1, 55131 Mainz, Germany
- Roman Kloeckner, Institute of Interventional Radiology, University Hospital Schleswig-Holstein, Campus Luebeck, Ratzeburger Allee 160, 23583 Luebeck, Germany
5
Goins SM, French RJ, Martin JG. The Use of Structured Oral Exams for the Assessment of Medical Students in their Radiology Clerkship. Curr Probl Diagn Radiol 2023;52:330-333. [PMID: 37032291] [DOI: 10.1067/j.cpradiol.2023.03.010]
Abstract
RATIONALE & OBJECTIVES There is increasing interest in narrative feedback and competency-based evaluation in medical student education. This study evaluates the implementation of a structured oral exam in a required radiology clerkship in furtherance of these aims. MATERIALS & METHODS A structured oral exam was instituted in academic year (AY) 20-21. Students prepared to discuss 5 varied imaging cases as they would with a medical colleague and as they would with a patient. In AY 20-21, students took both the oral and a written exam. In AY 21-22, students took the oral exam alone, and the written exam was discontinued. The perceived educational value of clerkship components, including the oral and written exams, was scored by the students on a 5-point Likert scale. RESULTS All students in AY 20-21 received a passing score on the written (mean 89.0, SD 4.59) and oral exams. All students in AY 21-22 received a passing score on the oral exam. In AY 20-21, the educational value of the oral exam was rated significantly higher than that of the written exam (4.30 vs 4.02, P = 0.021). There was no significant difference in the rating of the oral exam between AY 20-21 and AY 21-22 (4.30 vs 4.38; P = 0.499). CONCLUSION The implementation of a structured final oral exam in a required radiology clerkship was felt to be successful in delivering educational value while evaluating students for competency. Further evaluation of oral exams in radiology medical student education is warranted to optimize the career preparation of future physicians.
Affiliation(s)
- Robert J French, Department of Radiology, Duke University School of Medicine, Durham, NC
- Jonathan G Martin, Department of Radiology, Duke University School of Medicine, Durham, NC
6
Kanzow P, Schmidt D, Herrmann M, Wassmann T, Wiegand A, Raupach T. Use of Multiple-Select Multiple-Choice Items in a Dental Undergraduate Curriculum: Retrospective Study Involving the Application of Different Scoring Methods. JMIR Medical Education 2023;9:e43792. [PMID: 36841970] [PMCID: PMC10131704] [DOI: 10.2196/43792]
Abstract
BACKGROUND Scoring and awarding credit are more complex for multiple-select items than for single-choice items. Forty-one different scoring methods were retrospectively applied to 2 multiple-select multiple-choice item types (Pick-N and Multiple-True-False [MTF]) from existing examination data. OBJECTIVE This study aimed to calculate and compare the mean scores for both item types under the different scoring methods, and to investigate the effect of item quality on mean raw scores and on the likelihood of resulting scores at or above the pass level (≥0.6). METHODS Items and responses from examinees (ie, marking events) were retrieved from previous examinations. The different scoring methods were retrospectively applied to the existing examination data to calculate the corresponding examination scores. In addition, item quality was assessed using a validated checklist. Statistical analysis was performed using the Kruskal-Wallis test, Wilcoxon rank-sum test, and multiple logistic regression analysis (P<.05). RESULTS We analyzed 1931 marking events of 48 Pick-N items and 828 marking events of 18 MTF items. For both item types, scoring results differed widely between scoring methods (minimum: 0.02, maximum: 0.98; P<.001). Both the use of an inappropriate item type (34 items) and the presence of cues (30 items) impacted the scoring results. Inappropriately used Pick-N items resulted in lower mean raw scores (0.88 vs 0.93; P<.001), while inappropriately used MTF items resulted in higher mean raw scores (0.88 vs 0.85; P=.001). Mean raw scores were higher for MTF items with cues than for those without cues (0.91 vs 0.8; P<.001), while mean raw scores for Pick-N items with and without cues did not differ (0.89 vs 0.90; P=.09). Item quality also impacted the likelihood of resulting scores at or above the pass level (odds ratio ≤6.977). CONCLUSIONS Educators should take care when using multiple-select multiple-choice items and select the most appropriate item type. Different item types, different scoring methods, and the presence of cues are likely to impact examinees' scores and overall examination results.
Collapse
Affiliation(s)
- Philipp Kanzow
- Department of Preventive Dentistry, Periodontology and Cariology, University Medical Center Göttingen, Göttingen, Germany
| | - Dennis Schmidt
- Department of Preventive Dentistry, Periodontology and Cariology, University Medical Center Göttingen, Göttingen, Germany
| | - Manfred Herrmann
- Division of Medical Education Research and Curriculum Development, Study Deanery of University Medical Center Göttingen, Göttingen, Germany
| | - Torsten Wassmann
- Department of Prosthodontics, University Medical Center Göttingen, Göttingen, Germany
| | - Annette Wiegand
- Department of Preventive Dentistry, Periodontology and Cariology, University Medical Center Göttingen, Göttingen, Germany
| | - Tobias Raupach
- Division of Medical Education Research and Curriculum Development, Study Deanery of University Medical Center Göttingen, Göttingen, Germany
- Department of Cardiology and Pneumology, University Medical Center Göttingen, Göttingen, Germany
- Institute for Medical Education, University Hospital Bonn, Bonn, Germany
| |
Collapse
|
7
|
Frey A, Leutritz T, Backhaus J, Hörnlein A, König S. Item format statistics and readability of extended matching questions as an effective tool to assess medical students. Sci Rep 2022; 12:20982. [PMID: 36470965 PMCID: PMC9723123 DOI: 10.1038/s41598-022-25481-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Accepted: 11/30/2022] [Indexed: 12/12/2022] Open
Abstract
Testing based on multiple choice questions (MCQ) is one of the most established forms of assessment, not only in the medical field. Extended matching questions (EMQ) represent a specific type of MCQ designed to require higher levels of cognition, such as problem-solving. The purpose of this evaluation was to assess the suitability and efficiency of EMQ as an assessment method. EMQ were incorporated into the end-of-semester examination in internal medicine, in which 154 students participated, and compared with three established MCQ types. Item and examination quality were investigated, as well as readability and processing time. EMQ were slightly more difficult to score; however, both item discrimination and discrimination index were higher when compared to other item types. EMQ were found to be significantly longer and required more processing time, but readability was improved. Students judged EMQ as clearly challenging, but attributed significantly higher clinical relevance when compared to established MCQ formats. Using the Spearman-Brown prediction, only ten EMQ items would be needed to reproduce the Cronbach's alpha value of 0.75 attained for the overall examination. EMQ proved to be both efficient and suitable when assessing medical students, demonstrating powerful characteristics of reliability. Their expanded use in favor of common MCQ could save examination time without losing out on statistical quality.
Collapse
Affiliation(s)
- Anna Frey
- grid.411760.50000 0001 1378 7891Department of Internal Medicine I, University Hospital of Würzburg, Oberdürrbacher Str. 6-8, 97080 Würzburg, Germany ,grid.411760.50000 0001 1378 7891Institute of Medical Teaching and Medical Education Research, University Hospital of Würzburg, Würzburg, Germany
| | - Tobias Leutritz
- grid.411760.50000 0001 1378 7891Institute of Medical Teaching and Medical Education Research, University Hospital of Würzburg, Würzburg, Germany
| | - Joy Backhaus
- grid.411760.50000 0001 1378 7891Institute of Medical Teaching and Medical Education Research, University Hospital of Würzburg, Würzburg, Germany
| | - Alexander Hörnlein
- grid.8379.50000 0001 1958 8658University Datacenter, University of Würzburg, Würzburg, Germany
| | - Sarah König
- grid.411760.50000 0001 1378 7891Institute of Medical Teaching and Medical Education Research, University Hospital of Würzburg, Würzburg, Germany
| |
Collapse
|
8
|
Boev C. Psychometric Analysis and Evaluation of Next Gen Item Types: Enhanced Hot Spot. Nurse Educ 2022; 47:E154-E155. [PMID: 35881976 DOI: 10.1097/nne.0000000000001275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Affiliation(s)
- Christine Boev
- Wegmans School of Nursing, St John Fisher College, Rochester, New York
| |
Collapse
|
9
|
Betts J, Muntean W, Kim D, Kao SC. Evaluating Different Scoring Methods for Multiple Response Items Providing Partial Credit. EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT 2022; 82:151-176. [PMID: 34992310 PMCID: PMC8725057 DOI: 10.1177/0013164421994636] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw scores. This research will evaluate several approaches found in the literature using an approach that evaluates how the inclusion of scoring related to the selection/nonselection of both relevant and irrelevant information is incorporated extending Wilson's approach. Results indicated all methods have potential, but the plus/minus and true/false methods seemed the most promising for items using the "select all that apply" instruction set. Additionally, these methods showed a large increase in information per time unit over the dichotomous method.
Collapse
Affiliation(s)
- Joe Betts
- National Council of State Boards of Nursing, Chicago, IL, USA
| | - William Muntean
- National Council of State Boards of Nursing, Chicago, IL, USA
| | - Doyoung Kim
- National Council of State Boards of Nursing, Chicago, IL, USA
| | - Shu-chuan Kao
- National Council of State Boards of Nursing, Chicago, IL, USA
| |
Collapse
|
10
|
Huth KC, von Bronk L, Kollmuss M, Lindner S, Durner J, Hickel R, Draenert ME. Special Teaching Formats during the COVID-19 Pandemic-A Survey with Implications for a Crisis-Proof Education. J Clin Med 2021; 10:jcm10215099. [PMID: 34768621 PMCID: PMC8584389 DOI: 10.3390/jcm10215099] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 10/22/2021] [Accepted: 10/28/2021] [Indexed: 12/19/2022] Open
Abstract
Modern teaching formats have not been considered necessary during the COVID-19 pandemic with uncertain acceptance by students. The study’s aim was to describe and evaluate all measures undertaken for theoretical and practical knowledge/skill transfer, which included objective structured practical examinations (OSPEs) covering a communication skills training. The students’ performance in the OSPE as well as the theoretical knowledge level were assessed, of which the latter was compared with previous terms. In conservative dentistry and periodontology (4th and 5th year courses), theoretical teaching formats were provided online and completed by a multiple-choice test. Practical education continued without patients in small groups using the phantom-head, 3D printed teeth, and objective structured practical examinations (OSPEs) including communication skills training. Formats were evaluated by a questionnaire. The organization was rated as very good/good (88.6%), besides poor Internet connection (22.8%) and Zoom® (14.2%) causing problems. Lectures with audio were best approved (1.48), followed by practical videos (1.54), live stream lectures (1.81), treatment checklists (1.81), and virtual problem-based learning (2.1). Lectures such as .pdf files without audio, articles, or scripts were rated worse (2.15–2.30). Phantom-heads were considered the best substitute for patient treatment (59.5%), while additional methodical efforts for more realistic settings led to increased appraisal. However, students performed significantly worse in the multiple-choice test compared to the previous terms (p < 0.0001) and the OSPEs revealed deficits in the students’ communication skills. In the future, permanent available lectures with audio and efforts toward realistic treatment settings in the case of suspended patient treatment will be pursued.
Collapse
|
11
|
Fink MC, Heitzmann N, Siebeck M, Fischer F, Fischer MR. Learning to diagnose accurately through virtual patients: do reflection phases have an added benefit? BMC MEDICAL EDUCATION 2021; 21:523. [PMID: 34620156 PMCID: PMC8497044 DOI: 10.1186/s12909-021-02937-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Accepted: 09/04/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND Simulation-based learning with virtual patients is a highly effective method that could potentially be further enhanced by including reflection phases. The effectiveness of reflection phases for learning to diagnose has mainly been demonstrated for problem-centered instruction with text-based cases, not for simulation-based learning. To close this research gap, we conducted a study on learning history-taking using virtual patients. In this study, we examined the added benefit of including reflection phases on learning to diagnose accurately, the associations between knowledge and learning, and the diagnostic process. METHODS A sample of N = 121 medical students completed a three-group experiment with a control group and pre- and posttests. The pretest consisted of a conceptual and strategic knowledge test and virtual patients to be diagnosed. In the learning phase, two intervention groups worked with virtual patients and completed different types of reflection phases, while the control group learned with virtual patients but without reflection phases. The posttest again involved virtual patients. For all virtual patients, diagnostic accuracy was assessed as the primary outcome. Current hypotheses were tracked during reflection phases and in simulation-based learning to measure diagnostic process. RESULTS Regarding the added benefit of reflection phases, an ANCOVA controlling for pretest performance found no difference in diagnostic accuracy at posttest between the three conditions, F(2, 114) = 0.93, p = .398. Concerning knowledge and learning, both pretest conceptual knowledge and strategic knowledge were not associated with learning to diagnose accurately through reflection phases. Learners' diagnostic process improved during simulation-based learning and the reflection phases. CONCLUSIONS Reflection phases did not have an added benefit for learning to diagnose accurately in virtual patients. 
This finding indicates that reflection phases may not be as effective in simulation-based learning as in problem-centered instruction with text-based cases and can be explained with two contextual differences. First, information processing in simulation-based learning uses the verbal channel and the visual channel, while text-based learning only draws on the verbal channel. Second, in simulation-based learning, serial cue cases are used to gather information step-wise, whereas, in text-based learning, whole cases are used that present all data at once.
Collapse
Affiliation(s)
- Maximilian C Fink
- Institute of Medical Education, University Hospital, LMU Munich, Pettenkoferstr. 8a, 80336, Munich, Germany.
- Institute of Education, Universität der Bundeswehr München, Neubiberg, Germany.
- Nicole Heitzmann
- Department of Psychology, LMU Munich, Munich, Germany
- Munich Center of the Learning Sciences, LMU Munich, Munich, Germany
- Matthias Siebeck
- Institute of Medical Education, University Hospital, LMU Munich, Pettenkoferstr. 8a, 80336, Munich, Germany
- Munich Center of the Learning Sciences, LMU Munich, Munich, Germany
- Frank Fischer
- Department of Psychology, LMU Munich, Munich, Germany
- Munich Center of the Learning Sciences, LMU Munich, Munich, Germany
- Martin R Fischer
- Institute of Medical Education, University Hospital, LMU Munich, Pettenkoferstr. 8a, 80336, Munich, Germany
- Munich Center of the Learning Sciences, LMU Munich, Munich, Germany
|
12
|
Cohen Aubart F, Lhote R, Hertig A, Noel N, Costedoat-Chalumeau N, Cariou A, Meyer G, Cymbalista F, de Prost N, Pottier P, Joly L, Lambotte O, Renaud MC, Badoual C, Braun M, Palombi O, Duguet A, Roux D. Progressive clinical case-based multiple-choice questions: An innovative way to evaluate and rank undergraduate medical students. Rev Med Interne 2021; 42:302-309. [PMID: 33518414 DOI: 10.1016/j.revmed.2020.11.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2020] [Revised: 10/04/2020] [Accepted: 11/10/2020] [Indexed: 11/28/2022]
Abstract
INTRODUCTION In France, at the end of the sixth year of medical studies, students take a national ranking examination including progressive clinical case-based multiple-choice questions (MCQs). We aimed to evaluate the ability of these MCQs to test higher-order thinking rather than mere knowledge recall, and to identify their characteristics associated with success and discrimination. METHODS We analysed the 72 progressive clinical cases taken by the students in the years 2016-2019, through an online platform. RESULTS A total of 72 progressive clinical cases (18 for each of the 4 studied years), corresponding to 1059 questions, were analysed. Most of the clinical cases (n=43, 60%) had 15 questions. Clinical questions represented 89% of all questions, whereas basic sciences questions accounted for 9%. The most frequent medical subspecialties were internal medicine (n=90, 8%) and infectious diseases (n=88, 8%). The most frequent question types concerned therapeutics (26%), exams (19%), diagnosis (14%), and semiology (13%). Level 2 questions ("understand and apply") accounted for 59% of all questions according to Bloom's taxonomy. The level of Bloom's taxonomy changed significantly over time, with a decreasing number of level 1 questions ("remember") (P=0.04). We also analysed students' results on 853 questions from training ECNi examinations. Success and discrimination decreased significantly as the number of correct answers increased (P<0.0001 for both). Success, discrimination, mean score, and mean number of discrepancies did not differ between diagnosis, exam, imaging, semiology, and therapeutics question types. CONCLUSION Progressive clinical case-based MCQs represent an innovative way to evaluate undergraduate students.
Affiliation(s)
- F Cohen Aubart
- Service de médecine interne 2, hôpital Pitié-Salpêtrière, centre national de référence maladies systémiques rares et histiocytoses, Sorbonne université, Assistance publique-Hôpitaux de Paris, 47-83, boulevard de l'Hôpital, 75651 Paris cedex 13, France.
- R Lhote
- Service de médecine interne 2, hôpital Pitié-Salpêtrière, centre national de référence maladies systémiques rares et histiocytoses, Sorbonne université, Assistance publique-Hôpitaux de Paris, 47-83, boulevard de l'Hôpital, 75651 Paris cedex 13, France
- A Hertig
- Service de transplantation rénale, hôpital Pitié-Salpêtrière, Sorbonne université, Assistance publique-Hôpitaux de Paris, 75013 Paris, France
- N Noel
- Service de médecine interne, hôpital du Kremlin-Bicêtre, Assistance publique-Hôpitaux de Paris, 94250 Le Kremlin Bicêtre, France
- N Costedoat-Chalumeau
- Département de médecine interne, hôpital Cochin, Assistance publique-Hôpitaux de Paris, centre de référence maladies autoimmunes et systémiques rares, université de Paris, Cress, Inserm, INRA, 75014 Paris, France
- A Cariou
- Service de médecine intensive et réanimation, hôpital Cochin, Assistance publique-Hôpitaux de Paris, centre-université de Paris, 75014 Paris, France
- G Meyer
- Service de pneumologie, hôpital européen Georges-Pompidou, Assistance publique-Hôpitaux de Paris, 75015 Paris, France
- F Cymbalista
- Service d'hématologie, hôpital Avicenne, Assistance publique-Hôpitaux de Paris, 93000 Bobigny, France
- N de Prost
- Service de réanimation médicale, hôpitaux universitaires Henri-Mondor, Assistance publique-Hôpitaux de Paris, groupe de recherche clinique CARMAS, université Paris Est-Créteil, 94000 Créteil, France
- P Pottier
- Service de médecine interne, CHU de Nantes, université de Nantes, site Hôtel Dieu, 44000 Nantes, France
- L Joly
- Service de gériatrie, hôpitaux de Brabois, université de Lorraine, CHRU de Nancy, 54500 Vandoeuvre Les Nancy, France
- O Lambotte
- Service de médecine interne, hôpital du Kremlin-Bicêtre, Assistance publique-Hôpitaux de Paris, 94250 Le Kremlin Bicêtre, France
- M-C Renaud
- Faculté de médecine, Sorbonne université, 75013 Paris, France
- C Badoual
- Service d'anatomopathologie, hôpital européen Georges-Pompidou, université de Paris, 75015 Paris, France
- M Braun
- Service de neuroradiologie, CHRU de Nancy, université de Lorraine, 54500 Nancy, France
- O Palombi
- Service de neurochirurgie, CHU de Grenoble, université Grenoble Alpes, 38000 Grenoble, France
- A Duguet
- Service de pneumologie, hôpital Pitié-Salpêtrière, Sorbonne université, Assistance publique-Hôpitaux de Paris, 75013 Paris, France
- D Roux
- Service de médecine intensive réanimation, hôpital Louis-Mourier, université de Paris, Assistance publique-Hôpitaux de Paris, 92700 Colombes, France; Inserm, IAME, UMR-1137, 75018 Paris, France
|
13
|
Holzinger A, Lettner S, Steiner-Hofbauer V, Capan Melser M. How to assess? Perceptions and preferences of undergraduate medical students concerning traditional assessment methods. BMC MEDICAL EDUCATION 2020; 20:312. [PMID: 32943049 PMCID: PMC7499861 DOI: 10.1186/s12909-020-02239-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2019] [Accepted: 09/10/2020] [Indexed: 05/31/2023]
Abstract
BACKGROUND Medical students' perceptions of traditional assessments have an important impact on their approaches to learning. Even though assessment formats such as multiple-choice questions (MCQs), short-answer questions (SAQs), and oral examinations are frequently used in medical curricula, little is known about students' perceptions of these assessments. The objective of this study was to assess the perceptions and preferences of undergraduate medical students concerning traditional assessment formats. METHODS The study was conducted at the Medical University of Vienna. The attitudes of 2nd-year undergraduate medical students towards traditional assessment formats, their relation to students' learning, and students' attitudes towards objectivity were surveyed using a self-developed questionnaire. RESULTS 459 students participated in this study. MCQ examinations were the most preferred assessment format and were rated as the most objective. Most students agreed that oral examinations are more appropriate for achieving long-term knowledge. Female students showed a higher preference for oral examinations than male students. If free to choose the assessment tools, students would prefer an assessment mix of 41.8% MCQs, 24.0% oral examinations, and 9.5% SAQs. CONCLUSION Students prefer the MCQ format over SAQs and oral examinations. Students' subjective perception of the importance of gaining long-term knowledge through an assessment has no influence on their assessment preference.
Affiliation(s)
- Anita Holzinger
- Research Unit for Curriculum Development, Teaching Center/Medical University of Vienna, Spitalgasse 23, Bauteil 87, A-1090 Vienna, Austria
- University Clinic of Dentistry/Medical University of Vienna, Vienna, Austria
- Stefan Lettner
- University Clinic of Dentistry/Medical University of Vienna, Vienna, Austria
- Verena Steiner-Hofbauer
- Research Unit for Curriculum Development, Teaching Center/Medical University of Vienna, Spitalgasse 23, Bauteil 87, A-1090 Vienna, Austria
- Meskuere Capan Melser
- Research Unit for Curriculum Development, Teaching Center/Medical University of Vienna, Spitalgasse 23, Bauteil 87, A-1090 Vienna, Austria
|
14
|
Al Ojaimi M, Khairallah M, Younes R, Salloum S, Zgheib G. National Board of Medical Examiners and Curriculum Change: What Do Scores Tell Us? A Case Study at the University of Balamand Medical School. JOURNAL OF MEDICAL EDUCATION AND CURRICULAR DEVELOPMENT 2020; 7:2382120520925062. [PMID: 32782928 PMCID: PMC7383639 DOI: 10.1177/2382120520925062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 04/16/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVES This study describes the results of NBME (National Board of Medical Examiners) implementation at Balamand Medical School (BMS) from 2015 to 2019, after major curricular changes were introduced as of 2012. BMS students' performance was compared with the international USMLE Step 1 (United States Medical Licensing Examination, herein referred to as Step 1) cohorts' performance. The BMS students' NBME results were analyzed over successive academic years to assess the impact of the serial curricular changes that were implemented. METHODS This longitudinal study describes the performance of BMS preclinical second-year medicine (Med II) students on all their NBME exams over 4 academic years, from 2015-2016 to 2018-2019. These scores were compared with the Step 1 comparison group scores using item difficulty. A t test was computed for each of the NBME exams to check whether the score differences were significant. RESULTS Results revealed that all BMS cohorts scored lower than the international USMLE Step 1 comparison cohorts in all disciplines across the 4 academic years except Psychiatry. However, the results progressively approached Step 1 results, and the difference between Step 1 scores and BMS students' NBME scores narrowed and was no longer significant by year 4. CONCLUSIONS The results of the study are promising. They show that the serial curricular changes enabled BMS Med II students' scores to reach the international cohorts' scores after 4 academic years. Moreover, the absence of a statistical difference between cohort 4 scores and Step 1 cohorts is not module dependent and applies to all clinical modules. Further studies should be conducted to assess whether the results obtained for cohort 4 can be maintained.
Affiliation(s)
- Mode Al Ojaimi
- Faculty of Medicine, University of Balamand, Balamand, Lebanon
- Megan Khairallah
- Department of Education, University of Balamand, Balamand, Lebanon
- Rayya Younes
- Department of Education, University of Balamand, Balamand, Lebanon
- Sara Salloum
- Department of Education, University of Balamand, Balamand, Lebanon
- Ghania Zgheib
- Department of Education, University of Balamand, Balamand, Lebanon
|
15
|
Lahner FM, Lörwald AC, Bauer D, Nouns ZM, Krebs R, Guttormsen S, Fischer MR, Huwendiek S. Multiple true-false items: a comparison of scoring algorithms. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2018; 23:455-463. [PMID: 29189963 DOI: 10.1007/s10459-017-9805-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2016] [Accepted: 11/26/2017] [Indexed: 05/25/2023]
Abstract
Multiple true-false (MTF) items are a widely used supplement to the commonly used single-best answer (Type A) multiple choice format. However, an optimal scoring algorithm for MTF items has not yet been established, as existing studies yielded conflicting results. Therefore, this study analyzes two questions: What is the optimal scoring algorithm for MTF items regarding reliability, difficulty index and item discrimination? How do the psychometric characteristics of different scoring algorithms compare to those of Type A questions used in the same exams? We used data from 37 medical exams conducted in 2015 (998 MTF and 2163 Type A items overall). Using repeated measures analyses of variance (rANOVA), we compared reliability, difficulty and item discrimination of different scoring algorithms for MTF with four answer options and Type A. Scoring algorithms for MTF were dichotomous scoring (DS) and two partial credit scoring algorithms, PS50 where examinees receive half a point if more than half of true/false ratings were marked correctly and one point if all were marked correctly, and PS1/n where examinees receive a quarter of a point for every correct true/false rating. The two partial scoring algorithms showed significantly higher reliabilities (αPS1/n = 0.75; αPS50 = 0.75; αDS = 0.70, αA = 0.72), which corresponds to fewer items needed for a reliability of 0.8 (nPS1/n = 74; nPS50 = 75; nDS = 103, nA = 87), and higher discrimination indices (rPS1/n = 0.33; rPS50 = 0.33; rDS = 0.30; rA = 0.28) than dichotomous scoring and Type A. Items scored with DS tend to be difficult (pDS = 0.50), whereas items scored with PS1/n become easy (pPS1/n = 0.82). PS50 and Type A cover the whole range, from easy to difficult items (pPS50 = 0.66; pA = 0.73). Partial credit scoring leads to better psychometric results than dichotomous scoring. PS50 covers the range from easy to difficult items better than PS1/n. Therefore, for scoring MTF, we suggest using PS50.
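The three scoring algorithms compared in this study (DS, PS50, PS1/n) can be sketched in a few lines of Python. This is an illustrative sketch based on the algorithm descriptions in the abstract; the function name and data layout are our own, not the authors' implementation:

```python
def score_mtf(responses, key, method="PS50"):
    """Score one multiple true-false (MTF) item.

    responses, key: lists of booleans (examinee's true/false ratings
    vs. the answer key), typically four per item.
    DS     -- dichotomous: 1 point only if all ratings are correct.
    PS50   -- 0.5 points if more than half of the ratings are correct,
              1 point if all are correct.
    PS1/n  -- 1/n of a point per correct rating (n = number of ratings).
    """
    n = len(key)
    correct = sum(r == k for r, k in zip(responses, key))
    if method == "DS":
        return 1.0 if correct == n else 0.0
    if method == "PS50":
        if correct == n:
            return 1.0
        return 0.5 if correct > n / 2 else 0.0
    if method == "PS1/n":
        return correct / n
    raise ValueError(f"unknown method: {method}")

key = [True, False, True, True]
answer = [True, False, False, True]      # 3 of 4 ratings correct
print(score_mtf(answer, key, "DS"))      # 0.0
print(score_mtf(answer, key, "PS50"))    # 0.5
print(score_mtf(answer, key, "PS1/n"))   # 0.75
```

The example makes the study's finding tangible: a mostly-correct response earns nothing under dichotomous scoring but partial credit under PS50 and PS1/n, which is why the partial-credit algorithms spread examinees more finely.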
Affiliation(s)
- Felicitas-Maria Lahner
- Department of Assessment and Evaluation (AAE), Institute of Medical Education, University of Bern, Konsumstr 13, 3010, Bern, Switzerland.
- Andrea Carolin Lörwald
- Department of Assessment and Evaluation (AAE), Institute of Medical Education, University of Bern, Konsumstr 13, 3010, Bern, Switzerland
- Daniel Bauer
- Department of Education and Media, Institute of Medical Education, University of Bern, Bern, Switzerland
- Zineb Miriam Nouns
- Department of Assessment and Evaluation (AAE), Institute of Medical Education, University of Bern, Konsumstr 13, 3010, Bern, Switzerland
- René Krebs
- Department of Assessment and Evaluation (AAE), Institute of Medical Education, University of Bern, Konsumstr 13, 3010, Bern, Switzerland
- Sissel Guttormsen
- Institute of Medical Education, University of Bern, Bern, Switzerland
- Martin R Fischer
- Institute for Medical Education, University Hospital, LMU, Munich, Germany
- Sören Huwendiek
- Department of Assessment and Evaluation (AAE), Institute of Medical Education, University of Bern, Konsumstr 13, 3010, Bern, Switzerland
|
16
|
Effectiveness of longitudinal faculty development programs on MCQs items writing skills: A follow-up study. PLoS One 2017; 12:e0185895. [PMID: 29016659 PMCID: PMC5634605 DOI: 10.1371/journal.pone.0185895] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2017] [Accepted: 09/21/2017] [Indexed: 11/20/2022] Open
Abstract
This study examines the long-term impact of faculty development programs on the quality of multiple-choice question (MCQ) items and, in turn, on students' overall competency level in their yearly academic assessment. A series of longitudinal, highly structured faculty development workshops was conducted to improve MCQ item-writing skills. A total of 2207 MCQs were constructed by 58 participants for the assessment of 882 students' cognitive competency level during the academic years 2012-2015. The MCQs were analyzed for difficulty index (P-value), discrimination index (DI), presence/absence of item-writing flaws (IWFs), non-functioning distractors (NFDs), Bloom's taxonomy cognitive levels, test reliability, and students' scoring rate. Significant improvement in the difficulty index and DI was observed in each successive academic year. Easy and poorly discriminating questions, NFDs, and IWFs decreased significantly, whereas the mean distractor efficiency (DE) score and the number of high-cognitive-level (K2) questions increased substantially in each successive academic year. Improved MCQ quality led to an increased competency level among borderline students. Overall, the longitudinal faculty development workshops helped improve faculty members' MCQ item-writing skills, which in turn led to higher student competency levels.
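The two classical item statistics this study tracks, the difficulty index (P-value, the proportion of examinees answering correctly) and the discrimination index (DI), can be computed from raw response data roughly as follows. This is a minimal sketch: the upper/lower 27% grouping is one common convention, and the function name is our own, not taken from the paper:

```python
def item_statistics(item_scores, total_scores, frac=0.27):
    """Difficulty index (P) and upper-lower discrimination index (DI).

    item_scores:  1/0 per examinee for one item.
    total_scores: each examinee's total test score (same order).
    DI contrasts how often the top and bottom `frac` of examinees
    (ranked by total score) answered this item correctly.
    """
    n = len(item_scores)
    p = sum(item_scores) / n                       # proportion correct
    order = sorted(range(n), key=lambda i: total_scores[i])
    g = max(1, int(n * frac))                      # group size
    low = sum(item_scores[i] for i in order[:g])   # weakest examinees
    high = sum(item_scores[i] for i in order[-g:]) # strongest examinees
    return p, (high - low) / g

item = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
totals = [9, 8, 7, 3, 2, 10, 1, 6, 5, 4]
p, di = item_statistics(item, totals)
print(p, di)   # 0.6 1.0
```

In this toy example the item is of moderate difficulty (P = 0.6) and discriminates perfectly: the two strongest examinees got it right and the two weakest got it wrong, hence DI = 1.0.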
|
17
|
Findyartini A, Werdhani RA, Iryani D, Rini EA, Kusumawati R, Poncorini E, Primaningtyas W. Collaborative progress test (cPT) in three medical schools in Indonesia: the validity, reliability and its use as a curriculum evaluation tool. MEDICAL TEACHER 2015; 37:366-373. [PMID: 25186846 DOI: 10.3109/0142159x.2014.948831] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
BACKGROUND Three medical schools in Indonesia have been collaborating in evaluating the learning process in the current curriculum by administering a collaborative progress test (cPT). The 120 multiple-choice questions for the cPT were developed by the three schools. This study aimed to assess the validity and reliability of the cPT as part of curriculum evaluation. METHOD The cPT was administered to year 1-5 students. Stratified random sampling based on student Grade Point Average (GPA) was performed. Construct validity was established by assessing whether the mean cPT score increased in accordance with student year level. Finally, the reliability of the cPT was calculated using the Cronbach alpha coefficient. RESULTS AND DISCUSSION A total of 223, 219, and 161 year 1-5 students completed the cPT at FM UI, FM UNAND, and FM UNS, respectively. The content and construct validity of the cPT were evident. The mean score increased from year 1 to 5, both in the pooled data (one-way ANOVA F 174.7(4), p < 0.001) and in each school (one-way ANOVA: FM UI F 102.5(4), p < 0.001; FM UNAND F 83.0(4), p < 0.001; FM UNS F 28.28(4), p < 0.001). The internal consistency of the cPT was very good in all three institutions. CONCLUSION The cPT proved to be a valid and reliable test to measure the increase in medical students' knowledge and was also useful in providing feedback for curriculum evaluation in the three medical schools. Further improvement is required in assuring the test blueprint and the content of the test items.
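The Cronbach alpha coefficient used here for the reliability analysis follows a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch (function name ours, not from the paper; assumes a complete examinee-by-item score matrix with some variance in total scores):

```python
def cronbach_alpha(item_matrix):
    """Cronbach's alpha for an examinee x item score matrix (list of rows)."""
    k = len(item_matrix[0])            # number of items
    n = len(item_matrix)               # number of examinees

    def variance(xs):                  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[j] for row in item_matrix]) for j in range(k)]
    total_var = variance([sum(row) for row in item_matrix])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items: every examinee answers all items the same way.
scores = [[1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0]]
print(cronbach_alpha(scores))
```

With perfectly consistent items, as in the toy matrix above, alpha reaches its maximum of 1; real tests such as the cPT land somewhere below that, and "very good" internal consistency conventionally means alpha around 0.8 or higher.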
|
18
|
Mubuuke AG, Mwesigwa C, Maling S, Rukundo G, Kagawa M, Kitara DL, Kiguli S. Standardizing assessment practices of undergraduate medical competencies across medical schools: challenges, opportunities and lessons learned from a consortium of medical schools in Uganda. Pan Afr Med J 2014; 19:382. [PMID: 25995778 PMCID: PMC4430042 DOI: 10.11604/pamj.2014.19.382.5283] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2014] [Accepted: 12/13/2014] [Indexed: 11/25/2022] Open
Abstract
Introduction Health professions education is gradually moving away from more traditional approaches towards new, innovative ways of training aimed at producing professionals with the competencies necessary to address community health needs. In response to these emerging trends, Medical Education for Equitable Services to All Ugandans (MESAU), a consortium of Ugandan medical schools, developed key competencies desirable of graduates and successfully implemented Competency-Based Education (CBE) for undergraduate medical students. Objectives To examine the current situation and establish whether assessment methods for the competencies are standardized across MESAU schools, as well as to establish the challenges, opportunities, and lessons learned from the MESAU consortium. Methods This was a cross-sectional descriptive study involving faculty of the medical schools in Uganda. Data were collected using focus group discussions and document reviews. Findings were presented in the form of themes. Results Although the MESAU schools have implemented the developed competencies within their curricula, the assessment methods are still not standardized, with each institution having its own assessment procedures. A lack of knowledge and skills regarding assessment of the competencies was evident amongst the faculty. Fear of change amongst lecturers was also noted as a major challenge. However, the institutional collaboration created while developing the competencies was identified as a key strength. Conclusion The findings demonstrated that despite having common competencies, there is no standardized assessment blueprint applicable to all MESAU schools. Continued collaboration and faculty development in assessment are strongly recommended.
Affiliation(s)
- Samuel Maling
- Mbarara University of Science & Technology, Faculty of Medicine, Mbarara, Uganda
- Godfrey Rukundo
- Kampala International University, Faculty of Medicine, Kampala, Uganda
- Mike Kagawa
- Makerere University, College of Health Sciences, Kampala, Uganda
- Sarah Kiguli
- Makerere University, College of Health Sciences, Kampala, Uganda
|
19
|
Abu-Zaid A, Khan TA. Assessing declarative and procedural knowledge using multiple-choice questions. MEDICAL EDUCATION ONLINE 2013; 18:21132. [PMID: 23702429 PMCID: PMC3662862 DOI: 10.3402/meo.v18i0.21132] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2013] [Revised: 04/24/2013] [Accepted: 04/24/2013] [Indexed: 06/02/2023]
Affiliation(s)
- Ahmed Abu-Zaid
- College of Medicine, Alfaisal University, P.O. Box 50927, Riyadh 11533, Saudi Arabia. Tel: +966 566 305 700; Fax: +966 1 215 7777.
|