Effectiveness of e-Learning in a Medical School 2.0 Model: Comparison of Item Analysis for Student-Generated vs. Faculty-Generated Multiple-Choice Questions.
Stud Health Technol Inform 2019;257:184-188. [PMID: 30741193]
Abstract
BACKGROUND
Early reports in the literature describe using student-generated questions both as a method of student learning and as a means of augmenting exam question banks. Reports on the performance of student-generated questions versus faculty-generated questions, however, remain limited. This study aims to compare the performance of student-generated versus faculty-generated multiple-choice questions (MCQs).
OBJECTIVES
To determine whether student-generated questions, created using mobile audience response systems and online discussion boards, have item discrimination scores similar to those of faculty-generated questions.
METHODS
A team-based learning session was used to create 113 student-generated multiple-choice questions (SGQs). A 20-question quiz composed of 10 randomly selected SGQs and 10 randomly selected faculty-generated multiple-choice questions (FGQs) was administered to a second-year medical school class. Item analysis was performed on the test results.
RESULTS
The data showed no statistically significant difference in point-biserial scores between the two groups (average point-biserial 0.31 for student questions vs 0.36 for faculty questions, p=0.14), with 90% of student-generated and 100% of faculty-generated questions meeting a cut-off point-biserial score of >0.2. Interestingly, student-generated questions were significantly more difficult than faculty-generated questions (item difficulty index 0.46 for students vs 0.69 for faculty, p=0.003; a lower index indicates a harder item).
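For readers unfamiliar with these item-analysis statistics: item difficulty is the proportion of examinees answering an item correctly, and the point-biserial is the Pearson correlation between a 0/1 item score and the examinee's total test score. A minimal sketch of both computations follows; the response data and function name are hypothetical illustrations, not taken from the study.

```python
import statistics

def point_biserial(item_correct, total_scores):
    """Point-biserial correlation between a dichotomous (0/1) item
    score and each examinee's total test score."""
    n = len(item_correct)
    p = sum(item_correct) / n  # item difficulty: proportion answering correctly
    mean_correct = statistics.mean(
        t for t, c in zip(total_scores, item_correct) if c == 1)
    mean_incorrect = statistics.mean(
        t for t, c in zip(total_scores, item_correct) if c == 0)
    sd = statistics.pstdev(total_scores)  # population SD of total scores
    # r_pb = (M1 - M0) / SD * sqrt(p * q)
    return (mean_correct - mean_incorrect) / sd * (p * (1 - p)) ** 0.5

# Hypothetical class of 8 examinees: 1 = answered this item correctly
correct = [1, 1, 0, 1, 0, 0, 1, 1]
totals = [18, 17, 9, 15, 11, 8, 16, 14]  # total quiz scores

difficulty = sum(correct) / len(correct)   # item difficulty index
rpb = point_biserial(correct, totals)      # discrimination
```

Items with a point-biserial above 0.2, the cut-off used in the study, are conventionally considered to discriminate adequately between stronger and weaker examinees.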
CONCLUSIONS
This study suggests that student-generated MCQs have item discrimination scores similar to those of faculty-generated MCQs, but may be more difficult.