1
Dyre L, Grierson L, Rasmussen KMB, Ringsted C, Tolsgaard MG. The concept of errors in medical education: a scoping review. Adv Health Sci Educ Theory Pract 2022; 27:761-792. [PMID: 35190892] [DOI: 10.1007/s10459-022-10091-0]
Abstract
The purpose of this scoping review was to explore how errors are conceptualized in medical education contexts by examining different error perspectives and practices. This review used a scoping methodology with a systematic search strategy to identify relevant studies, written in English, and published before January 2021. Four medical education journals (Medical Education, Advances in Health Science Education, Medical Teacher, and Academic Medicine) and four clinical journals (Journal of the American Medical Association, Journal of General Internal Medicine, Annals of Surgery, and British Medical Journal) were purposively selected. Data extraction was charted according to a data collection form. Of 1505 screened studies, 79 studies were included. Three overarching perspectives were identified: 'understanding errors' (n = 31), 'avoiding errors' (n = 25), and 'learning from errors' (n = 23). Studies that aimed at 'understanding errors' used qualitative methods (19/31, 61.3%) and took place in the clinical setting (19/31, 61.3%), whereas studies that aimed at 'avoiding errors' and 'learning from errors' used quantitative methods ('avoiding errors': 20/25, 80%, and 'learning from errors': 16/23, 69.6%, p = 0.007) and took place in pre-clinical (14/25, 56%) and simulated settings (10/23, 43.5%), respectively (p < 0.001). The three perspectives differed significantly in terms of inclusion of educational theory: 'understanding errors' studies 16.1% (5/31), 'avoiding errors' studies 48% (12/25), and 'learning from errors' studies 73.9% (17/23), p < 0.001. Errors in medical education and clinical practice are defined differently, which makes comparisons difficult. A uniform understanding is not necessarily a goal, but improving the transparency and clarity of how errors are currently conceptualized may improve our understanding of when, why, and how to use and learn from errors in the future.
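The between-perspective comparisons reported above (p = 0.007 for methods; p < 0.001 for settings and for theory use) are the kind of result a chi-square test of independence on a perspectives-by-methods contingency table produces. A minimal illustrative sketch follows; only one cell per row is reported in the abstract, so the other column is inferred as the group remainder, which is an assumption (the review itself may have categorized mixed-methods studies differently):

    # Chi-square test on a reconstructed perspectives-by-methods table.
    # The "qualitative/other" column is inferred as the remainder of each
    # group -- an assumption, not data reported in the abstract.
    import numpy as np
    from scipy.stats import chi2_contingency

    observed = np.array([
        [19, 12],  # 'understanding errors': qualitative/other, quantitative
        [5, 20],   # 'avoiding errors'
        [7, 16],   # 'learning from errors'
    ])

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")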
Affiliation(s)
- Liv Dyre
- Copenhagen Academy for Medical Education and Simulation (CAMES), Copenhagen University, Rigshospitalet, Ryesgade 53B, DK-2100, Copenhagen, Denmark.
- Lawrence Grierson
- Department of Family Medicine, Health Sciences Education Program, McMaster University, Hamilton, Canada
- Kasper Møller Boje Rasmussen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Copenhagen University, Rigshospitalet, Ryesgade 53B, DK-2100, Copenhagen, Denmark
- Department of Otorhinolaryngology, Head and Neck Surgery and Audiology, Copenhagen University Hospital Rigshospitalet, Blegdamsvej 9, DK-2100, Copenhagen, Denmark
- Martin G Tolsgaard
- Copenhagen Academy for Medical Education and Simulation (CAMES), Copenhagen University, Rigshospitalet, Ryesgade 53B, DK-2100, Copenhagen, Denmark
- Department of Obstetrics, Copenhagen University, Rigshospitalet, Blegdamsvej 9, DK-2100, Copenhagen, Denmark
2
Rajan KK, Pandit AS. Comparing computer-assisted learning activities for learning clinical neuroscience: a randomized control trial. BMC Med Educ 2022; 22:522. [PMID: 35780115] [PMCID: PMC9250740] [DOI: 10.1186/s12909-022-03578-2]
Abstract
BACKGROUND Computer-assisted learning has been suggested to improve enjoyment and learning efficacy in medical education and, more specifically, in neuroscience. Formats range from text-based websites to interactive electronic modules (eModules), and it remains uncertain how they can best be implemented. To assess the effects of interactivity on learning perceptions and efficacy, we compared the utility of an eModule using virtual clinical cases and graphics against a Wikipedia-like page of matching content for teaching clinical neuroscience: fundamentals of stroke and cerebrovascular anatomy. METHODS We performed a randomized controlled trial of an interactive eModule versus a Wikipedia-like page without interactivity. Participants remotely accessed their allocated learning activity once, for approximately 30 min. The primary outcome was the difference in perceptions of enjoyability, engagement, and usefulness. The secondary outcome was the difference in learning efficacy between the two learning activities. These were assessed using a Likert-scale survey and two knowledge quizzes, one immediately after the learning activity and one repeated eight weeks later, analysed using Mann-Whitney U tests and t-tests, respectively. RESULTS Thirty-two medical students participated, allocated evenly between the two groups through randomisation. The eModule was perceived as significantly more engaging (p = 0.0005), useful (p = 0.01) and enjoyable (p = 0.001) by students, with the main contributing factors being interactivity and clinical cases. There was a significant decrease between the first and second quiz scores for both the eModule group (-16%, p = 0.001) and the Wikipedia group (-17%, p = 0.003). There was no significant difference in quiz scores between the eModule and Wikipedia groups immediately afterwards (86% vs 85%, p = 0.8) or after eight weeks (71% vs 68%, p = 0.7). CONCLUSION Our study shows that the increased student satisfaction associated with interactive computer-assisted learning in the form of an eModule does not translate into increased learning efficacy compared with a Wikipedia-like webpage; the matched content of the passive webpage provided similar learning efficacy. Still, eModules can help motivate self-directed learners and overcome the perceived difficulty associated with neuroscience. As computer-assisted learning continues to expand rapidly among medical schools, we suggest educators critically evaluate the usage and cost-benefit of eModules.
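The analysis pairing described in METHODS (Mann-Whitney U for ordinal Likert ratings, t-tests for quiz scores) maps directly onto scipy; a minimal sketch, with all data invented for illustration (two arms of 16 students each, as in the trial):

    import numpy as np
    from scipy.stats import mannwhitneyu, ttest_ind

    rng = np.random.default_rng(0)

    # Hypothetical 5-point Likert engagement ratings for each arm (n = 16)
    emodule_likert = rng.integers(3, 6, size=16)
    wiki_likert = rng.integers(2, 5, size=16)

    # Ordinal ratings: compare with the non-parametric Mann-Whitney U test
    u_stat, p_likert = mannwhitneyu(emodule_likert, wiki_likert,
                                    alternative="two-sided")

    # Hypothetical quiz percentages: independent-samples t-test
    emodule_quiz = rng.normal(86, 8, size=16)
    wiki_quiz = rng.normal(85, 8, size=16)
    t_stat, p_quiz = ttest_ind(emodule_quiz, wiki_quiz)

    print(f"Likert: U = {u_stat:.1f}, p = {p_likert:.3f}")
    print(f"Quiz:   t = {t_stat:.2f}, p = {p_quiz:.3f}")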
Affiliation(s)
- Kiran Kasper Rajan
- Bristol Medical School (PHS), University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK.
- GKT School of Medical Education, King's College London, London, UK.
- Anand S Pandit
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, WC1N 3BG, UK
3
Manning E, Gagnon M. The complex patient: A concept clarification. Nurs Health Sci 2017; 19:13-21. [PMID: 28054430] [DOI: 10.1111/nhs.12320]
Abstract
Over the last decade, the concept of the "complex patient" has not only been used more widely in multidisciplinary healthcare teams and across various healthcare disciplines, but has also become more vacuous in meaning. Uptake of the concept spans disciplines such as medicine, nursing, and social work, with no consistent definition. We review the chronological evolution of this concept and its surrogate terms, namely "comorbidity," "multimorbidity," "polypathology," "dual diagnosis," and "multiple chronic conditions." Drawing on key principles of concept clarification, we highlight disciplinary usage in the health sciences literature published between 2005 and 2015, attending to overlaps and revealing nuances of the complex patient concept. Finally, we discuss the implications of this concept for practice, research, and theory.
Affiliation(s)
- Eli Manning
- Department of Gender, Sexuality, and Women's Studies, Simon Fraser University, Burnaby, British Columbia, Canada
- Marilou Gagnon
- School of Nursing, Faculty of Health Sciences, University of Ottawa, Ottawa, Ontario, Canada
4
Hassan BA, Elfaki OA, Khan MA. The impact of outpatient clinical teaching on students' academic performance in obstetrics and gynecology. J Family Community Med 2017; 24:196-199. [PMID: 28932165] [PMCID: PMC5596633] [DOI: 10.4103/jfcm.jfcm_48_16]
Abstract
INTRODUCTION Clinical teaching in outpatient settings is an essential part of undergraduate medical students' training. The increasing number of students in many medical schools and short hospital stays make inpatient teaching alone insufficient to provide students with the required clinical skills. To make up this shortfall, outpatient clinical teaching was implemented by our Department of Obstetrics and Gynecology, King Khalid University, KSA, throughout the academic year 2015-2016. The aim of this study was to evaluate the impact of clinical teaching in outpatient settings on the academic performance of our students. MATERIALS AND METHODS In this comparative retrospective study, the effect of outpatient clinical teaching of obstetrics and gynecology on the academic performance of students was assessed through an objective structured clinical examination (OSCE). During their course on obstetrics and gynecology, 58 students had their clinical teaching in both inpatient and outpatient settings and constituted the "study group". The remaining 52 students had clinical teaching only in inpatient settings and were considered the "control group". Students in both groups sat the OSCE at the end of week 8 of the gynecology course. Four stations were used for assessment: obstetric history, gynecological history, obstetric physical examination of pregnant women, and a gynecological procedure station. Twenty marks were allocated to each station, giving a total score of 80. The OSCE scores of the study group were compared with those of the control group using Student's t-test; p < 0.05 was considered statistically significant. RESULTS The total mean OSCE score was statistically significantly higher in the study group (62.36 vs. 47.94, p < 0.001). The study group participants showed significantly higher scores in the gynecological procedure station (16.74 vs. 11.62, p < 0.0001) and the obstetric examination station (16.72 vs. 10.79, p < 0.0001). CONCLUSION Clinical teaching in outpatient settings leads to an improvement in students' performance in the OSCE. There is evidence of marked improvement in the mastery of clinical skills, as manifested in the students' scores in the physical examination and procedure stations. These results encourage us to offer clinical teaching in outpatient settings in other disciplines.
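Student's t-test used for the OSCE comparison can be run directly from summary statistics; a sketch using the reported means and group sizes, with the standard deviations assumed (they are not given in the abstract):

    from scipy.stats import ttest_ind_from_stats

    # Means and group sizes are from the abstract; the standard
    # deviations are NOT reported there and are assumed for illustration.
    t_stat, p_value = ttest_ind_from_stats(
        mean1=62.36, std1=10.0, nobs1=58,  # study group (SD assumed)
        mean2=47.94, std2=10.0, nobs2=52,  # control group (SD assumed)
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.2e}")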
Affiliation(s)
- Bahaeldin A Hassan
- Department of Obstetrics and Gynecology, College of Medicine, King Khalid University, Abha, Saudi Arabia
- Omer A Elfaki
- Department of Medical Education, College of Medicine, King Khalid University, Abha, Saudi Arabia
- Muhammed A Khan
- Department of Medical Education, College of Medicine, King Khalid University, Abha, Saudi Arabia
5
Cook DA, Thompson WG, Thomas KG. Test-enhanced web-based learning: optimizing the number of questions (a randomized crossover trial). Acad Med 2014; 89:169-175. [PMID: 24280856] [DOI: 10.1097/acm.0000000000000084]
Abstract
PURPOSE Questions enhance learning in Web-based courses, but preliminary evidence suggests that too many questions may interfere with learning. The authors sought to determine how varying the number of self-assessment questions affects knowledge outcomes in a Web-based course. METHOD The authors conducted a randomized crossover trial in one internal medicine and one family medicine residency program between January 2009 and July 2010. Eight Web-based modules on ambulatory medicine topics were developed, with varying numbers of self-assessment questions (0, 1, 5, 10, or 15). Participants completed modules in four different formats each year, with sequence randomly assigned. Participants completed a pretest for half their modules. Outcomes included knowledge, completion time, and module ratings. RESULTS One hundred eighty residents provided data. The mean (standard error) percent correct knowledge score was 53.2 (0.8) for pretests and 73.7 (0.5) for posttests. In repeated-measures analysis pooling all data, mean posttest knowledge scores were highest for the 10- and 15-question formats (75.7 [1.1] and 74.4 [1.0], respectively) and lower for 0-, 1-, and 5-question formats (73.1 [1.3], 72.9 [1.0], and 72.8 [1.5], respectively); P = .04 for differences across all modules. Modules with more questions generally took longer to complete and were rated higher, although differences were small. Residents most often identified 10 questions as ideal. Posttest knowledge scores were higher for modules that included a pretest (75.4 [0.9] versus 72.2 [0.9]; P = .0002). CONCLUSIONS Increasing the number of self-assessment questions improves learning until a plateau beyond which additional questions do not add value.
Affiliation(s)
- David A Cook
- Dr. Cook is professor of medicine and medical education, Department of Medicine, College of Medicine, Mayo Clinic, and director, Office of Education Research, Mayo Medical School, Rochester, Minnesota. Dr. Thompson is associate professor of medicine, Department of Medicine, College of Medicine, Mayo Clinic, Rochester, Minnesota. Dr. Thomas is associate professor of medicine, Department of Medicine, College of Medicine, Mayo Clinic, Rochester, Minnesota
6
West CP, Shanafelt TD, Cook DA. Lack of association between resident doctors' well-being and medical knowledge. Med Educ 2010; 44:1224-1231. [PMID: 21091761] [DOI: 10.1111/j.1365-2923.2010.03803.x]
Abstract
OBJECTIVES Resident doctors' (residents') well-being affects the medical care they provide. Despite the high prevalence of distress among residents, the relationship between their well-being and the specific competencies defined by the Accreditation Council for Graduate Medical Education is poorly understood. We evaluated the association of resident well-being with medical knowledge as assessed both on a standardised test of general medical knowledge and at the end of web-based courses on a series of focused topics. METHODS We conducted a repeated cross-sectional study of associations between well-being and medical knowledge scores over time for internal medicine residents from July 2004 to June 2007. Well-being measures included linear analogue self-assessment (LASA) scales measuring quality of life (including overall quality of life, mental, physical and emotional well-being, and fatigue), the Medical Outcomes Study Eight-Item Short Form Health Survey (SF-8) assessment of mental and physical well-being, the Maslach Burnout Inventory, and the PRIME-MD two-item depression screen. We also measured empathy using the perspective-taking and empathic-concern subscales of the Interpersonal Reactivity Index. Medical knowledge measures included scores on web-based learning module post-tests and scores on the national Internal Medicine In-Training Examination (IM-ITE). As data for each association were available for at least 126 residents, the study was powered to detect a small-to-moderate effect size of 0.3 standard deviations. RESULTS No statistically significant associations were observed between well-being and either web-based learning module post-test score or IM-ITE score. Parameter estimates of the association of well-being variables with knowledge scores were uniformly small. For all well-being metrics, meaningful differences were associated with knowledge score difference estimates of < 1 percentage point. CONCLUSIONS Resident well-being appears to have limited association with competence in medical knowledge, whether assessed after web-based courses on specific topics or via standardised general medical examinations.
Affiliation(s)
- Colin P West
- Division of General Internal Medicine, Mayo Clinic, 200 First Street SW, Rochester, MN 55905, USA.
7
Cook DA, Levinson AJ, Garside S. Time and learning efficiency in Internet-based learning: a systematic review and meta-analysis. Adv Health Sci Educ Theory Pract 2010; 15:755-770. [PMID: 20467807] [DOI: 10.1007/s10459-010-9231-x]
Abstract
Authors have claimed that Internet-based instruction promotes greater learning efficiency than non-computer methods. OBJECTIVES To determine, through a systematic synthesis of evidence in health professions education, how Internet-based instruction compares with non-computer instruction in time spent learning, and what features of Internet-based instruction are associated with improved learning efficiency. DATA SOURCES We searched databases including MEDLINE, CINAHL, EMBASE, and ERIC from 1990 through November 2008. STUDY SELECTION AND DATA ABSTRACTION We included all studies quantifying learning time for Internet-based instruction for health professionals, compared with other instruction. Reviewers worked independently, in duplicate, to abstract information on interventions, outcomes, and study design. RESULTS We identified 20 eligible studies. Random-effects meta-analysis of 8 studies comparing Internet-based with non-Internet instruction (positive numbers indicating Internet took longer) revealed a pooled effect size (ES) for time of -0.10 (p = 0.63). Among comparisons of two Internet-based interventions, providing feedback adds time (ES 0.67, p = 0.003, two studies), and greater interactivity generally takes longer (ES 0.25, p = 0.089, five studies). One study demonstrated that adapting to learner prior knowledge saves time without significantly affecting knowledge scores. Other studies revealed that audio narration, video clips, interactive models, and animations increase learning time but also facilitate higher knowledge and/or satisfaction. Across all studies, time correlated positively with knowledge outcomes (r = 0.53, p = 0.021). CONCLUSIONS On average, Internet-based instruction and non-computer instruction require similar time. Instructional strategies to enhance feedback and interactivity typically prolong learning time, but in many cases also enhance learning outcomes. Isolated examples suggest potential for improving efficiency in Internet-based instruction.
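The random-effects pooling used in this review and the next is conventionally computed with the DerSimonian-Laird estimator; a self-contained sketch with invented per-study effect sizes and variances, which also computes the I² heterogeneity statistic cited in the following entry:

    import numpy as np

    def dersimonian_laird(es, var):
        """DerSimonian-Laird random-effects pooled effect size and I^2."""
        es, var = np.asarray(es, float), np.asarray(var, float)
        k = es.size
        w = 1.0 / var                          # fixed-effect weights
        mu_fe = np.sum(w * es) / np.sum(w)     # fixed-effect pooled ES
        q = np.sum(w * (es - mu_fe) ** 2)      # Cochran's Q
        tau2 = max(0.0, (q - (k - 1)) /
                   (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1.0 / (var + tau2)              # random-effects weights
        mu_re = np.sum(w_re * es) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        i2 = 100 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
        return mu_re, se_re, tau2, i2

    # Invented per-study standardized effect sizes and their variances
    es = [0.45, -0.10, 0.30, 0.05, 0.20]
    var = [0.04, 0.09, 0.02, 0.05, 0.03]
    mu, se, tau2, i2 = dersimonian_laird(es, var)
    print(f"pooled ES = {mu:.2f} (95% CI {mu - 1.96 * se:.2f} to "
          f"{mu + 1.96 * se:.2f}), tau^2 = {tau2:.3f}, I^2 = {i2:.0f}%")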
Affiliation(s)
- David A Cook
- Division of General Internal Medicine and Office of Education Research, Mayo Clinic College of Medicine, Baldwin 4-A, 200 First Street SW, Rochester, MN 55905, USA.
8
Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Instructional design variations in internet-based learning for health professions education: a systematic review and meta-analysis. Acad Med 2010; 85:909-922. [PMID: 20520049] [DOI: 10.1097/acm.0b013e3181d6c319]
Abstract
PURPOSE A recent systematic review (2008) described the effectiveness of Internet-based learning (IBL) in health professions education. A comprehensive synthesis of research investigating how to improve IBL is needed. This systematic review sought to provide such a synthesis. METHOD The authors searched MEDLINE, CINAHL, EMBASE, Web of Science, Scopus, ERIC, TimeLit, and the University of Toronto Research and Development Resource Base for articles published from 1990 through November 2008. They included all studies quantifying the effect of IBL compared with another Internet-based or computer-assisted instructional intervention on practicing and student physicians, nurses, pharmacists, dentists, and other health professionals. Reviewers working independently and in duplicate abstracted information, coded study quality, and grouped studies according to inductively identified themes. RESULTS From 2,705 articles, the authors identified 51 eligible studies, including 30 randomized trials. The pooled effect size (ES) for learning outcomes in 15 studies investigating high versus low interactivity was 0.27 (95% confidence interval, 0.08-0.46; P = .006). Also associated with higher learning were practice exercises (ES 0.40 [0.08-0.71; P = .01]; 10 studies), feedback (ES 0.68 [0.01-1.35; P = .047]; 2 studies), and repetition of study material (ES 0.19 [0.09-0.30; P < .001]; 2 studies). The ES was 0.26 (-0.62 to 1.13; P = .57) for three studies examining online discussion. Inconsistency was large (I² ≥ 89%) in most analyses. Meta-analyses for other themes generally yielded imprecise results. CONCLUSIONS Interactivity, practice exercises, repetition, and feedback seem to be associated with improved learning outcomes, although inconsistency across studies tempers conclusions. Evidence for other instructional variations remains inconclusive.
Affiliation(s)
- David A Cook
- Office of Education Research, College of Medicine, Mayo Clinic, Rochester, Minnesota 55905, USA.
9
Cook DA, Thompson WG, Thomas KG. Case-based or non-case-based questions for teaching postgraduate physicians: a randomized crossover trial. Acad Med 2009; 84:1419-1425. [PMID: 19881436] [DOI: 10.1097/acm.0b013e3181b6b36e]
Abstract
PURPOSE The comparative efficacy of case-based (CB) and non-CB self-assessment questions in Web-based instruction is unknown. The authors sought to compare CB and non-CB questions. METHOD The authors conducted a randomized crossover trial in the continuity clinics of two academic residency programs. Four Web-based modules on ambulatory medicine were developed in both CB (periodic questions based on patient scenarios) and non-CB (questions matched for content but lacking patient scenarios) formats. Participants completed two modules in each format (sequence randomly assigned). Participants also completed a pretest of applied knowledge for two modules (randomly assigned). RESULTS For the 130 participating internal medicine and family medicine residents, knowledge scores improved significantly (P < .0001) from pretest (mean: 53.5; SE: 1.1) to posttest (75.1; SE: 0.7). Posttest knowledge scores were similar in CB (75.0; SE: 0.1) and non-CB formats (74.7; SE: 1.1); the 95% CI was -1.6, 2.2 (P = .76). A nearly significant (P = .062) interaction between format and the presence or absence of pretest suggested a differential effect of question format, depending on pretest. Overall, those taking pretests had higher posttest knowledge scores (76.7; SE: 1.1) than did those not taking pretests (73.0; SE: 1.1; 95% CI: 1.7, 5.6; P = .0003). Learners preferred the CB format. Time required was similar (CB: 42.5; SE: 1.8 minutes, non-CB: 40.9; SE: 1.8 minutes; P = .22). CONCLUSIONS Our findings suggest that, among postgraduate physicians, CB and non-CB questions have similar effects on knowledge scores, but learners prefer CB questions. Pretests influence posttest scores.
Affiliation(s)
- David A Cook
- Office of Education Research, Division of General Internal Medicine, College of Medicine, Mayo Clinic, Baldwin 4-A, 200 First Street SW, Rochester, MN 55905, USA.
10
Cook DA, Beckman TJ, Thomas KG, Thompson WG. Adapting web-based instruction to residents' knowledge improves learning efficiency: a randomized controlled trial. J Gen Intern Med 2008; 23:985-990. [PMID: 18612729] [PMCID: PMC2517946] [DOI: 10.1007/s11606-008-0541-0]
Abstract
BACKGROUND Increased clinical demands and decreased available time accentuate the need for efficient learning in postgraduate medical training. Adapting Web-based learning (WBL) to learners' prior knowledge may improve efficiency. OBJECTIVE We hypothesized that time spent learning would be shorter, and test scores not adversely affected, for residents who used a WBL intervention that adapted to prior knowledge. DESIGN Randomized, crossover trial. SETTING Academic internal medicine residency program continuity clinic. PARTICIPANTS 122 internal medicine residents. INTERVENTIONS Four WBL modules on ambulatory medicine were developed in standard and adaptive formats. The adaptive format allowed learners who correctly answered case-based questions to skip the corresponding content. MEASUREMENTS AND MAIN RESULTS The measurements were knowledge posttest, time spent on modules, and format preference. One hundred twenty-two residents completed at least 1 module, and 111 completed all 4. Knowledge scores were similar between the adaptive format (mean ± standard error of the mean, 76.2 ± 0.9) and the standard format (77.2 ± 0.9; 95% confidence interval [CI] for difference -3.0 to 1.0; P = .34). However, time spent was lower for the adaptive format (29.3 minutes [CI 26.0 to 33.0] per module) than for the standard format (35.6 [31.6 to 40.3]), an 18% decrease in time (CI 9 to 26%, P = .0003). Seventy-two of 96 respondents (75%) preferred the adaptive format. CONCLUSIONS Adapting WBL to learners' prior knowledge can reduce learning time without adversely affecting knowledge scores, suggesting greater learning efficiency. In an era when reduced duty hours and growing clinical demands on trainees and faculty limit the time available for learning, such efficiencies will be increasingly important. For clinical trial registration, see NCT00466453 at http://www.clinicaltrials.gov ( http://www.clinicaltrials.gov/ct/show/NCT00466453?order=1 ).
Affiliation(s)
- David A Cook
- Division of General Internal Medicine, Mayo Clinic College of Medicine, Rochester, MN, USA.