1
Shammas M, Nagda S, Shah C, Baxi G, Gadde P, Sachdeva S, Gupta D, Wali O, Dhall RS, Gajdhar S. An assessment of preclinical removable prosthodontics based on multiple-choice questions: Stakeholders' perceptions. J Dent Educ 2024; 88:533-543. [PMID: 38314889 DOI: 10.1002/jdd.13462]
Abstract
PURPOSE Item analysis of multiple-choice questions (MCQs) is an essential tool for identifying items that can be stored, revised, or discarded to build a quality MCQ bank. This study analyzed MCQs by item analysis to develop a pool of valid and reliable items, and investigated stakeholders' perceptions of MCQs in a written summative assessment (WSA) based on this analysis. METHODS In this descriptive study, 55 questions from each year of the 2016-2019 WSAs in preclinical removable prosthodontics for fourth-year undergraduate dentistry students underwent item analysis. Items were categorized according to their difficulty index (DIF I) and discrimination index (DI). Students (2021-2022) were assessed using this question bank. Students' perceptions of the assessment and feedback from faculty members were collected using a questionnaire with a five-point Likert scale. RESULTS When both indices (DIF I and DI) were considered together, 144 of the 220 items (65.5%) were retained in the question bank, 66 (30%) required revision before incorporation, and only 10 (4.5%) were discarded. Across the 220 MCQs, the mean DIF I and DI values were 69% (standard deviation [Std.Dev] = 19) and 0.22 (Std.Dev = 0.16), respectively. Mean questionnaire scores ranged from 3.50 to 4.04 for students and from 4 to 5 for faculty members, indicating that stakeholders tended to agree and strongly agree, respectively, with the proposed statements. CONCLUSION This study assisted the prosthodontics department in creating a set of prevalidated questions with known difficulty and discrimination capacity.
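The retain/revise/discard triage described above can be expressed as a small rule. A minimal Python sketch, using hypothetical cut-offs (an acceptable DIF I of 30-70% and a DI of at least 0.25; the abstract does not state the study's actual thresholds):

```python
def classify_item(dif, di):
    """Triage one MCQ by difficulty index (dif, percent answering correctly)
    and discrimination index (di).  The thresholds below are illustrative
    conventions from the item-analysis literature, not this study's cut-offs."""
    if di < 0:
        return "discard"   # negative discrimination: flawed item
    if 30 <= dif <= 70 and di >= 0.25:
        return "retain"    # acceptable difficulty, adequate discrimination
    return "revise"        # too easy/hard, or weakly discriminating
```

For example, `classify_item(69, 0.22)` returns `"revise"` under these assumed thresholds.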
Affiliation(s)
- Mohammed Shammas
- Division of Prosthodontics, Department of Oral and Maxillofacial Rehabilitation, Ibn Sina National College for Medical Studies, Al Mahjar, Jeddah, Saudi Arabia
- Chinmay Shah
- Department of Physiology, Government Medical College, Bhavnagar, Gujarat, India
- Gaurang Baxi
- Dr. D. Y. Patil College of Physiotherapy, Dr. D. Y. Patil Vidyapeeth, Pune, Maharashtra, India
- Praveen Gadde
- Department of Public Health Dentistry, Vishnu Dental College, Bhimavaram, West Godavari (Dt), Andhra Pradesh, India
- Shabina Sachdeva
- Department of Prosthodontics, Faculty of Dentistry, Jamia Millia Islamia, New Delhi, India
- Deeksha Gupta
- Department of Prosthodontics, MP Dental College and Hospital, Vadodara, Gujarat, India
- Othman Wali
- Division of Periodontics, Department of Oral Basic and Clinical Sciences, Ibn Sina National College for Medical Studies, Al Mahjar, Jeddah, Saudi Arabia
- Rupinder Singh Dhall
- Department of Prosthodontics, Himachal Institute of Dental Sciences, Paonta Sahib, Himachal Pradesh, India
- Shaiq Gajdhar
- Division of Prosthodontics, Department of Oral and Maxillofacial Rehabilitation, Ibn Sina National College for Medical Studies, Al Mahjar, Jeddah, Saudi Arabia
2
Rezigalla AA, Eleragi AMESA, Elhussein AB, Alfaifi J, ALGhamdi MA, Al Ameer AY, Yahia AIO, Mohammed OA, Adam MIE. Item analysis: the impact of distractor efficiency on the difficulty index and discrimination power of multiple-choice items. BMC Medical Education 2024; 24:445. [PMID: 38658912 PMCID: PMC11040895 DOI: 10.1186/s12909-024-05433-y]
Abstract
BACKGROUND Distractor efficiency (DE) of multiple-choice question (MCQ) responses is a component of the psychometric analysis examiners use to evaluate the credibility and functionality of distractors. This study evaluated the impact of DE on the difficulty and discrimination indices. METHODS This cross-sectional study was conducted from April to June 2023 using the final exam of the Principles of Diseases course, taken by 45 second-year students. The exam consisted of 60 type-A MCQs. Item analysis was performed to evaluate the KR-20, difficulty index (DIF), discrimination index (DIS), and distractor efficiency (DE). DIF was calculated as the percentage of examinees who answered the item correctly. DIS is an item's ability to discriminate between the upper and lower 27% of examinees. For DE, any distractor selected by fewer than 5% of examinees is considered nonfunctional, and items were classified according to their number of nonfunctional distractors (NFDs). The correlations among DIF, DIS, and DE were evaluated. RESULTS The total number of examinees was 45. The KR-20 of the exam was 0.91. The mean (standard deviation [SD]) DIF was 37.5 (19.1), and the majority of items (69.5%) were of acceptable difficulty. The mean (SD) DIS was 0.46 (0.22), which is excellent. Most items discriminated excellently (69.5%), only two did not discriminate (13.6%), and the rest were of acceptable power (16.9%). Items with excellent and good efficiency represented 37.3% each, while only 3.4% had poor efficiency. The correlation between DE and DIF (p < 0.001, r = -0.548) indicates that items with efficient distractors (a low number of NFDs) tend to have a low difficulty index (difficult items), and vice versa. The correlation between DE and DIS was also significantly negative (p = 0.0476, r = -0.259); in this relationship, items with efficient distractors were associated with low-discriminating items.
CONCLUSIONS There is a significant moderate negative correlation between DE and DIF (p < 0.001, r = -0.548) and a significant weak negative correlation between DE and DIS (p = 0.0476, r = -0.259). DIF has a non-significant negative correlation with DIS (p = 0.7124, r = -0.0492). DE impacts both DIF and DIS. Items with efficient distractors (a low number of NFDs) are associated with difficult (low difficulty index) and discriminating items. Improving the quality of DE will decrease the number of NFDs and yield items with acceptable levels of difficulty and discrimination power.
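The three indices defined in the METHODS can be computed directly from raw responses. A minimal Python sketch following those definitions (DIF as the percentage answering correctly, DIS contrasting the top and bottom 27% of examinees by total score, and a distractor chosen by fewer than 5% counted as nonfunctional); the function name and data layout are illustrative, not the authors' code:

```python
def item_stats(responses, key, options):
    """responses: list of (total_exam_score, chosen_option) per examinee.
    Returns (difficulty %, discrimination index, nonfunctional distractor count),
    following the definitions in the abstract: DIF = % answering correctly;
    DIS = proportion correct in the top 27% minus the bottom 27% (ranked by
    total score); a distractor picked by < 5% of examinees is an NFD."""
    n = len(responses)
    dif = 100 * sum(ch == key for _, ch in responses) / n
    ranked = sorted(responses, key=lambda r: r[0], reverse=True)
    k = max(1, round(0.27 * n))            # size of upper/lower groups
    upper = sum(ch == key for _, ch in ranked[:k]) / k
    lower = sum(ch == key for _, ch in ranked[-k:]) / k
    dis = upper - lower
    nfd = sum(
        1 for opt in options
        if opt != key and sum(ch == opt for _, ch in responses) / n < 0.05
    )
    return dif, dis, nfd
```

The NFD count can then be mapped to the DE categories used in the paper (fewer NFDs meaning higher distractor efficiency).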
Affiliation(s)
- Assad Ali Rezigalla
- Department of Anatomy, College of Medicine, University of Bisha, 61922, Bisha, Saudi Arabia.
- Jaber Alfaifi
- Department of Child Health, College of Medicine, University of Bisha, 61922, Bisha, Saudi Arabia
- Mushabab A ALGhamdi
- Department of Internal Medicine, College of Medicine, University of Bisha, 61922, Bisha, Saudi Arabia
- Ahmed Y Al Ameer
- Department of Surgery, College of Medicine, University of Bisha, 61922, Bisha, Saudi Arabia
- Amar Ibrahim Omer Yahia
- Department of Pathology, College of Medicine, University of Bisha, 61922, Bisha, Saudi Arabia
- Osama A Mohammed
- Department of Pharmacology, College of Medicine, University of Bisha, 61922, Bisha, Saudi Arabia
3
Latif R, Alsunni AA. A comparison of the assessments used in campus-based years at the College of Medicine, Imam Abdulrahman Bin Faisal University, Saudi Arabia. Postgrad Med J 2023; 99:1020-1026. [PMID: 36882000 DOI: 10.1093/postmj/qgad005]
Abstract
STUDY PURPOSE Multiple assessment tools are used to assess future doctors' knowledge, clinical skills, and professional attitudes. This study compared the difficulty level and discriminating ability of different types of written and performance-based assessments designed to measure medical students' knowledge and competency. METHODS The assessment data of second- and third-year medical students (academic year 2020-2021) in the College of Medicine at Imam Abdulrahman Bin Faisal University (IAU) were retrospectively reviewed. Based on end-of-year overall grades, students were divided into high and low scorers. The two groups' mean scores on each type of assessment were compared with independent-sample t-tests. The difficulty level and discriminating ability of the assessments were also explored. MS Excel and the Statistical Package for the Social Sciences (SPSS version 27) were used for analysis. The area under the curve was calculated through ROC analysis. A p-value <0.05 was considered significant. RESULTS On each type of written assessment, the high-scorer group achieved significantly higher scores than the low scorers. On performance-based assessments (except the PBLs), scores did not differ significantly between high and low scorers. The difficulty level of performance-based assessments was "easy," whereas it was "moderate" for written assessments (except the OSCE). The discriminating ability of performance-based assessments was "poor," whereas it was "moderate/excellent" for written assessments (except the OSCE). CONCLUSION Our results indicate that written assessments have excellent discriminatory ability, whereas performance-based assessments are neither as difficult nor as discriminating. Among performance-based assessments, the PBLs are relatively discriminatory.
Key messages What is already known on this topic At Imam Abdulrahman Bin Faisal University, written and performance-based assessments are both graded on criterion-referenced scales. Students' end-of-year grades aggregate their scores on written and performance-based assessments. What this study adds Our results show that performance-based assessments are not as difficult, nor as discriminating between high and low scorers, as written assessments. How this study might affect research, practice or policy Performance-based assessments should be made a hurdle exam (pass or fail) for students to move to the next level, or students should be required to pass each assessment component (written and performance-based) separately.
Affiliation(s)
- Rabia Latif
- Department of Physiology, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Ahmed A Alsunni
- Department of Physiology, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
4
Shehata MH, Prabu Kumar A, Al Ansari AM, Deifalla A, Atwa HS. Best Practices of the World Health Organization Collaborating Centres (WHOCCs) in the Eastern Mediterranean Region. Advances in Medical Education and Practice 2022; 13:1199-1205. [PMID: 36212703 PMCID: PMC9532253 DOI: 10.2147/amep.s367834]
Abstract
BACKGROUND World Health Organization Collaborating Centres (WHOCCs) cooperate with the WHO on a range of strategic areas, such as nursing, nutrition, mental health, chronic diseases, education, and health technologies, depending on their speciality areas. As of 2021, the WHO has 47 CCs in the Eastern Mediterranean Region (EMR) collaborating on diverse areas. Four CCs in the EMR, located in Egypt, the Kingdom of Bahrain, Sudan, and Pakistan, focus primarily on medical education (ME). OBJECTIVE The primary objective of this literature review is to describe best practices in ME based on published research from the four WHOCCs in the EMR. The secondary objective is to classify them by level of Kirkpatrick's model (KM) of educational outcomes. METHODS The contributions of the WHOCCs were categorised into five domains, namely "Curriculum Development and Course Design", "Student Assessment", "Quality, Accreditation, and Program Evaluation", "Teaching and Learning", and "Innovation in Medical Education". Initial extraction yielded 96 articles for review; a second level of analysis reduced this to 37 publications from the last 5 years. Numerous best practices in ME emerged from the recently published work of these WHOCCs in the areas of learning and teaching, curriculum development, innovations in medical education, quality, and assessment. Literature from the WHOCCs on assessment and curriculum design is limited, possibly indicating opportunities for additional research. CONCLUSION The researchers conclude that the WHOCCs in the EMR show transformational impact on all principal areas of research and at multiple levels.
Affiliation(s)
- Mohamed Hany Shehata
- Department of Family and Community Medicine, College of Medicine and Medical Sciences, Arabian Gulf University, Manama, Bahrain
- Helwan University Faculty of Medicine, Cairo, Egypt
- Archana Prabu Kumar
- Medical Education Unit, College of Medicine and Medical Sciences, Arabian Gulf University, Manama, Bahrain
- Ahmed Mohammed Al Ansari
- Medical Education Unit, College of Medicine and Medical Sciences, Arabian Gulf University, Manama, Bahrain
- Abdelhalim Deifalla
- Department of Anatomy, College of Medicine and Medical Sciences, Arabian Gulf University, Manama, Bahrain
- Faculty of Medicine, Suez Canal University, Ismailia, Egypt
- Hani Salem Atwa
- Medical Education Unit, College of Medicine and Medical Sciences, Arabian Gulf University, Manama, Bahrain
- Faculty of Medicine, Suez Canal University, Ismailia, Egypt
5
Belay LM, Sendekie TY, Eyowas FA. Quality of multiple-choice questions in medical internship qualification examination determined by item response theory at Debre Tabor University, Ethiopia. BMC Medical Education 2022; 22:635. [PMID: 35989323 PMCID: PMC9394015 DOI: 10.1186/s12909-022-03687-y]
Abstract
BACKGROUND Assessment of cognitive competence is a major element of the internship qualification exam in undergraduate medical education in Ethiopia. Assessing the quality of exam items can help improve the validity of assessments and assure stakeholders of the accuracy of the go/no-go decision for internship. However, little is known about the quality of the exam items used to ascertain fitness to join the medical internship. Therefore, this study analyzed the quality of the multiple-choice questions (MCQs) of the qualification exam administered to final-year medical students at Debre Tabor University (DTU), Ethiopia. METHODS A psychometric study was conducted to assess the quality of 120 randomly selected MCQs and their 407 distractors. Item characteristics were estimated using an item response theory (IRT) model. T-tests, one-way ANOVA, and chi-square tests were run to analyze univariate associations between factors. Pearson's correlation was used to determine the predictive validity of the qualification examination. RESULTS Overall, 16%, 51%, and 33% of the items had high, moderate, and low distractor efficiency, respectively. About two-thirds (65.8%) of the items had two or more functioning distractors, and 42.5% exhibited a desirable difficulty index. However, 77.8% of the items administered in the qualification examination had a negative or poor discrimination index. Four- and five-option items did not differ significantly in psychometric quality. The qualification exam positively predicted success in the national licensing examination (Pearson's correlation coefficient = 0.5). CONCLUSIONS The psychometric properties of the medical qualification exam were inadequate for making valid decisions. Five-option MCQs were not better than four-option MCQs in terms of psychometric quality. The qualification examination had positive predictive validity for future performance. High-stakes examination items must be properly created and reviewed before being administered.
6
Chaudhuri JD. An initial preparation for human cadaveric dissection ameliorates the associated mental distress in students. Anatomical Sciences Education 2022; 15:910-927. [PMID: 34143562 DOI: 10.1002/ase.2112]
Abstract
It is universally recognized that cadaveric dissection is an essential part of anatomy training. However, it has been reported to induce mental distress in some students and to impair their intrinsic motivation (IM) to study. One postulated reason for this is the lack of adequate information and preparation of students for cadaveric dissection. It was therefore hypothesized that providing relevant information prior to cadaveric dissection would ameliorate this mental distress, enhance students' IM, and improve their academic performance. A cohort of occupational therapy students enrolled in an anatomy course was psychologically prepared for cadaveric dissection: before the course began, students were provided with a curated list of YouTube videos and peer-reviewed journal articles related to cadaveric dissection, and all students were required to attend an oral presentation immediately before commencing dissection. The control group comprised students who had not been provided with any preparatory resources. Compared to the control group, prepared students demonstrated better-quality cadaveric dissection and improved academic performance, and reported less mental distress and greater IM. Moreover, students rated the oral presentation as most relevant and the journal articles as least useful in their preparation. This is therefore an effective approach to ameliorating mental distress and improving performance in anatomy students. Consequently, this study represents a paradigm shift in the pedagogy of anatomy and could be a vital element in the evolution of a revitalized anatomy curriculum.
Affiliation(s)
- Joydeep Dutta Chaudhuri
- School of Occupational Therapy, College of Health Sciences, Husson University, Bangor, Maine, USA
7
Farhat H, Laughton J, Gangaram P, El Aifa K, Khenissi MC, Zaghouani O, Khadhraoui M, Gargouri I, Alinier G. Hazardous material and chemical, biological, radiological, and nuclear incident readiness among prehospital care professionals in the State of Qatar. Global Security: Health, Science and Policy 2022. [DOI: 10.1080/23779497.2022.2069142]
Affiliation(s)
- Hassan Farhat
- Hamad Medical Corporation Ambulance Service Group
- PhD (c) in Health Sciences, Faculty of Medicine of Sousse “Ibn El Jazzar”, University of Sousse, Tunisia
- PhD (c) candidate in Biology, Faculty of Sciences, University of Sfax, Tunisia
- Padarath Gangaram
- Hamad Medical Corporation Ambulance Service Group
- Honorary Research Fellow, Faculty of Health Sciences, Durban University of Technology, Durban, South Africa
- Kawther El Aifa
- Hamad Medical Corporation Ambulance Service Group
- Master of Sciences (c) in Project Management, Lincoln University of Business and Management, Sharjah, the United Arab Emirates in partnership with the University of the West of Scotland, Technology Ave, Blantyre, Glasgow G72 0LH, United Kingdom
- Moncef Khadhraoui
- University of Sfax, Higher Institute of Biotechnology, Sfax-Tunisia, Analytical Chemistry and Environmental Pollution department
- Imed Gargouri
- University of Sfax, Faculty of Medicine, Sfax-Tunisia, Occupational Health department
- Guillaume Alinier
- Hamad Medical Corporation Ambulance Service Group
- University of Sfax, Higher Institute of Biotechnology, Sfax-Tunisia, Analytical Chemistry and Environmental Pollution department
- University of Hertfordshire, School of Health and Social Work, Hatfield-UK
- Weill Cornell Medicine-Qatar, Doha, Qatar
- Northumbria University, Faculty of Health and Life Sciences, Newcastle upon Tyne, UK
8
Smith EB, Lewis P, Benefield T, Catanzano TM, Khan MJ, Nyberg E, Jordan S. Auditing RadExam: Employing Psychometrics to Improve Exam Quality. Acad Radiol 2021; 28:1389-1398. [PMID: 32674906 DOI: 10.1016/j.acra.2020.05.037]
Abstract
INTRODUCTION RadExam is a question-item and exam database jointly developed by the Association of Program Directors in Radiology and the American College of Radiology to provide formative resident assessment, offering performance metrics benchmarked against institutional and national resident performance. Beyond resident performance, data are available on question and exam performance. Despite considerable investment in the education and training of its question writers and editors, and meticulous attention to current psychometrically validated methods, it was anticipated that a minority of exam questions would still perform poorly. Audits were performed to identify these questions, determine the reasons for poor performance, and modify or replace the affected questions. Exam performance was also assessed. METHODS Two audits were performed: the first after the February-May 2018 RadExam pilot phase, and the second nearly 1 year after the full implementation of RadExam. In each audit, RadExam subspecialty editors evaluated all exam questions and exams using statistical data: the number of administrations of each question and test, question P-value, question discrimination index (DI), question Bloom's taxonomy learning level, exam P-value, and the number of image-based questions in each exam. Identified questions were modified or removed and replaced. RESULTS Audit 1 was performed after the administration of 3114 exams comprising 2520 questions across 100 residency programs. It identified 617 questions with DI <0.1 and 565 questions with unacceptable P-values, all of which were modified or replaced. Audit 2 was performed after the administration of 16,416 exams comprising 2507 questions. It identified 229 questions with DI <0.1 and 290 questions with unacceptable P-values, a 49.1% decrease in total flagged questions compared to Audit 1.
Statistically significant decreases were seen in questions with both DI and P-values outside the desired range across nearly all subspecialties. CONCLUSION The positive impact of our audit system on question and exam performance was reflected in the significantly lower number of flagged questions and improved overall exam performance in Audit 2.
9
Kusumawati Y, Widyawati W, Dewi FST. Development and Validation of a Survey to Evaluate Mental Health Knowledge: The Case of Indonesian Pregnant Women. Open Access Maced J Med Sci 2021. [DOI: 10.3889/oamjms.2021.5844]
Abstract
BACKGROUND: Internationally, many instruments have been designed to evaluate mental health knowledge; however, instruments for pregnant women are very limited.
AIM: Therefore, this study aimed to develop and validate a survey to measure the mental health knowledge of pregnant women.
METHODS: In this cross-sectional study, 13 midwives attended a focus group discussion (FGD) and 10 pregnant women were invited for in-depth interviews to develop an item pool. Content validity was assessed by a panel of 6 experts, and face validity with 5 pregnant women. Next, the construct validity test involved 150 pregnant women selected by stratified sampling from 13 public health centers in Surakarta, Indonesia. Analyses covered content validity, face validity, construct validity, internal consistency reliability, difficulty index, and exploratory factor analysis.
RESULTS: The final 20-item Mental Health Knowledge Scale (MHKS) has a content validity index of 0.97, and each item's correlation exceeded the critical r-value (0.1603). In addition, the MHKS has a Kuder-Richardson 20 reliability coefficient of 0.717. The difficulty index ranged from 0.39 to 0.82, within the good and acceptable category. Construct validity was confirmed by exploratory factor analysis (KMO = 0.713, Bartlett's test p < 0.001).
CONCLUSION: Based on the findings, the final version of the MHKS was considered a valid and reliable tool. The instrument can be applied to measure the understanding of pregnant women about pregnancy depression. Further studies require adjustment items to other participants regarding mental health knowledge.
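The Kuder-Richardson 20 coefficient reported above (0.717) is computed from dichotomous item scores as KR-20 = k/(k-1) * (1 - Σ p_i q_i / σ²). A generic Python sketch of this standard formula (an illustration, not the authors' code):

```python
def kr20(score_matrix):
    """Kuder-Richardson 20 reliability for dichotomous (0/1) items.
    score_matrix: one row of item scores per respondent.
    Raises ZeroDivisionError if total scores have no variance."""
    n = len(score_matrix)                  # respondents
    k = len(score_matrix[0])               # items
    # p_i: proportion answering item i correctly (the item difficulty index)
    p = [sum(row[i] for row in score_matrix) / n for i in range(k)]
    pq = sum(pi * (1 - pi) for pi in p)    # sum of item variances
    totals = [sum(row) for row in score_matrix]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - pq / var)
```

For example, `kr20([[1, 1], [1, 1], [0, 0], [0, 0]])` returns 1.0 for perfectly consistent items.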
10
Bhat SK, Prasad KHL. Item analysis and optimizing multiple-choice questions for a viable question bank in ophthalmology: A cross-sectional study. Indian J Ophthalmol 2021; 69:343-346. [PMID: 33463588 PMCID: PMC7933874 DOI: 10.4103/ijo.ijo_1610_20]
Abstract
Purpose Multiple-choice questions (MCQs) are useful for assessing student performance, covering a wide range of topics objectively. Their reliability and validity depend on how well they are constructed. Defective items detected by item analysis must be examined for item-writing flaws and optimized. The aim of this study was to evaluate MCQs for difficulty level and discriminating power with functional distractors by item analysis, to analyze poor items for writing flaws, and to optimize them. Methods This was a prospective cross-sectional study involving 120 MBBS students taking a formative assessment in ophthalmology, which included 40 single-response MCQs worth 20 marks as part of a 3-h paper. Items were categorized according to their difficulty index, discrimination index, and distractor efficiency using simple proportions, mean, standard deviation, and correlation. The defective items were analyzed for proper construction and optimized. Results The mean score of the study group was 13.525 ± 2.617. The mean difficulty index, discrimination index, and distractor efficiency were 53.22, 0.26, and 78.32, respectively. Of the 40 MCQs, 25 had no non-functioning distractors; 7 had one, 5 had two, and 3 had three. Of the 20 defective items, 17 were optimized and added to the question bank, two were added without modification, and one was dropped. Conclusion Item analysis is a valuable tool for detecting poor MCQs, and optimizing them is a critical step. Defective items should be optimized rather than dropped, so that the content area they cover is not lost from the assessment.
11
Srinivasan M. Psychometric Characteristics of Oral Pathology Test Items in the Dental Hygiene Curriculum-A Longitudinal Analysis. Dent J (Basel) 2021; 9:dj9050056. [PMID: 34068053 PMCID: PMC8152459 DOI: 10.3390/dj9050056]
Abstract
As the landscape of oral healthcare and the delivery of services continues to change, the dental hygienist plays an increasing role in assisting dentists with oral diagnosis and preventive strategies. Hence, dental hygiene curriculum standards require biomedical science instruction, including general and oral pathology. Student learning and cognitive competencies are often measured using multiple-choice questions (MCQs). The objectives of this study were to perform a longitudinal analysis of test items and to evaluate their relation to the absolute grades of the oral pathology course in the dental hygiene curriculum. A total of 1033 MCQs covering different concepts of oral pathology, administered from 2015 through 2019, were analyzed for difficulty and discriminatory indices, and differences between years were determined by one-way ANOVA. Test reliability, as measured by the average KR-20 value, was 0.7 or higher for each exam. The mean difficulty index across all exams was 0.73 ± 0.05, and the mean discriminatory index was 0.33 ± 0.05. Wide variations were observed in the discriminatory indices of test items with approximately the same difficulty index, as well as in the grade distribution of each cohort. Furthermore, the longitudinal analyses identified low-achieving cohorts among groups evaluated on the same knowledge domain, taught with the same instruction, and tested with similar tools. This suggests that comparative analyses of tests could offer feedback not only on student learning attributes but also, potentially, on the admission processes of the dental hygiene program.
Affiliation(s)
- Mythily Srinivasan
- Department of Oral Pathology, Medicine and Radiology, Indiana University School of Dentistry, Indiana University Purdue University at Indianapolis, Indianapolis, IN 46202, USA
12
Ghidinelli M, Cunningham M, Monotti IC, Hindocha N, Rickli A, McVicar I, Glyde M. Experiences from Two Ways of Integrating Pre- and Post-course Multiple-choice Assessment Questions in Educational Events for Surgeons. J Eur CME 2021; 10:1918317. [PMID: 34026323 PMCID: PMC8128219 DOI: 10.1080/21614083.2021.1918317]
Abstract
To examine how to optimise the integration of multiple-choice questions (MCQs) for learning in continuing professional development (CPD) events in surgery, we implemented and evaluated two methods in two subspecialities over multiple years. The same 12 MCQs were administered pre- and post-event in 66 facial trauma courses, while two different sets of 10 MCQs were administered pre- and post-event in 21 small animal fracture courses. We performed standard psychometric tests on responses from participants who completed both the pre- and post-event assessments. The average pre-course difficulty index was 57% with a discrimination index of 0.20 for small animal fractures, and 53% with a discrimination index of 0.15 for facial trauma. For the majority of individual MCQs, scores were between 30% and 70% and the discrimination index was >0.10. The post-course difficulty index increased in both groups (to 75% and 62%, respectively). The pre-course MCQs produced average scores in the expected range for both formats, suggesting they were appropriate for the intended level of difficulty and an appropriate pre-course learning activity. Post-course completion increased scores with both formats. Both delivery methods worked well in all regions, and overall quality depends on applying a solid item development and validation process.
Affiliation(s)
- Michael Cunningham
- College of Veterinary Medicine/School of Veterinary and Life Sciences, Murdoch University, Murdoch, Australia
- Isobel C Monotti
- College of Veterinary Medicine/School of Veterinary and Life Sciences, Murdoch University, Murdoch, Australia
- Nishma Hindocha
- Department of Oral and Maxillofacial Surgery, Rigshospitalet, University Hospital, Copenhagen, Denmark
- Alain Rickli
- AO Education Institute, AO Foundation, Duebendorf, Switzerland
- Iain McVicar
- Maxillofacial Unit, Queen's Medical Centre, Nottingham University Hospitals NHS Trust, Nottingham, UK
- Mark Glyde
- College of Veterinary Medicine/School of Veterinary and Life Sciences, Murdoch University, Murdoch, Australia
13
Chaudhuri JD. Changes in the learning styles and approaches of students following incorporation of drawing during cadaveric dissection. Clin Anat 2020; 34:437-450. [PMID: 32893909 DOI: 10.1002/ca.23673] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 03/27/2020] [Revised: 08/14/2020] [Accepted: 08/26/2020] [Indexed: 11/08/2022]
Abstract
The teaching of anatomy is challenging due to the constraints of material and personnel resources. Research has established that the learning preferences of students are malleable and determined by the requirements of the course. Further, drawing has been reported to aid learning in anatomy by facilitating problem solving and reducing cognitive overload in students. Considering these issues, the aims of the study were to investigate (a) whether positive changes occur in learning styles and approaches following the incorporation of drawing during cadaveric dissection, and (b) whether such changes are associated with improved learning outcomes. One cohort of students in an anatomy course received training in creating scientific drawings from dissected human cadavers, while two cohorts of students did not receive such training. The learning preferences of students and their final examination grades were assessed at the commencement and conclusion of the course. The majority of students who had training in drawing transitioned from being bimodal to trimodal or quadrimodal learners. This was associated with efficient learning approaches and a significant (p < .05) improvement in learning outcomes in these students. There were no changes in any of these parameters in students who had not received training in drawing. Therefore, modulating the learning preferences of students through drawing is a pragmatic approach to anatomy teaching.
Affiliation(s)
- Joydeep Dutta Chaudhuri
- School of Occupational Therapy, College of Health Sciences, Husson University, Bangor, Maine, USA
14
Macedo EABD, Freitas CCSD, Dionisio AJ, Torres GDV. Knowledge of the care of wounded patients: evidence of validity of an instrument. Rev Bras Enferm 2019; 72:1562-1570. [PMID: 31644745 DOI: 10.1590/0034-7167-2018-0643] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 08/08/2018] [Accepted: 04/10/2019] [Indexed: 11/22/2022]
Abstract
OBJECTIVE to present evidence of validity of an instrument aimed at evaluating the knowledge of nursing students about the care of patients with wounds, according to the difficulty and discrimination indexes of its items. METHOD methodological study conducted in a nursing higher education institution with 117 undergraduate students and 38 professionals from a research group with experience in the area of wounds. For data collection, a questionnaire with 10 multiple-choice questions was applied before and after classes on wounds. RESULTS most of the questions presented a low level of difficulty and an inefficient discrimination index, requiring a revision of the instrument. After two review stages, the difficulty and discrimination indexes of the instrument improved. CONCLUSION an instrument with better evidence of validity was obtained. However, it still requires refinement for later revalidation in the same population.
15
Kowash M, Hussein I, Al Halabi M. Evaluating the Quality of Multiple Choice Question in Paediatric Dentistry Postgraduate Examinations. Sultan Qaboos Univ Med J 2019; 19:e135-e141. [PMID: 31538012 PMCID: PMC6736258 DOI: 10.18295/squmj.2019.19.02.009] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Received: 10/03/2018] [Revised: 11/19/2018] [Accepted: 01/17/2019] [Indexed: 11/16/2022]
Abstract
Objectives This study aimed to evaluate the quality of multiple choice question (MCQ) items in two postgraduate paediatric dentistry (PD) examinations by determining item writing flaws (IWFs), difficulty index (DI) and cognitive level. Methods This study was conducted at Mohamed Bin Rashid University of Medicine and Health Sciences, Dubai, UAE. Virtual platform-based summative versions of the general paediatric medicine (GPM) and prevention of oral diseases (POD) examinations administered during the second semester of the 2017–2018 academic year were used. Two PD faculty members independently reviewed each question to assess IWFs, DI and cognitive level. Results A total of 185 single-best-answer MCQs with 4–5 options were analysed. Most of the questions (81%) required information recall, with the remainder (19%) requiring higher levels of thinking and data explanation. The most common IWFs were the use of "except" or "not" in the lead-in, tricky or unfocussed stems, and opportunities for students to use convergence strategies. There were more IWFs in the GPM than in the POD examination, but the difference was not statistically significant (P = 0.105). The MCQs in both the GPM and POD examinations were considered easy, since the mean DIs (89.1% ± 8.9% and 76.5% ± 7.9%, respectively) were more than 70%. Conclusion Training is an essential element of adequate MCQ writing. A comprehensive review of all of the programme's MCQs is needed to emphasise the importance of avoiding IWFs. A faculty development programme is recommended to improve question-writing skills, in order to align examinations with programme learning outcomes and enhance the ability to measure student competency through questions requiring higher-level thinking.
Affiliation(s)
- Mawlood Kowash
- Department of Paediatric Dentistry, Mohamed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Iyad Hussein
- Department of Paediatric Dentistry, Mohamed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates
- Manal Al Halabi
- Department of Paediatric Dentistry, Mohamed Bin Rashid University of Medicine and Health Sciences, Dubai, United Arab Emirates