1
A comparison of the assessments used in campus-based years at the College of Medicine, Imam Abdulrahman Bin Faisal University, Saudi Arabia. Postgrad Med J 2023; 99:1020-1026. PMID: 36882000. DOI: 10.1093/postmj/qgad005.
Abstract
STUDY PURPOSE Multiple assessment tools are used to assess future doctors' knowledge, clinical skills, and professional attitudes. In the present research, the difficulty level and discriminating ability of different types of written and performance-based assessments designed to measure the knowledge and competency of medical students were compared. METHODS The assessment data of 2nd- and 3rd-year medical students (academic year 2020-2021) in the College of Medicine at Imam Abdulrahman Bin Faisal University (IAU) were retrospectively reviewed. Based on end-of-year overall grades, students were divided into high and low scorers. The two groups' mean scores in each type of assessment were compared using an independent-samples t-test. The difficulty level and discriminating ability of the assessments were also explored. MS Excel and the Statistical Package for the Social Sciences (SPSS version 27) were used for analysis. The area under the curve was calculated through ROC analysis. A p-value <0.05 was considered significant. RESULTS In each type of written assessment, the high-scorer group achieved significantly higher scores than the low scorers. Among performance-based assessments (except the PBLs), scores did not differ significantly between high and low scorers. The difficulty level of performance-based assessments was "easy," whereas it was "moderate" for written assessments (except the OSCE). The discriminating ability of performance-based assessments was "poor," whereas it was "moderate/excellent" for written assessments (except the OSCE). CONCLUSION Our study results indicate that written assessments have excellent discriminatory ability, whereas performance-based assessments are neither as difficult nor as discriminatory. Among the performance-based assessments, the PBLs are the most discriminatory.
Key messages What is already known on this topic At Imam Abdulrahman Bin Faisal University, written and performance-based assessments are both graded on criterion-referenced scales. A student's end-of-year grade is an aggregate of his or her scores in written and performance-based assessments. What this study adds Our study results show that performance-based assessments are not as difficult, nor as discriminatory in differentiating between high and low scorers, as written assessments. How this study might affect research, practice or policy Performance-based assessments should be made a hurdle exam (pass or fail) for students to move to the next level, or students should be required to pass each assessment component (written and performance-based) separately.
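The high/low comparison described in this abstract can be sketched in a few lines. This is a hedged illustration with hypothetical scores: Welch's t-test and the rank-sum formulation of the ROC area stand in for the SPSS procedures, which the abstract does not specify in detail.

```python
# Hypothetical sketch: compare high vs. low scorers on one assessment
# with Welch's t-test, and compute the ROC area via the rank-sum identity
# AUC = P(high score > low score), ties counted as 0.5.
from statistics import mean, variance
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

def auc(pos, neg):
    """Area under the ROC curve from pairwise comparisons (Mann-Whitney)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

high = [82, 88, 79, 91, 85]   # hypothetical written-exam scores, high scorers
low  = [61, 70, 66, 58, 72]   # hypothetical scores, low scorers
print(round(welch_t(high, low), 2), round(auc(high, low), 2))
```

With fully separated groups, as here, the AUC is 1.0; overlapping score distributions would pull it toward 0.5.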
2
Evaluation of difficulty index of impacted mandibular third molar extractions. J Adv Pharm Technol Res 2022; 13:S98-S101. PMID: 36643150. PMCID: PMC9836111. DOI: 10.4103/japtr.japtr_362_22.
Abstract
Compared with other teeth, third molars have a greater rate of impaction. Impacted third molars are commonly encountered in dental practice and are a frequent source of complications in third molar surgery. Third molar extraction is the surgical procedure most commonly performed by dental practitioners, and complications occur in lower third molar extractions despite a well-planned surgical approach. This study analyzes the expected difficulty of surgical removal of impacted lower third molars. Using data from our dental institution's database, the Pederson difficulty index was used to evaluate the difficulty level of each extraction. Data were analyzed using SPSS. Among impacted left mandibular third molars (38), there was minimal difficulty in 20.60% of the extractions, moderate difficulty in 29.58%, and maximal difficulty in 2.77%. Among impacted right mandibular third molars (48), there was minimal difficulty in 18.80% of the extractions, moderate difficulty in 25.78%, and maximal difficulty in 2.47%. According to our study, impacted lower third molar surgery is of moderate difficulty overall, and difficulty depends on factors such as systemic status, patient's age, periodontal condition, and complexity of tooth position in the dental arch.
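For illustration, the Pederson index mentioned above can be sketched as a simple lookup. The point values below follow the commonly cited version of the scale (angulation, depth, and ramus relationship each contribute points, with a 3-10 total); they are an assumption here and should be checked against the original scoring table before any clinical use.

```python
# Sketch of the Pederson difficulty index for impacted mandibular third
# molars. Point values assume the commonly cited scale, not the paper's.
ANGULATION = {"mesioangular": 1, "horizontal": 2, "vertical": 3, "distoangular": 4}
DEPTH      = {"A": 1, "B": 2, "C": 3}     # Pell & Gregory depth level
RAMUS      = {"I": 1, "II": 2, "III": 3}  # Pell & Gregory ramus class

def pederson(angulation, depth, ramus):
    """Return (total score, difficulty label) for one impacted molar."""
    score = ANGULATION[angulation] + DEPTH[depth] + RAMUS[ramus]
    if score <= 4:
        label = "minimally difficult"
    elif score <= 7:
        label = "moderately difficult"
    else:
        label = "very difficult"
    return score, label

print(pederson("mesioangular", "B", "II"))  # → (5, 'moderately difficult')
```

A mesioangular tooth at depth B with a class II ramus relationship lands in the moderate band, matching the study's overall finding of moderate difficulty.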
3
Item analysis and optimizing multiple-choice questions for a viable question bank in ophthalmology: A cross-sectional study. Indian J Ophthalmol 2021; 69:343-346. PMID: 33463588. PMCID: PMC7933874. DOI: 10.4103/ijo.ijo_1610_20.
Abstract
Purpose Multiple-choice questions (MCQs) are useful in assessing student performance, covering a wide range of topics in an objective way. Their reliability and validity depend upon how well they are constructed. Defective items detected by item analysis must be examined for item-writing flaws and optimized. The aim of this study was to evaluate the MCQs for difficulty levels and discriminating power with functional distractors by item analysis, analyze poor items for writing flaws, and optimize them. Methods This was a prospective cross-sectional study involving 120 MBBS students writing a formative assessment in Ophthalmology. It comprised 40 single-response MCQs as part of a 3-h paper for 20 marks. Items were categorized according to their difficulty index, discrimination index, and distractor efficiency with simple proportions, mean, standard deviation, and correlation. The defective items were analyzed for proper construction and optimized. Results The mean score of the study group was 13.525 ± 2.617. Mean difficulty index, discrimination index, and distractor efficiency were 53.22, 0.26, and 78.32, respectively. Among the 40 MCQs, twenty-five had no non-functioning distractor; 7 had one, 5 had two, and 3 had three. Of the 20 defective items, 17 were optimized and added to the question bank, two were added without modification, and one was dropped. Conclusion Item analysis is a valuable tool in detecting poor MCQs, and optimizing them is a critical step. The defective items identified should be optimized and not dropped, so that the content area covered by a defective item is not kept out of the assessment.
4
All India AYUSH post graduate entrance exam 2019 - AYURVEDA MCQ item analysis. J Ayurveda Integr Med 2021; 12:356-358. PMID: 33752948. PMCID: PMC8185990. DOI: 10.1016/j.jaim.2021.01.013.
Abstract
Background AIAPGET 2019, an all-India ranking entrance test for MD/MS courses in the Ayurveda, Unani, Siddha and Homeopathy streams, was conducted jointly by the National Testing Agency (NTA) and the All India Institute of Ayurveda (AIIA). In this article, we present the item analysis of the AIAPGET 2019 Ayurveda stream MCQs. Objectives The aim of this article was to analyse the MCQs of AIAPGET 2019 for the Ayurveda stream. Materials and methods The exam was computer based and conducted at 25 centers across India. The question paper had 100 MCQs, with 1 correct answer and 3 distractors for each item (problem statement). Results The AIAPGET 2019 question paper for the Ayurveda stream had a difficulty index of 37.32 ± 16.11, a discriminatory index of 0.46 ± 0.27 and a distractor index of 89 ± 17.8. Conclusion Our analysis showed that, though ideal overall, the question paper trended towards the difficult side.
5
Item analysis of multiple choice questions: A quality assurance test for an assessment tool. Med J Armed Forces India 2021; 77:S85-S89. PMID: 33612937. DOI: 10.1016/j.mjafi.2020.11.007.
Abstract
Background The item analysis of multiple choice questions (MCQs) is an essential tool that can provide input on the validity and reliability of items. It helps to identify items which can be revised or discarded, thus building a quality MCQ bank. Methods The study focussed on item analysis of 90 MCQs from three tests conducted for 150 first year Bachelor of Medicine and Bachelor of Surgery (MBBS) physiology students. The item analysis explored the difficulty index (DIF I) and discrimination index (DI) with distractor effectiveness (DE). Statistical analysis was performed using MS Excel 2010 and SPSS, version 20.0. Results Of the total 90 MCQs, the majority, that is, 74 (82%), had a good/acceptable level of difficulty with a mean DIF I of 55.32 ± 7.4 (mean ± SD), whereas seven (8%) were too difficult and nine (10%) were too easy. A total of 72 (80%) items had an excellent to acceptable DI and 18 (20%) had a poor DI, with an overall mean DI of 0.31 ± 0.12. There was a significant weak correlation between DIF I and DI (r = 0.140, p < .0001). The mean DE was 32.35 ± 31.3, with 73% functional distractors in all. The reliability of the test items by Cronbach's alpha was 0.85 and by Kuder-Richardson Formula 20 was 0.71, which is good. The standard error of measurement was 1.22. Conclusion Our study helped teachers identify good and ideal MCQs which can be part of the question bank for the future, as well as those MCQs which needed revision. We recommend that item analysis be performed for all MCQ-based assessments to determine the validity and reliability of the assessment.
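The quantities named above (DIF I, DI, DE, and KR-20 reliability) can be sketched on a small hypothetical 0/1 response matrix. The 27% upper/lower group split and the 5% functional-distractor threshold are conventional classical-test-theory choices, not values taken from this paper's data.

```python
# Classical item analysis on a hypothetical 0/1 response matrix
# (rows = students, columns = items; 1 = correct answer).

def item_analysis(matrix):
    """Per-item (difficulty index as %, discrimination index)."""
    n_students = len(matrix)
    totals = [sum(row) for row in matrix]
    order = sorted(range(n_students), key=lambda i: totals[i], reverse=True)
    g = max(1, round(0.27 * n_students))          # upper/lower 27% groups
    upper, lower = order[:g], order[-g:]
    stats = []
    for j in range(len(matrix[0])):
        p = sum(row[j] for row in matrix) / n_students            # DIF I
        di = (sum(matrix[i][j] for i in upper)
              - sum(matrix[i][j] for i in lower)) / g             # DI
        stats.append((round(100 * p, 1), round(di, 2)))
    return stats

def kr20(matrix):
    """Kuder-Richardson Formula 20 reliability of the whole test."""
    n, k = len(matrix), len(matrix[0])
    totals = [sum(row) for row in matrix]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in matrix) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

def distractor_efficiency(counts, n):
    """Percent of distractors chosen by at least 5% of examinees."""
    functional = sum(c >= 0.05 * n for c in counts)
    return 100 * functional / len(counts)

M = [[1, 1, 1, 1], [1, 1, 1, 0], [1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 1, 0],
     [1, 1, 0, 0], [1, 0, 0, 1], [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0]]
print(item_analysis(M))   # (DIF I %, DI) per item
print(round(kr20(M), 2))
```

On this toy matrix the third item is both moderately difficult and perfectly discriminating, while the KR-20 is low, as expected for a 4-item test.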
6
Digital Assessment of Difficulty in Impacted Mandibular Third Molar Extraction. J Maxillofac Oral Surg 2020; 19:401-406. PMID: 32801535. PMCID: PMC7410943. DOI: 10.1007/s12663-019-01265-2.
Abstract
PURPOSE In the present era of computer and software technology, it is worthwhile to introduce software which helps in the routine assessment of surgical procedures practiced in oral surgery. Removal of an impacted third molar is a common procedure. It is hard to evaluate the factors that complicate removal of impacted third molars because of the large variation among patients and the difficulty in creating a study design. In this article, we describe our newly designed software, developed to assess the difficulty of extracting impacted mandibular third molars accurately and thereby reduce the bias faced during assessment of that difficulty. MATERIALS AND METHOD The software was designed using the C# language and the Windows Presentation Foundation framework. RESULTS The measurements and angulations are accurately calculated by this software, which helps bring about uniformity in results, thus minimizing bias in clinical use as well as for study purposes. CONCLUSION The mandibular third molar difficulty level calculator can be useful software for dental practitioners in day-to-day practice. Dental students and professionals should be made aware of this software so that it can be utilized to the fullest.
7
Evaluation of the effect of items' format and type on psychometric properties of sixth year pharmacy students clinical clerkship assessment items. BMC Med Educ 2020; 20:190. PMID: 32532278. PMCID: PMC7291500. DOI: 10.1186/s12909-020-02107-3.
Abstract
BACKGROUND Examinations are the traditional assessment tools. In addition to measuring learning, exams are used to guide the improvement of academic programs. The current study attempted to evaluate the quality of assessment items of sixth year clinical clerkship examinations as a function of assessment item format and type/structure, and to assess the effect of the number of response choices on the characteristics of MCQs as assessment items. METHODS A total of 173 assessment items used in the examinations of sixth year clinical clerkships of a PharmD program were included. Items were classified as case-based or non-case-based and as MCQs or open-ended. The psychometric characteristics of the items were studied as a function of the Bloom's levels addressed, item format, and number of choices in MCQs. RESULTS Items addressing analysis skills were more difficult. No differences were found between case-based and non-case-based items in terms of their difficulty, with slightly better discrimination in the latter. Open-ended items were easier, yet more discriminative. MCQs with a higher number of options were easier. Open-ended questions were significantly more discriminative than MCQs as case-based items, and were also more discriminative as non-case-based items. CONCLUSION Item format, structure, and number of options in MCQs significantly affected the psychometric properties of the studied items. Non-case-based items and open-ended items were easier and more discriminative than case-based items and MCQs, respectively. Examination items should be prepared with the above characteristics in mind to improve their psychometric properties and maximize their usefulness.
8
The relationship between classical item characteristics and item response time on computer-based testing. Korean J Med Educ 2019; 31:1-9. PMID: 30852856. PMCID: PMC6589631. DOI: 10.3946/kjme.2019.113.
Abstract
PURPOSE This study investigated the relationship between item response time (iRT) and classic item analysis indicators obtained from computer-based test (CBT) results, and used that relationship to deduce students' problem-solving behavior. METHODS We retrospectively analyzed the results of the Comprehensive Basic Medical Sciences Examination conducted for 5 years on a CBT system at Dankook University College of Medicine. iRT is defined as the time spent answering a question. The discrimination index and the difficulty level were used to analyze the items using classical test theory (CTT). The relationship between iRT and the CTT indicators was investigated using correlation analysis. An analysis of variance was performed to identify differences in iRT across difficulty levels. A regression analysis was conducted to examine the effect of the difficulty index and discrimination index on iRT. RESULTS iRT increases with increasing difficulty index, and iRT tends to decrease with increasing discrimination index. Students' effort is increased when they solve difficult items but reduced when they are confronted with items with high discrimination. Students' test effort, represented by iRT, was properly maintained when items had a 'desirable' difficulty and a 'good' discrimination. CONCLUSION The results of our study show that an adequate degree of item difficulty and discrimination is required to increase students' motivation. With the combination of CTT and iRT, we can gain insights into the quality of the examination and the test behaviors of the students, providing more powerful tools to improve both.
9
Quantitative analysis of single best answer multiple choice questions in pharmaceutics. Curr Pharm Teach Learn 2019; 11:251-257. PMID: 30904146. DOI: 10.1016/j.cptl.2018.12.006.
Abstract
INTRODUCTION The purpose of this study was to: (1) analyze the quality of single best answer multiple choice questions (MCQs) used in pharmaceutics exams, (2) identify the correlation between difficulty index (DIF I), discriminating index (DI), and distractor efficiency (DE), and (3) understand the relationship between DIF I, DI, and DE and the number of MCQ answer options and their cognitive level. METHODS 429 MCQs used in pharmaceutics exams were analyzed. The quality of the MCQs was evaluated using DIF I, DI, and DE. The number of answer options and the cognitive level tested by each item were evaluated. Relationships between DIF I, DI, and DE were measured using Pearson's correlations and t-tests. RESULTS DIF I showed a significant negative correlation with DI within questions that measured information recall. A significant negative correlation between DIF I and DI was observed in questions with four and five answer options regardless of the cognitive level measured. The highest DI values were found in moderate difficulty questions, while the worst DE was observed for the easiest questions. Questions that measured analytical and problem-solving abilities were more difficult than those measuring information recall. Questions with four and five answer options had excellent discrimination. CONCLUSIONS Single best answer MCQs are a valuable assessment tool capable of evaluating higher cognitive skills. Significant correlation between DIF I and DI can indicate the examination quality. Higher quality MCQs are constructed using four and five answer options.
10
Difficulty scoring system in laparoscopic distal pancreatectomy. J Hepatobiliary Pancreat Sci 2018; 25:489-497. PMID: 30118575. DOI: 10.1002/jhbp.578.
Abstract
BACKGROUND Several factors affect the level of difficulty of laparoscopic distal pancreatectomy (LDP). The purpose of this study was to develop a difficulty scoring (DS) system to quantify the degree of difficulty in LDP. METHODS We collected clinical data for 80 patients who underwent LDP. A 10-level difficulty index was developed and subcategorized into a three-level difficulty index: 1-3 as low, 4-6 as intermediate, and 7-10 as high. The automatic linear modeling (LINEAR) statistical tool was used to identify factors that significantly increase the level of difficulty in LDP. RESULTS Concordance between the operator's 10-level DS and the reviewers' 10-level DS, the LINEAR index DS, and the clinical index DS was analyzed; the weighted Cohen's kappa statistics were 0.869, 0.729, and 0.648, respectively, showing good to excellent inter-rater agreement. We identified five factors significantly affecting the level of difficulty in LDP: type of operation, resection line, proximity of tumor to a major vessel, tumor extension to peripancreatic tissue, and left-sided portal hypertension/splenomegaly. CONCLUSIONS This novel DS for LDP adequately quantified the degree of difficulty and can be useful for selecting patients for LDP, in conjunction with fitness for surgery and prognosis.
11
Are Multiple Choice Questions for Post Graduate Dental Entrance Examinations Spot On?-Item Analysis of MCQs in Prosthodontics in India. J Natl Med Assoc 2017; 110:455-458. PMID: 30129514. DOI: 10.1016/j.jnma.2017.11.001.
Abstract
BACKGROUND Construction of appropriate test items is a challenge in preparing quality multiple choice questions. Item analysis provides valuable feedback on the validity of multiple choice questions. The present study was conducted to evaluate the difficulty index, discrimination index and distracter efficiency of the items in the multiple choice questions of post graduate dental entrance examinations. METHODS A list of 20 MCQs on an introductory topic was taken from entrance exam MCQ books and administered to 104 undergraduate students. RESULTS In the present study, 15% of the MCQs related to the impression-making procedure were difficult, with a difficulty index (p) of less than 30%; 15% were poor discriminators; and 55% had at least one non-functional distracter. CONCLUSION Item analysis of MCQs in post graduate entrance examinations demonstrated low difficulty index, discrimination index and distracter efficiency. Hence, we propose a strong need for faculty training in test construction and post-validation of items.
12
Abstract
Objective To analyze the psychometric indices of Anatomy question items in a modular system assessment. Methods A quantitative study was done to determine the quality of MCQs and to analyze the performance of 100 first-year MBBS students. Each module covers different subjects of the MBBS curriculum, but psychometric analysis was done for the subject of Anatomy only. The assessment results of 3 modules were taken and checked by item analysis to see the mean differences between the modules using ANOVA. Post hoc analysis was performed using the Tukey HSD test. Results A total of 140 one-best (OB) Anatomy MCQ items were analyzed for difficulty index, discriminatory index and reliability. The difficulty index was found to be higher in module I than in modules II and III. The discriminatory index was comparatively higher in module II, whereas the reliability of module III was significantly higher than that of the other modules. Results were considered significant at p ≤ 0.05. Conclusions The psychometric analysis of Anatomy MCQs showed average difficulty, good discrimination and reliability.
13
Abstract
Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters: difficulty index (DIF I), discrimination index (DI), and distracter efficiency (DE). Objectives: The objective of this study was to assess the quality of MCQs currently in use in pharmacology and discard the MCQs not found useful. Materials and Methods: A class test on the central nervous system unit was conducted in the Department of Pharmacology. The test comprised 50 MCQs/items and 150 distracters. A correct response to an item was awarded one mark, with no negative marking for an incorrect response. Each item was analyzed for the three parameters: DIF I, DI, and DE. Results: The DIF I of 38 (76%) items was in the acceptable range (P = 30–70%), 11 (22%) items were too easy (P > 70%), and 1 (2%) item was too difficult (P < 30%). The DI of 31 (62%) items was excellent (d > 0.35), of 12 (24%) items was good (d = 0.20–0.34), and of 7 (14%) items was poor (d < 0.20). The 50 items had 150 distracters in total; among these, 27 (18%) were nonfunctional distracters (NFDs) and 123 (82%) were functional. Eleven items had one NFD and 8 had two. Based on these parameters, 6 items were discarded, 17 were revised, and 27 were kept for subsequent use. Conclusion: Item analysis is a valuable tool as it helps us retain the valuable MCQs and discard the items which are not useful. It also helps in increasing our skills in test construction and identifies the specific areas of course content which need greater emphasis or clarity.
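The cut-offs quoted in this abstract (DIF I acceptable at 30-70%, DI excellent above 0.35, good at 0.20-0.34, poor below 0.20) map directly onto a small classifier. The handling of the exact 0.35 boundary is our assumption, since the abstract's ranges leave it ambiguous.

```python
# Item triage using the thresholds quoted in the abstract above
# (boundary handling at exactly 0.35 is assumed, not stated).
def classify_item(dif_pct, di):
    """Label one item by difficulty index (%) and discrimination index."""
    if dif_pct > 70:
        dif_label = "too easy"
    elif dif_pct < 30:
        dif_label = "too difficult"
    else:
        dif_label = "acceptable"
    if di > 0.35:
        di_label = "excellent"
    elif di >= 0.20:
        di_label = "good"
    else:
        di_label = "poor"
    return dif_label, di_label

print(classify_item(55, 0.28))  # → ('acceptable', 'good')
```

Items labelled acceptable/good or better would be kept for the bank; a "too easy" or "poor" label flags the item for revision or discard, mirroring the paper's triage.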
14
The development and validation of a test of science critical thinking for fifth graders. SpringerPlus 2015; 4:741. PMID: 26640753. PMCID: PMC4661166. DOI: 10.1186/s40064-015-1535-0.
Abstract
The paper described the development and validation of the Test of Science Critical Thinking (TSCT) to measure three critical thinking skill constructs: comparing and contrasting, sequencing, and identifying cause and effect. The initial TSCT consisted of 55 multiple choice test items, each of which required participants to select a correct response and a correct choice of critical thinking used for their response. Data were obtained from a purposive sampling of 30 fifth graders in a pilot study carried out in a primary school in Sabah, Malaysia. Students underwent sessions of teaching and learning activities for 9 weeks using the Thinking Maps-aided Problem-Based Learning Module before they answered the TSCT. Analyses were conducted to check the difficulty index (p) and discrimination index (d), internal consistency reliability, content validity, and face validity. Analysis of the test–retest reliability data was conducted separately for a group of fifth graders with similar ability. Findings of the pilot study showed that out of the initial 55 administered items, only 30 items with a relatively good difficulty index (p) ranging from 0.40 to 0.60 and a good discrimination index (d) ranging from 0.20 to 1.00 were selected. The Kuder–Richardson reliability value was found to be appropriate and relatively high at 0.70, 0.73 and 0.92 for identifying cause and effect, sequencing, and comparing and contrasting, respectively. The content validity index obtained from three expert judgments equalled or exceeded 0.95. In addition, test–retest reliability showed a good, statistically significant correlation (r = 0.76, P < 0.01). From the above results, the selected 30-item TSCT was found to have sufficient reliability and validity and would therefore represent a useful tool for measuring critical thinking ability among fifth graders in primary science.
15
Abstract
Introduction George Winter attempted to assess the depth and difficulty of extracting impacted mandibular wisdom molars by describing three imaginary lines drawn on an intra-oral radiograph. Of these lines, the red line is the only one which is measured and great importance is attached to its actual length. Method The authors of this short paper describe the difficulty in drawing this red line accurately through examples. Conclusion The authors believe that Winter's lines and their interpretation are only of historical value and have no place in contemporary texts on oral surgery.
16
A novel difficulty scoring system for laparoscopic liver resection. J Hepatobiliary Pancreat Sci 2014; 21:745-753. PMID: 25242563. DOI: 10.1002/jhbp.166.
Abstract
Early on, laparoscopic liver resection (LLR) was limited to partial resection, but major LLR is no longer rare. A difficulty scoring system is required to guide surgeons in advancing from simple to highly technical laparoscopic resections. Subjects were 90 patients who had undergone pure LLR at three medical institutions (30 patients/institution) from January 2011 to April 2014. Surgical difficulty was assessed by the operator using an index of 1-10 with the following divisions: 1-3 low difficulty, 4-6 intermediate difficulty, and 7-10 high difficulty. The weighted kappa statistic was used to calculate the concordance between the operators' and reviewers' (expert surgeons') difficulty indices. Inter-rater agreement (weighted kappa statistic) between the operators' and reviewers' assessments was 0.89 with the three-level difficulty index and 0.80 with the 10-level difficulty index. A 10-level difficulty index by linear modeling based on clinical information gave a weighted kappa statistic of 0.72, and one scored by the extent of liver resection, tumor location, tumor size, liver function, and tumor proximity to major vessels gave a weighted kappa statistic of 0.68. We proposed a new scoring system to predict the difficulty of various LLRs preoperatively. The calculated score reflected difficulty well.
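The inter-rater agreement measure used here, the weighted Cohen's kappa, can be sketched in a few lines. This is a generic linear-weighted implementation on hypothetical ratings, since the paper does not spell out its weighting scheme.

```python
# Linear-weighted Cohen's kappa for two raters scoring the same cases
# on an ordinal scale (e.g. a three-level difficulty index).

def weighted_kappa(rater_a, rater_b, categories):
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # Observed joint distribution and each rater's marginal distribution.
    obs = [[0.0] * k for _ in range(k)]
    for x, y in zip(rater_a, rater_b):
        obs[idx[x]][idx[y]] += 1 / n
    pa = [sum(x == c for x in rater_a) / n for c in categories]
    pb = [sum(y == c for y in rater_b) / n for c in categories]
    disagree_obs = disagree_exp = 0.0
    for i in range(k):
        for j in range(k):
            w = abs(i - j) / (k - 1)      # linear disagreement weight
            disagree_obs += w * obs[i][j]
            disagree_exp += w * pa[i] * pb[j]
    return 1 - disagree_obs / disagree_exp

ops = ["low", "low", "mid", "mid", "high", "high"]   # hypothetical operator ratings
rev = ["low", "mid", "mid", "mid", "high", "high"]   # hypothetical reviewer ratings
print(round(weighted_kappa(ops, rev, ["low", "mid", "high"]), 2))
```

A single one-step disagreement out of six cases yields a kappa of 0.8, in the same "good to excellent" range the study reports.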
17
Item and Test Analysis to Identify Quality Multiple Choice Questions (MCQs) from an Assessment of Medical Students of Ahmedabad, Gujarat. Indian J Community Med 2014; 39:17-20. PMID: 24696535. PMCID: PMC3968575. DOI: 10.4103/0970-0218.126347.
Abstract
BACKGROUND Multiple choice questions (MCQs) are frequently used to assess students in different educational streams for their objectivity and wide coverage in less time. However, the MCQs used must be of quality, which depends upon the difficulty index (DIF I), discrimination index (DI) and distracter efficiency (DE). OBJECTIVE To evaluate MCQs or items and develop a pool of valid items by assessing DIF I, DI and DE, and to revise/store or discard items based on the obtained results. SETTINGS The study was conducted in a medical school in Ahmedabad. MATERIALS AND METHODS An internal examination in Community Medicine was conducted after 40 hours of teaching during 1st MBBS, attended by 148 out of 150 students. A total of 50 MCQs or items and 150 distractors were analyzed. STATISTICAL ANALYSIS Data were entered and analyzed in MS Excel 2007; simple proportions, means, standard deviations and coefficients of variation were calculated, and the unpaired t test was applied. RESULTS Out of 50 items, 24 had "good to excellent" DIF I (31-60%) and 15 had "good to excellent" DI (> 0.25). Mean DE was 88.6%, considered ideal/acceptable, and non-functional distractors (NFDs) made up only 11.4%. Mean DI was 0.14. A poor DI (< 0.15), negative for 10 items, indicates poor preparedness of students and some issues with the framing of at least some of the MCQs. An increased proportion of NFDs (incorrect alternatives selected by < 5% of students) in an item decreases its DE and makes it easier. There were 15 items with 17 NFDs, while the remaining items had no NFD, with a mean DE of 100%. CONCLUSION The study emphasizes the selection of quality MCQs which truly assess knowledge and are able to differentiate students of different abilities in the correct manner.
18
Palatal fistulae: a comprehensive classification and difficulty index. J Maxillofac Oral Surg 2013; 13:305-309. PMID: 25018605. DOI: 10.1007/s12663-013-0535-2.
Abstract
INTRODUCTION Palatal fistula formation is a known complication of palatoplasty. Numerous classifications have been proposed that help in identifying the location of the fistula and systematically arranging data for record keeping, but they do not assess the difficulty level of the fistula. Management of fistulae can be very tricky, and definitive success cannot be guaranteed even in the best of hands. Hence we devised a classification system and a difficulty index to help evaluate the difficulty level, plan treatment accordingly and predict the prognosis prior to surgery. MATERIALS AND METHODS We reviewed 610 cases of palatal fistula operated at our center from May 2003 to May 2010, with a minimum follow-up of 6 months. They were classified according to our classification, and the difficulty index was also assessed. The data were tabulated and analysed. RESULTS Longitudinal fistulae showed a recurrence rate of 7.87%, whereas transverse fistulae showed a recurrence rate of 19.66%; the total recurrence rate was 11.31%. Unilateral clefts with fistulae showed a recurrence of 6.55%, whereas bilateral clefts with fistulae showed a recurrence of 14.17%. A total of 220 Grade 1 and 390 Grade 2 fistulae were managed. Of these, 7 (3.18%) Grade 1 and 62 (15.90%) Grade 2 fistulae recurred. 90% of failed cases showed a decrease in the size of the fistula. CONCLUSION Classification and evaluation of the difficulty of a palatal fistula is essential to plan the surgical treatment and give better results. Bidimensional fistulae in the anterior hard palate are associated with a higher recurrence rate. Also, fistulae in bilateral clefts are more difficult to close than those in unilateral clefts. Classification of fistulae according to the difficulty index helps in pre-operative judgment of the outcome.