1
Aprigio I, Dos Santos PPP, Gauer G. International Trauma Questionnaire and Posttraumatic Cognitions Inventory-9: validity evidence and measurement invariance of their Brazilian versions. Psicol Reflex Crit 2024;37:18. PMID: 38710873. DOI: 10.1186/s41155-024-00297-z.
Abstract
BACKGROUND The International Trauma Questionnaire (ITQ) is used to measure posttraumatic stress disorder (PTSD) and complex posttraumatic stress disorder (CPTSD) symptoms, and the Posttraumatic Cognitions Inventory-9 (PTCI-9) is used to measure posttraumatic cognitions. Both tools have been translated for use in Brazil. However, the psychometric properties of the Brazilian versions had not been investigated, and no study had verified the invariance of these tools across multiple traumatic event types. OBJECTIVE This study examined the validity, reliability, and measurement invariance of the Brazilian versions of the ITQ and the PTCI-9 across trauma type, gender, race, age group, education level, and geographical region. METHODS A total of 2,111 people (67.74% women) participated in an online survey. The scale models were tested via confirmatory factor analyses, and measurement invariance was tested through multigroup analyses. Pearson's correlation analyses were used to examine the relationships between PTSD, CPTSD, posttraumatic cognitions, and depressive symptoms. RESULTS Except for the affective dysregulation factor, the reliabilities of the ITQ and PTCI-9 dimensions were adequate. Models with six correlated dimensions for the ITQ and three correlated dimensions for the PTCI-9 showed adequate fit to the data. The ITQ and PTCI-9 exhibited scalar invariance across gender, race, age group, education level, and geographical region. The ITQ also demonstrated full invariance across trauma types. The factors of both instruments were related to each other and to depressive symptoms, with the largest effect sizes involving posttraumatic cognitions and CPTSD symptoms. CONCLUSION We recommend using the Brazilian versions of the ITQ and PTCI-9, which are crucial tools for assessing and treating trauma-related disorders.
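The correlational analyses described above (Pearson's r among symptom, cognition, and depression scores) can be sketched as follows. This is a minimal illustration on simulated data; the variable names and effect sizes are assumptions, not the study's results.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 500

# Simulate a shared latent "trauma severity" factor driving three observed scores
latent = rng.normal(size=n)
ptsd = latent + rng.normal(scale=0.8, size=n)        # PTSD symptom score
cognitions = latent + rng.normal(scale=0.8, size=n)  # posttraumatic cognitions
depression = latent + rng.normal(scale=1.2, size=n)  # depressive symptoms

for name, scores in [("cognitions", cognitions), ("depression", depression)]:
    r, p = pearsonr(ptsd, scores)
    print(f"PTSD vs {name}: r = {r:.2f}, p = {p:.2g}")
```

With shared latent variance like this, the correlations come out moderate-to-large, mirroring the pattern (not the values) the abstract reports.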
Affiliation(s)
- Isabelle Aprigio
- Federal University of Rio Grande do Sul (UFRGS), Rua Ramiro Barcelos, 2600, Room 227, Porto Alegre, RS, CEP 91900-410, Brazil.
- Gustavo Gauer
- Federal University of Rio Grande do Sul (UFRGS), Rua Ramiro Barcelos, 2600, Room 227, Porto Alegre, RS, CEP 91900-410, Brazil
2
Mukhalalati B, Yakti O, Elshami S. A scoping review of the questionnaires used for the assessment of the perception of undergraduate students of the learning environment in healthcare professions education programs. Adv Health Sci Educ Theory Pract 2024. PMID: 38683300. DOI: 10.1007/s10459-024-10319-1.
Abstract
The learning environment (LE) includes the social interactions, organizational culture, structures, and physical and virtual spaces that influence students' learning experiences. Despite numerous studies exploring healthcare professional students' (HCPS) perception of their LE, the validity evidence for the questionnaires used remains unclear. This scoping review aimed to identify questionnaires used to examine undergraduate HCPS' perception of their LE and to assess their validity evidence. Five key concepts were used: (1) higher education; (2) questionnaire; (3) LE; (4) perception; and (5) health professions (HP). The PubMed, ERIC, ProQuest, and Cochrane databases were searched for studies developing or adapting questionnaires to examine the LE. This review employed the APERA standards of validity evidence and Beckman et al.'s (J Gen Intern Med 20:1159-1164, 2005) interpretation of these standards across five categories: content, internal structure, response process, relation to other variables, and consequences. Across the 41 questionnaires included in this review, the analysis revealed a predominant emphasis on the content and internal structure categories; however, fewer than 10% of the included questionnaires provided information in the relation to other variables, consequences, and response process categories. Most of the identified questionnaires were concentrated in the fields of medicine and nursing, followed by dentistry. This review identified diverse questionnaires utilized for examining students' perception of their LE across different HPs. Given the limited validity evidence for existing questionnaires, future research should prioritize the development and validation of psychometric measures. This will ultimately ensure sound, evidence-based quality improvement of the LE in HP education programs.
Affiliation(s)
- Banan Mukhalalati
- Clinical Pharmacy and Practice Department, College of Pharmacy, QU Health, Qatar University, PO Box 2713, Doha, Qatar.
- Ola Yakti
- Clinical Pharmacy and Practice Department, College of Pharmacy, QU Health, Qatar University, PO Box 2713, Doha, Qatar
- Sara Elshami
- Clinical Pharmacy and Practice Department, College of Pharmacy, QU Health, Qatar University, PO Box 2713, Doha, Qatar
3
Risgaard AL, Andersen IB, Friis ML, Tolsgaard MG, Danstrup CS. Validating the virtual: a deep dive into ultrasound simulator metrics in otorhinolaryngology. Eur Arch Otorhinolaryngol 2024;281:1905-1911. PMID: 38177897. PMCID: PMC10942893. DOI: 10.1007/s00405-023-08421-y.
Abstract
PURPOSE This study aimed to assess the validity of simulation-based assessment of ultrasound skills for thyroid ultrasound. METHODS The study collected validity evidence for simulation-based ultrasound assessment of thyroid ultrasound skills. Experts (n = 8) and novices (n = 21) completed a test containing two tasks and four cases on a virtual reality ultrasound simulator (U/S Mentor's Neck Ultrasound Module). Validity evidence was collected and structured according to Messick's validity framework. The assessments being evaluated included built-in simulator metrics and expert-based evaluations using the Objective Structured Assessment of Ultrasound Skills (OSAUS) scale. RESULTS Out of 64 built-in simulator metrics, 9 (14.1%) exhibited validity evidence. The internal consistency of these metrics was strong (Cronbach's α = 0.805) with high test-retest reliability (intraclass correlation coefficient = 0.911). Novices achieved an average score of 41.9% (SD = 24.3) of the maximum, contrasting with experts at 81.9% (SD = 16.7). Time comparisons indicated minor differences between experts (median: 359 s) and novices (median: 376.5 s). All OSAUS items differed significantly between the two groups. The correlation between correctly entered clinical findings and the OSAUS scores was 0.748 (p < 0.001). The correlation between correctly entered clinical findings and the metric scores was 0.801 (p < 0.001). CONCLUSION While simulation-based training is promising, only 14% of built-in simulator metrics could discriminate between novices and ultrasound experts. Already-established competency frameworks such as OSAUS provided strong validity evidence for the assessment of otorhinolaryngology ultrasound competence.
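The internal-consistency figure reported above (Cronbach's α = 0.805) is computed directly from the matrix of metric scores. A minimal sketch of the computation on simulated scores (illustrative data, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
skill = rng.normal(size=(200, 1))                       # latent skill per test-taker
metrics = skill + rng.normal(scale=0.7, size=(200, 9))  # 9 correlated metric scores
print(f"alpha = {cronbach_alpha(metrics):.3f}")
```

Because the nine simulated metrics share a latent skill factor, α comes out high; fully independent metrics would give α near zero.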
Affiliation(s)
- Anne Line Risgaard
- NordSim, Centre for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark
- Department of Otorhinolaryngology - Head and Neck Surgery, Aalborg University Hospital, Aalborg, Denmark
- Iben Bang Andersen
- NordSim, Centre for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark.
- Department of Otorhinolaryngology - Head and Neck Surgery, Aalborg University Hospital, Aalborg, Denmark.
- Mikkel Lønborg Friis
- NordSim, Centre for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark
- Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Martin Grønnebæk Tolsgaard
- Copenhagen Academy for Medical Education and Simulation (CAMES) Rigshospitalet and Center for Fetal Medicine, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
- Christian Sander Danstrup
- Department of Otorhinolaryngology - Head and Neck Surgery, Aalborg University Hospital, Aalborg, Denmark
- Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
4
Teslak KE, Post JH, Tolsgaard MG, Rasmussen S, Purup MM, Friis ML. Simulation-based assessment of upper abdominal ultrasound skills. BMC Med Educ 2024;24:15. PMID: 38172820. PMCID: PMC10765816. DOI: 10.1186/s12909-023-05018-1.
Abstract
BACKGROUND Ultrasound is a safe and effective diagnostic tool used within several specialties. However, the quality of ultrasound scans relies on sufficiently skilled clinician operators. The aim of this study was to explore the validity of automated assessments of upper abdominal ultrasound skills using an ultrasound simulator. METHODS Twenty-five novices and five experts were recruited, all of whom completed an assessment program for the evaluation of upper abdominal ultrasound skills on a virtual reality simulator. The program included five modules that assessed different organ systems using automated simulator metrics. We used Messick's framework to explore the validity evidence of these simulator metrics and to determine the contents of a final simulator test, and we used the contrasting groups method to establish a pass/fail level for that test. RESULTS Thirty-seven of the 60 metrics were able to discriminate between novices and experts (p < 0.05). On the final simulator test, comprising the metrics with validity evidence, the median simulator score was 26.68% (range: 8.1-40.5%) for novices and 85.1% (range: 56.8-91.9%) for experts. The internal structure was assessed by Cronbach's α (0.93) and the intraclass correlation coefficient (0.89). The pass/fail level was determined to be 50.9%, a criterion under which no novices passed and no experts failed. CONCLUSIONS This study collected validity evidence for simulation-based assessment of upper abdominal ultrasound examinations, which is the first step toward competency-based training. Future studies may examine how competency-based training in the simulated setting translates into improvements in clinical performance.
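The contrasting groups method used above sets the pass/fail level where the novice and expert score distributions cross. A minimal sketch assuming normally distributed scores, with simulated data loosely echoing the reported score ranges (not the study's data):

```python
import numpy as np
from scipy.stats import norm

def contrasting_groups_cutoff(novice: np.ndarray, expert: np.ndarray) -> float:
    """Pass/fail score where the fitted normal densities intersect."""
    grid = np.linspace(min(novice.min(), expert.min()),
                       max(novice.max(), expert.max()), 10_000)
    f_nov = norm.pdf(grid, novice.mean(), novice.std(ddof=1))
    f_exp = norm.pdf(grid, expert.mean(), expert.std(ddof=1))
    # first grid point (scanning left to right) where the expert density overtakes
    return float(grid[np.argmax(f_exp > f_nov)])

rng = np.random.default_rng(2)
novice_scores = rng.normal(26.7, 9.0, size=25)  # hypothetical % scores
expert_scores = rng.normal(85.1, 10.0, size=5)
cutoff = contrasting_groups_cutoff(novice_scores, expert_scores)
print(f"pass/fail cutoff = {cutoff:.1f}%")
```

The crossing point of the two fitted densities lands between the group means, so (as in the study) almost no novices pass and almost no experts fail at that cutoff.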
Affiliation(s)
- Kristina E Teslak
- NordSim, Center for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark.
- Julie H Post
- NordSim, Center for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark
- Martin G Tolsgaard
- Copenhagen Academy for Medical Education and Simulation, Rigshospitalet, Copenhagen, Denmark
- Sten Rasmussen
- Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Mathias M Purup
- Department of Radiology, Aalborg University Hospital, Aalborg, Denmark
- Mikkel L Friis
- NordSim, Center for Skills Training and Simulation, Aalborg University Hospital, Aalborg, Denmark
5
Zanini DS, Peixoto EM, de Andrade JM, Fernandes IA, da Silva MPP. European health literacy survey questionnaire short form (HLS-Q12): adaptation and evidence of validity for the Brazilian context. Psicol Reflex Crit 2023;36:25. PMID: 37672100. PMCID: PMC10482809. DOI: 10.1186/s41155-023-00263-1.
Abstract
Health literacy (HL) refers to the knowledge, motivation, and skills needed to understand, evaluate, and apply health information, enabling appropriate daily decision making about health care and health promotion. Studies show that HL is associated with several social determinants, health outcomes, and health promotion. In Brazil, studies on this topic are still scarce. The present study therefore aimed to adapt the European Health Literacy Survey Questionnaire Short Form (HLS-Q12) for the Brazilian context, seek evidence of its validity and reliability, and estimate the parameters of its items. A total of 770 individuals participated, recruited through advertisements in the media and on social networks; 82.1% were female, aged between 18 and 83 years (M = 35.5, SD = 13.52), from 21 federative units of Brazil and the Federal District. Participants answered the HLS-Q12 and a sociodemographic questionnaire. Exploratory factor analysis indicated a unifactorial structure with good psychometric characteristics (GFI = 0.98; CFI = 0.98; RMSEA = 0.08; RMSR = 0.07). The Cronbach's alpha, Guttman's lambda-2, and McDonald's omega reliability indicators were all equal to 0.87. We conclude that the HLS-Q12 is an adequate instrument to assess the level of HL in the Brazilian population.
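Of the reliability indicators above, Guttman's lambda-2 can be computed directly from the item covariance matrix: λ2 = λ1 + sqrt(k/(k−1) · Σc²)/V, where λ1 = 1 − (sum of item variances)/V, V is the total-score variance, and the sum runs over squared off-diagonal covariances. A sketch on simulated item responses (illustrative only, not the HLS-Q12 data):

```python
import numpy as np

def guttman_lambda2(items: np.ndarray) -> float:
    """Guttman's lambda-2 for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    cov = np.cov(items, rowvar=False)
    total_var = cov.sum()                              # variance of the total score
    off_diag_sq = (cov - np.diag(np.diag(cov))) ** 2   # squared off-diagonal covariances
    lambda1 = 1 - np.trace(cov) / total_var
    return lambda1 + np.sqrt(k / (k - 1) * off_diag_sq.sum()) / total_var

rng = np.random.default_rng(3)
trait = rng.normal(size=(300, 1))                      # latent health literacy
items = trait + rng.normal(scale=0.8, size=(300, 12))  # 12 unifactorial items
print(f"lambda-2 = {guttman_lambda2(items):.2f}")
```

λ2 is always at least as large as Cronbach's α for the same data, which is why the two indicators often coincide (as they do in the abstract, at 0.87).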
6
Alterio RE, Nagaraj MB, Scott DJ, Tellez J, Radi I, Baker HB, Zeh HJ, Polanco PM. Developing a Robotic Surgery Curriculum: Selection of Virtual Reality Drills for Content Alignment. J Surg Res 2023;283:726-732. PMID: 36463811. DOI: 10.1016/j.jss.2022.11.019.
Abstract
INTRODUCTION Despite the importance of simulation-based training for robotic surgery, there is no consensus about its training curricula. Recently, a virtual reality (VR) platform (SimNow, Intuitive, Inc) was introduced with 33 VR drills but without evidence of their validity. As part of creating a new robotic VR curriculum, we assessed the drills' validity through content mapping and the alignment between learning goals and drill content. METHODS Three robotically trained surgeons content-mapped all 33 drills for how well they incorporated 15 surgical skills and also rated the drills' difficulty, usefulness, relevance, and uniqueness. Drills were added to the new curriculum based on consensus about ratings and historic learner data, grouped according to similar skill sets, and arranged in order of complexity. RESULTS The 33 drills were judged to have 12 of the 15 surgical skills as primary goals and 13 of 15 as secondary goals. Twenty of the 33 drills were selected for inclusion in the new curriculum; these had 11 of 15 skills as primary goals and 11 of 15 as secondary goals. However, skills involving energy sources, atraumatic handling, blunt dissection, fine dissection, and running suturing were poorly represented in the drills. Three previously validated inanimate drills were added to the curriculum to address the lacking skill domains. CONCLUSIONS We identified 20 of the 33 SimNow drills as a foundation for a robotic surgery curriculum based on content-oriented evidence, and we added 3 other drills to address identified gaps in drill content.
Affiliation(s)
- Rodrigo E Alterio
- Department of Surgery, University of Texas Southwestern, Dallas, Texas
- Madhuri B Nagaraj
- Department of Surgery, University of Texas Southwestern, Dallas, Texas
- Daniel J Scott
- Department of Surgery, University of Texas Southwestern, Dallas, Texas; Simulation Center, University of Texas Southwestern, Dallas, Texas
- Juan Tellez
- Department of Surgery, University of Texas Southwestern, Dallas, Texas
- Imad Radi
- Department of Surgery, University of Texas Southwestern, Dallas, Texas
- Hayley B Baker
- Department of Surgery, University of Texas Southwestern, Dallas, Texas
- Herbert J Zeh
- Department of Surgery, University of Texas Southwestern, Dallas, Texas
7
Engberg M, Mikkelsen S, Hörer T, Lindgren H, Søvik E, Frendø M, Svendsen MB, Lönn L, Konge L, Russell L, Taudorf M. Learning insertion of a Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) catheter: Is clinical experience necessary? A prospective trial. Injury 2023;54:1321-1329. PMID: 36907823. DOI: 10.1016/j.injury.2023.02.048.
Abstract
BACKGROUND Resuscitative endovascular balloon occlusion of the aorta (REBOA) is an emerging and potentially life-saving procedure, necessitating qualified operators in an increasing number of centres. The procedure shares technical elements with other vascular access procedures using the Seldinger technique, which is mastered by doctors not only in endovascular specialties but also in trauma surgery, emergency medicine, and anaesthesiology. We hypothesised that doctors mastering the Seldinger technique (experienced anaesthesiologists) would learn the technical aspects of REBOA with limited training and remain technically superior to doctors unfamiliar with the Seldinger technique (novice residents) given similar training. METHODS This was a prospective trial of an educational intervention. Three groups of doctors were enrolled: novice residents, experienced anaesthesiologists, and endovascular experts. The novices and the anaesthesiologists completed 2.5 h of simulation-based REBOA training. Their skills were tested before and 8-12 weeks after training using a standardised simulated scenario. The endovascular experts, constituting a reference group, were tested equivalently. All performances were video recorded and rated by three blinded experts using a validated assessment tool for REBOA (REBOA-RATE). Performances were compared between groups and with a previously published pass/fail cutoff. RESULTS Sixteen novices, 13 board-certified specialists in anaesthesiology, and 13 endovascular experts participated. Before training, the anaesthesiologists outperformed the novices by 30 percentage points of the maximum REBOA-RATE score (56% (SD 14%) vs 26% (SD 17%), p < 0.01). After training, there was no difference in skills between the two groups (78% (SD 11%) vs 78% (SD 14%), p = 0.93). Neither group reached the endovascular experts' skill level (89% (SD 7%), p < 0.05).
CONCLUSION For doctors mastering the Seldinger technique, there was an initial skills advantage from inter-procedural transfer when performing REBOA. However, after identical simulation-based training, novices performed as well as anaesthesiologists, indicating that vascular access experience is not a prerequisite for learning the technical aspects of REBOA. Both groups would need more training to reach technical proficiency.
Affiliation(s)
- Morten Engberg
- Copenhagen Academy for Medical Education and Simulation (CAMES), Centre for Human Resources and Education, the Capital Region of Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Denmark.
- Søren Mikkelsen
- The Mobile Emergency Care Unit, Department of Anaesthesiology and Intensive Care, Odense University Hospital, Odense, Denmark; The Prehospital Research Unit, Region of Southern Denmark, Odense University Hospital, Odense, Denmark; Department of Regional Health Research, University of Southern Denmark, Odense, Denmark
- Tal Hörer
- Department of Cardiothoracic and Vascular Surgery and Department of Surgery, Faculty of Life Science, Örebro University Hospital, Örebro, Sweden
- Hans Lindgren
- Department of Clinical Sciences, Faculty of Medicine, Lund University, Lund, Sweden; Department of Surgery, Section of Interventional Radiology, Helsingborg Hospital, Helsingborg, Sweden
- Edmund Søvik
- Department of Radiology and Nuclear Medicine, St. Olavs University Hospital, Trondheim, Norway
- Martin Frendø
- Copenhagen Academy for Medical Education and Simulation (CAMES), Centre for Human Resources and Education, the Capital Region of Denmark; Department of Plastic and Reconstructive Surgery, Copenhagen University Hospital Herlev, Denmark
- Morten Bo Svendsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Centre for Human Resources and Education, the Capital Region of Denmark
- Lars Lönn
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Denmark; Department of Radiology, Copenhagen University Hospital Rigshospitalet, Denmark
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Centre for Human Resources and Education, the Capital Region of Denmark; Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Denmark
- Lene Russell
- Copenhagen Academy for Medical Education and Simulation (CAMES), Centre for Human Resources and Education, the Capital Region of Denmark; Department of Anaesthesiology and Intensive Care, Copenhagen University Hospital Gentofte, Denmark
- Mikkel Taudorf
- Department of Clinical Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Denmark; Department of Radiology, Copenhagen University Hospital Rigshospitalet, Denmark
8
Jacobsen N, Larsen JD, Falster C, Nolsøe CP, Konge L, Graumann O, Laursen CB. Using Immersive Virtual Reality Simulation to Ensure Competence in Contrast-Enhanced Ultrasound. Ultrasound Med Biol 2022;48:912-923. PMID: 35227531. DOI: 10.1016/j.ultrasmedbio.2022.01.015.
Abstract
Contrast-enhanced ultrasound (CEUS) is used in various medical specialties as a diagnostic imaging tool and for procedural guidance. Experience in the procedure is currently attained via supervised clinical practice, which is constrained by patient availability and risk. Prior simulation-based training and subsequent assessment could improve and ensure competence before performance on patients, but no simulator currently exists. Immersive virtual reality (IVR) is a promising new simulation tool that can replicate complex interactions and environments that are unfeasible to achieve with traditional simulators. This study aimed to develop an IVR simulation-based test of core CEUS competencies and to gather validity evidence for the test in accordance with Messick's framework. The test was developed by IVR software specialists and clinical experts in CEUS and medical education, and it imitated a CEUS examination of a patient with a focal liver lesion, with emphasis on the pre-contrast preparations. Twenty-five medical doctors with varying CEUS experience were recruited as test participants, and their results were used to analyze test quality and to establish a pass/fail standard. The final test of 23 items had good internal reliability (Cronbach's α = 0.85) and discriminatory ability. The risks of false positives and negatives (9.1% and 23.6%, respectively) were acceptable for the test to be used as a certification tool prior to supervised clinical training in CEUS.
Affiliation(s)
- Niels Jacobsen
- Department of Respiratory Medicine, Odense University Hospital, Odense, Denmark; Odense Respiratory Research Unit (ODIN), Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Regional Center for Technical Simulation (TechSim), Odense University Hospital, Odense, Denmark.
- Jonas D Larsen
- Odense Respiratory Research Unit (ODIN), Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Radiology, Odense University Hospital, Odense, Denmark; Research and Innovation Unit of Radiology, University of Southern Denmark, Odense, Denmark
- Casper Falster
- Department of Respiratory Medicine, Odense University Hospital, Odense, Denmark; Odense Respiratory Research Unit (ODIN), Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Christian P Nolsøe
- Center for Surgical Ultrasound, Department of Surgery, Zealand University Hospital, Køge, Denmark; Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, The Capital Region of Denmark, Copenhagen, Denmark
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, The Capital Region of Denmark, Copenhagen, Denmark
- Ole Graumann
- Department of Radiology, Odense University Hospital, Odense, Denmark; Research and Innovation Unit of Radiology, University of Southern Denmark, Odense, Denmark
- Christian B Laursen
- Department of Respiratory Medicine, Odense University Hospital, Odense, Denmark; Odense Respiratory Research Unit (ODIN), Department of Clinical Research, University of Southern Denmark, Odense, Denmark
9
Cullen MW, Klarich KW, Baldwin KM, Engstler GJ, Mandrekar J, Scott CG, Beckman TJ. Validity of a cardiology fellow performance assessment: reliability and associations with standardized examinations and awards. BMC Med Educ 2022;22:177. PMID: 35291995. PMCID: PMC8925146. DOI: 10.1186/s12909-022-03239-4.
Abstract
BACKGROUND Most work on the validity of clinical assessments for measuring learner performance in graduate medical education has occurred at the residency level. Minimal research exists on the validity of clinical assessments for measuring learner performance in advanced subspecialties. We sought to determine the validity characteristics of cardiology fellows' assessment scores during subspecialty training, which represents the largest subspecialty of internal medicine. Validity evidence included item content, internal consistency reliability, and associations between faculty-of-fellow clinical assessments and other pertinent variables. METHODS This was a retrospective validation study exploring content, internal structure, and relations-to-other-variables validity evidence for scores on faculty-of-fellow clinical assessments that include the 10-item Mayo Cardiology Fellows Assessment (MCFA-10). Participants included 7 cardiology fellowship classes. The MCFA-10 item content included questions previously validated in the assessment of internal medicine residents. Internal structure evidence was assessed through Cronbach's α. The outcome for relations-to-other-variables evidence was the overall mean faculty-of-fellow assessment score (scale 1-5). Independent variables included common measures of fellow performance. FINDINGS Participants included 65 cardiology fellows. The overall mean ± standard deviation faculty-of-fellow assessment score was 4.07 ± 0.18. Content evidence for the MCFA-10 scores was based on published literature and core competencies. Cronbach's α was 0.98, suggesting high internal consistency reliability and offering evidence for internal structure validity. In multivariable analysis providing relations-to-other-variables evidence, mean assessment scores were independently associated with in-training examination scores (beta = 0.088 per 10-point increase; p = 0.05) and with receiving a departmental or institutional award (beta = 0.152; p = 0.001). Assessment scores were not associated with educational conference attendance, compliance with completion of required evaluations, faculty appointment upon completion of training, or performance on the board certification exam. R2 for the multivariable model was 0.25. CONCLUSIONS These findings provide sound validity evidence establishing item content, internal consistency reliability, and associations with other variables for faculty-of-fellow clinical assessment scores that include MCFA-10 items during cardiology fellowship. Relations-to-other-variables evidence included associations of assessment scores with performance on the in-training examination and receipt of competitive awards. These data support the utility of the MCFA-10 as a measure of performance during cardiology training and could serve as the foundation for future research on the assessment of subspecialty learners.
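The multivariable analysis above can be sketched with ordinary least squares. The data below are synthetic, generated only to echo the reported coefficients (0.088 per 10 in-training-examination points; 0.152 for an award); the sample size matches the study, but the noise level and resulting R² are assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 65
ite = rng.normal(500, 50, size=n)            # hypothetical in-training exam scores
award = (rng.random(n) < 0.3).astype(float)  # received an award (0/1)
assessment = (3.6 + 0.0088 * (ite - 500)     # i.e., 0.088 per 10-point ITE increase
              + 0.152 * award
              + rng.normal(scale=0.15, size=n))

# Fit assessment ~ intercept + ITE + award by least squares, then compute R^2
X = np.column_stack([np.ones(n), ite, award])
beta, *_ = np.linalg.lstsq(X, assessment, rcond=None)
resid = assessment - X @ beta
r2 = 1 - (resid ** 2).sum() / ((assessment - assessment.mean()) ** 2).sum()
print(f"ITE beta per 10 points = {10 * beta[1]:.3f}, award beta = {beta[2]:.3f}, R^2 = {r2:.2f}")
```

The recovered coefficients land near the generating values, illustrating how the reported betas and R² are obtained from such a model.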
Affiliation(s)
- Michael W Cullen
- Department of Cardiovascular Medicine, Mayo Clinic, 200 First St. SW, Rochester, Minnesota, 55905, USA.
- Kyle W Klarich
- Department of Cardiovascular Medicine, Mayo Clinic, 200 First St. SW, Rochester, Minnesota, 55905, USA
- Kristine M Baldwin
- Department of Cardiovascular Medicine, Mayo Clinic, 200 First St. SW, Rochester, Minnesota, 55905, USA
- Gregory J Engstler
- Department of Information Services, Mayo Clinic, 200 First St. SW, Rochester, Minnesota, 55905, USA
- Jay Mandrekar
- Department of Health Sciences Research, Division of Biomedical Statistics and Informatics, Mayo Clinic, 200 First St. SW, Rochester, Minnesota, 55905, USA
- Christopher G Scott
- Department of Health Sciences Research, Division of Biomedical Statistics and Informatics, Mayo Clinic, 200 First St. SW, Rochester, Minnesota, 55905, USA
- Thomas J Beckman
- Division of General Internal Medicine, Department of Internal Medicine, Mayo Clinic, 200 First St. SW, Rochester, Minnesota, 55905, USA
10
Jacobsen N, Nolsøe CP, Konge L, Graumann O, Dietrich CF, Sidhu PS, Gilja OH, Meloni MF, Berzigotti A, Harvey CJ, Deganello A, Prada F, Lerchbaumer MH, Laursen CB. Development of and Gathering Validity Evidence for a Theoretical Test in Contrast-Enhanced Ultrasound. Ultrasound Med Biol 2022;48:248-256. PMID: 34815128. DOI: 10.1016/j.ultrasmedbio.2021.10.016.
Abstract
Contrast-enhanced ultrasound (CEUS) is an imaging modality applied in a broad range of medical specialties for diagnosis, for guidance during biopsy procedures and ablation therapies, and for sonoporation therapy. Appropriate training and assessment of theoretical and practical competencies are recommended before practicing CEUS, but no validated assessment tools exist. This study aimed to develop a theoretical multiple-choice-question test of core CEUS competencies and to gather validity evidence for the test. An expert team developed the test via a Delphi process. The test was administered to medical doctors with varying CEUS experience, and the results were used to evaluate the test items and internal-consistency reliability, assess the test's ability to distinguish between different proficiency levels, and establish a pass/fail score. Validity evidence was gathered according to Messick's framework. The final test of 47 items could distinguish between operators with and without CEUS experience with acceptable reliability. The pass/fail score, however, carried considerable risk of false positives and negatives. The test may therefore be used as an entry test before learning practical CEUS competencies but is not recommended for certification purposes because of the risk of false positives and negatives.
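The false-positive and false-negative risks above follow from applying a cutoff to labeled scores: a false positive is an inexperienced examinee who passes, and a false negative is an experienced examinee who fails. A sketch with hypothetical scores out of 47 (simulated, not the study's data):

```python
import numpy as np

def pass_fail_error_rates(scores, experienced, cutoff):
    """False-positive and false-negative rates for a pass/fail cutoff."""
    scores = np.asarray(scores, dtype=float)
    experienced = np.asarray(experienced, dtype=bool)
    false_pos = (scores[~experienced] >= cutoff).mean()  # inexperienced who pass
    false_neg = (scores[experienced] < cutoff).mean()    # experienced who fail
    return false_pos, false_neg

rng = np.random.default_rng(5)
scores = np.concatenate([rng.normal(24, 7, size=40),   # without CEUS experience
                         rng.normal(36, 6, size=20)])  # with CEUS experience
experienced = np.r_[np.zeros(40, bool), np.ones(20, bool)]
fp, fn = pass_fail_error_rates(scores, experienced, cutoff=30)
print(f"false-positive rate = {fp:.1%}, false-negative rate = {fn:.1%}")
```

When the two score distributions overlap substantially, as simulated here, no single cutoff can keep both error rates low, which is why the abstract cautions against using the test for certification.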
Affiliation(s)
- Niels Jacobsen: Department of Respiratory Medicine, Odense University Hospital, Odense, Denmark; Regional Center for Technical Simulation (TechSim), Odense University Hospital, Odense, Denmark; Odense Respiratory Research Unit (ODIN), Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Christian P Nolsøe: Center for Surgical Ultrasound, Department of Surgery, Zealand University Hospital, Køge, Denmark; Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, The Capital Region of Denmark, Copenhagen, Denmark
- Lars Konge: Copenhagen Academy for Medical Education and Simulation (CAMES), Center for Human Resources and Education, The Capital Region of Denmark, Copenhagen, Denmark
- Ole Graumann: Department of Radiology, Odense University Hospital, Odense, Denmark; Research and Innovation Unit of Radiology, University of Southern Denmark, Odense, Denmark
- Christoph F Dietrich: Department of Internal Medicine, Hirslanden Clinic (Beau-Site, Salem-Spital, and Permanence), Bern, Switzerland
- Paul S Sidhu: Department of Radiology, King's College Hospital, Denmark Hill, London, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King's College London, United Kingdom
- Odd H Gilja: National Centre for Ultrasound in Gastroenterology, Haukeland University Hospital, Bergen, Norway; Department of Clinical Medicine, University of Bergen, Bergen, Norway
- Maria F Meloni: Department of Interventional Ultrasound, IGEA S.p.A. Multispecialty Medical Clinic, Milan, Italy; Department of Radiology, University of Wisconsin, Madison, Wisconsin, USA
- Annalisa Berzigotti: Department of Hepatology, University Clinic for Visceral Surgery and Medicine, University Hospital of Bern, University of Bern, Bern, Switzerland
- Chris J Harvey: Department of Imaging, Imperial College NHS Healthcare Trust, Hammersmith Hospital, London, United Kingdom
- Annamaria Deganello: Department of Radiology, King's College Hospital, Denmark Hill, London, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King's College London, United Kingdom
- Francesco Prada: Neurosurgery Unit, Department of Neuroscience, Alessandro Manzoni Hospital, Lecco, Italy; Acoustic Neuroimaging and Therapy Lab, Foundation IRCCS Carlo Besta Neurological Institute, Milan, Italy; Department of Neurological Surgery, University of Virginia Health Science Center, Charlottesville, Virginia, USA; Focused Ultrasound Foundation, Charlottesville, Virginia, USA
- Markus H Lerchbaumer: Charité University Hospital Berlin, Humboldt University of Berlin, Berlin, Germany; Department of Radiology, Berlin Institute of Health, Berlin, Germany
- Christian B Laursen: Department of Respiratory Medicine, Odense University Hospital, Odense, Denmark; Regional Center for Technical Simulation (TechSim), Odense University Hospital, Odense, Denmark; Odense Respiratory Research Unit (ODIN), Department of Clinical Research, University of Southern Denmark, Odense, Denmark
11
Gasmalla HE, Wadi M, Taha MH. Twelve tips for introducing the concept of validity argument in assessment to novice medical teachers in a workshop. MedEdPublish (2016) 2021;10:74. PMID: 38486553. PMCID: PMC10939636. DOI: 10.15694/mep.2021.000074.2.
Abstract
Background: Misconceptions about validity have been observed in its application by faculty and in its reporting in a significant amount of published work on student assessment. Consequently, efforts to disseminate information about the concept of validity in assessment, especially among novice medical teachers, are needed. Aim: This work provides guidance on how to deliver the concept of validity argument in assessment to novice medical teachers in a workshop. Methods: Critical reflection and a careful review of the relevant literature were used to develop these tips. Results and Conclusion: Twelve tips are presented to support instructors conducting workshops that introduce the concept of validity, especially to novice medical teachers.
Affiliation(s)
- Majed Wadi: Medical Education Department
12
Leon MG, Dinh TA, Heckman MG, Weaver SE, Chase LA, DeStephano CC. Correcting the Fundamentals of Laparoscopic Surgery "Illusion of Validity" in Laparoscopic Vaginal Cuff Suturing. J Minim Invasive Gynecol 2021;28:1927-1934. PMID: 34010696. DOI: 10.1016/j.jmig.2021.05.002.
Abstract
STUDY OBJECTIVE The "illusion of validity" is a cognitive bias in which the ability to interpret and predict surgical performance accurately is overestimated. To address this bias, we compared participants' performance on fundamentals of laparoscopic surgery (FLS) and non-FLS tasks with their cadaveric vaginal cuff suturing performance, to determine the simulation task most representative of laparoscopic vaginal cuff suturing. DESIGN Validity (Messick framework) study comparing FLS and non-FLS tasks with cadaveric vaginal cuff suturing. SETTING Simulation center cadaver laboratory. PARTICIPANTS Obstetrics and gynecology residents (n = 21), minimally invasive gynecologic surgery fellows (n = 3), gynecologic surgical subspecialists (n = 4), and general obstetrician/gynecologists (n = 10). INTERVENTIONS Tasks included a simulated vaginal cuff (ipsilateral port placement), needle passage through a metal eyelet loop (contralateral and ipsilateral), and intracorporeal knot tying (contralateral and ipsilateral). Simulation task times were compared with the time to place the first cadaveric vaginal cuff suture, as well as with in-person and blinded Global Operative Assessment of Laparoscopic Skills (GOALS) scores ("relations to other variables" validity evidence). Statistical analyses included Spearman's test of correlation (continuous and ordinal variables) or the Wilcoxon rank sum test (categoric variables). MEASUREMENTS AND MAIN RESULTS Simulated vaginal cuff suturing time showed a stronger association with cadaver cuff suturing time (r = 0.73, p < .001) than did FLS intracorporeal contralateral suturing time (r = 0.54, p < .001). Additional measures associated with cadaveric performance included subspecialty training (median: 82 vs 185 seconds, p = .002), number of total laparoscopic hysterectomies (r = -0.53, p < .001), number of laparoscopic cuff closures (r = -0.61, p < .001), number of simulated laparoscopic suturing experiences (r = -0.51, p < .001), and eyelet contralateral time (r = 0.52, p < .001). Strong agreement between the in-person and blinded GOALS scores (intraclass correlation coefficient = 0.80) supports response process evidence. Correlations of cadaver cuff time with in-person (Spearman's r = -0.84, p < .001) and blinded GOALS scores (r = -0.76, p < .001) support relations to other variables evidence. CONCLUSION The weaker correlation of FLS suturing with cadaver cuff suturing, compared with a simulated vaginal cuff model, may lead to an "illusion of validity" for assessment in gynecology. Because gynecology-specific validity evidence has not been well established for FLS, we recommend prioritizing a simulated vaginal cuff suturing assessment in addition to FLS.
Affiliation(s)
- Mateo G Leon: Department of Medical and Surgical Gynecology (Drs. Leon, Dinh, and DeStephano)
- Tri A Dinh: Department of Medical and Surgical Gynecology (Drs. Leon, Dinh, and DeStephano)
- Sarah E Weaver: Department of Obstetrics and Gynecology, University of Florida Health (Dr. Weaver), Jacksonville, Florida
- Lori A Chase: Department of Research Services (Dr. Chase), Mayo Clinic
13
Cervilla O, Vallejo-Medina P, Gómez-Berrocal C, Sierra JC. Development of the Spanish short version of Negative Attitudes Toward Masturbation Inventory. Int J Clin Health Psychol 2021;21:100222. PMID: 33613675. PMCID: PMC7868927. DOI: 10.1016/j.ijchp.2021.100222.
Abstract
Background/Objective: Masturbation has historically carried negative connotations, a consequence of traditional orthodox positions, despite its positive impact on health. Instruments developed to measure attitudes towards masturbation are scarce, and none has been validated in the Spanish adult population. This study aims to propose a short version of the Negative Attitudes Toward Masturbation Inventory (NATMI) and examine its psychometric properties (reliability and validity evidence) in the Spanish adult population. Method: A total of 4,116 heterosexual adults aged 18-83 years (M = 40.58; SD = 12.24; 54.64% women) participated in the study. In addition to the NATMI, they answered other scales assessing sexual attitudes, sexual desire, propensity to become sexually excited/inhibited, and sexual functioning. Results: Analysis of the construct validity of the NATMI yielded a reduced version of ten items grouped into a single factor explaining 66% of the variance (ordinal alpha = .95). The validity evidence is clear, as subjects with negative and positive attitudes towards masturbation differed in religiousness, frequency of masturbation, erotophilia, positive attitude towards sexual fantasies, sexual inhibition, and sexual functioning. Conclusions: The Spanish short version of the NATMI provides reliable and valid measures in the Spanish adult population.
Affiliation(s)
- Oscar Cervilla: Mind, Brain, and Behavior Research Center, Universidad de Granada, Granada, Spain
- Juan Carlos Sierra: Mind, Brain, and Behavior Research Center, Universidad de Granada, Granada, Spain
14
Bhawra J, Kirkpatrick SI, Hall MG, Vanderlee L, Hammond D. Initial Development and Evaluation of the Food Processing Knowledge (FoodProK) Score: A Functional Test of Nutrition Knowledge Based on Level of Processing. J Acad Nutr Diet 2021;121:1542-1550. PMID: 33612435. DOI: 10.1016/j.jand.2021.01.015.
Abstract
BACKGROUND Existing nutrition knowledge measures tend to be lengthy or tailored for specific contexts, making them unsuitable for population-based surveys. Given the growing emphasis within country-specific dietary guidelines on reducing consumption of highly processed foods, consumers' ability to understand and apply principles related to level of food processing could serve as a proxy measure of general nutrition knowledge. OBJECTIVE To examine the content validity of the Food Processing Knowledge (FoodProK) score through subject matter expert consultation with registered dietitian nutritionists (RDNs). METHODS RDNs in Canada (n = 64) completed an online survey, including the FoodProK, in January 2020. Participants rated the "healthiness" of 12 food products from four categories (fruit, meat, dairy, and grains) on a scale from 1 to 10. FoodProK scores were assigned based on the concordance of healthiness ratings within each food category with rankings under the NOVA classification system, in which less processed foods are considered healthier. For each category, one-way repeated-measures analysis of variance models tested whether the three product ratings differed significantly from one another. Descriptive statistics compared ratings and FoodProK scores across categories. Open-ended feedback was solicited to assess face validity of the score. RESULTS RDNs' FoodProK scores were strongly associated with level of food processing. Almost one in three RDNs received perfect FoodProK scores, and the mean score was 7.0 of 8.0 possible points. Within each category, the three foods received significantly different healthiness ratings, in the same order as the NOVA system (P < 0.001 for all contrasts). Open-ended responses showed that RDNs did not perceive meaningful differences between the processed meat products, suggesting the need to change one of the products in the meat category.
Overall, 80% of RDNs reported level of processing as an important indicator of the healthiness of foods. CONCLUSIONS Level of food processing represents a promising framework for assessing general nutrition knowledge in population-based surveys.
15
Blanié A, Amorim MA, Meffert A, Perrot C, Dondelli L, Benhamou D. Assessing validity evidence for a serious game dedicated to patient clinical deterioration and communication. Adv Simul (Lond) 2020;5:4. PMID: 32514382. PMCID: PMC7251894. DOI: 10.1186/s41077-020-00123-3.
Abstract
Background A serious game (SG) is a useful tool for nurse training. The objective of this study was to assess validity evidence for a new SG designed to improve nurses' ability to detect patient clinical deterioration. Methods The SG (LabForGames Warning) was developed through interaction between clinical and pedagogical experts and one developer. For the game study, consenting nurses were divided into three groups: nursing students (pre-graduate) (group S), recently graduated nurses (graduated < 2 years before the study) (group R) and expert nurses (graduated > 4 years before the study and working in an ICU) (group E). Each volunteer played three cases of the game (haemorrhage, brain trauma and obstructed intestinal tract). Validity evidence was assessed following Messick's framework: content, response process (questionnaire, observational analysis), internal structure, relations to other variables (by scoring each case and measuring playing time) and consequences (a posteriori analysis). Results Content validity was supported by the game design produced by clinical, pedagogical and interprofessional experts in accordance with the French nurse training curriculum, a literature review and pilot testing. Seventy-one nurses participated in the study: S (n = 25), R (n = 25) and E (n = 21). Content validity in all three cases was highly valued by group E. Response process evidence was supported by good security control. There was no significant difference in the three groups' high rating of the game's realism, satisfaction and educational value. All participants stated that their knowledge of the different steps of the clinical reasoning process had improved. Regarding internal structure, the factor analysis showed a common source of variance between the steps of the clinical reasoning process and the communication or situational awareness errors made predominantly by students. No statistical difference was observed between groups regarding scores and playing time. A posteriori analysis of the results of final examinations assessing study-related topics found no significant difference between group S participants and students who did not participate in the study. Conclusion While it appears that this SG cannot be used for summative assessment (score validity undemonstrated), it is positively valued as an educational tool. Trial registration: ClinicalTrials.gov ID NCT03092440.
Affiliation(s)
- Antonia Blanié: Centre de simulation LabForSIMS, Faculté de médecine Paris Saclay, 94275 Le Kremlin Bicêtre, France; Département d'Anesthésie-Réanimation chirurgicale, CHU Bicêtre, 94275 Le Kremlin Bicêtre, France; CIAMS, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France
- Michel-Ange Amorim: CIAMS, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France
- Arnaud Meffert: Centre de simulation LabForSIMS, Faculté de médecine Paris Saclay, 94275 Le Kremlin Bicêtre, France; Département d'Anesthésie-Réanimation chirurgicale, CHU Bicêtre, 94275 Le Kremlin Bicêtre, France
- Dan Benhamou: Centre de simulation LabForSIMS, Faculté de médecine Paris Saclay, 94275 Le Kremlin Bicêtre, France; Département d'Anesthésie-Réanimation chirurgicale, CHU Bicêtre, 94275 Le Kremlin Bicêtre, France; CIAMS, Université Paris-Saclay, 91405 Orsay Cedex, France; CIAMS, Université d'Orléans, 45067 Orléans, France
16
Hawkins M, Cheng C, Elsworth GR, Osborne RH. Translation method is validity evidence for construct equivalence: analysis of secondary data routinely collected during translations of the Health Literacy Questionnaire (HLQ). BMC Med Res Methodol 2020;20:130. PMID: 32456680. PMCID: PMC7249296. DOI: 10.1186/s12874-020-00962-8.
Abstract
BACKGROUND Cross-cultural research with patient-reported outcome measures (PROMs) assumes that the PROM in the target language will measure the same construct in the same way as the PROM in the source language. Yet translation methods are rarely used to qualitatively maximise construct equivalence or to describe the intents of each item to support common understanding within translation teams. This study aimed to systematically investigate the utility of the Translation Integrity Procedure (TIP), in particular the use of item intent descriptions, to maximise construct equivalence during the translation process, and to demonstrate how documented data from the TIP contribute evidence to a validity argument for construct equivalence between translated and source language PROMs. METHODS Analysis of secondary data was conducted on routinely collected data in TIP Management Grids of translations (n = 9) of the Health Literacy Questionnaire (HLQ) that took place between August 2014 and August 2015: Arabic, Czech, French (Canada), French (France), Hindi, Indonesian, Slovak, Somali and Spanish (Argentina). Two researchers first independently coded the data deductively against nine common types of translation errors; a second round of coding added a 10th code identified during the first round. Coded data were compared for discrepancies and, when needed, checked with a third researcher for final code allocation. RESULTS Across the nine translations, 259 changes were made to provisional forward translations and were coded into 10 types of errors. The most frequently coded errors were Complex word or phrase (n = 99), Semantic (n = 54) and Grammar (n = 27); the least frequently coded were Cultural errors (n = 7) and Printed errors (n = 5). CONCLUSIONS To advance PROM validation practice, this study investigated a documented translation method that includes the careful specification of item intent descriptions. Assumptions that translated PROMs have construct equivalence across linguistic contexts can be incorrect due to errors in translation. Of particular concern was translators' use of complex, high-level words, which, if undetected, could cause flawed interpretation of data from people with low literacy. Item intent descriptions can support translations to maximise construct equivalence, and documented translation data can contribute evidence to justify score interpretation and use of translated PROMs in new linguistic contexts.
Affiliation(s)
- Melanie Hawkins: School of Health and Social Development, Faculty of Health, Deakin University, Geelong, Australia; Centre for Global Health and Equity, Faculty of Health, Arts and Design, Swinburne University, AMDC building, Level 9, Room 907, 453/469-477 Burwood Road, Hawthorn, Australia
- Christina Cheng: School of Health and Social Development, Faculty of Health, Deakin University, Geelong, Australia
- Gerald R. Elsworth: School of Health and Social Development, Faculty of Health, Deakin University, Geelong, Australia; Centre for Global Health and Equity, Faculty of Health, Arts and Design, Swinburne University, Hawthorn, Australia
- Richard H. Osborne: Centre for Global Health and Equity, Faculty of Health, Arts and Design, Swinburne University, Hawthorn, Australia
17
Leijte E, Claassen L, Arts E, de Blaauw I, Rosman C, Botden SMBI. Training benchmarks based on validated composite scores for the RobotiX robot-assisted surgery simulator on basic tasks. J Robot Surg 2020;15:69-79. PMID: 32314094. PMCID: PMC7875949. DOI: 10.1007/s11701-020-01080-9.
Abstract
The RobotiX robot-assisted virtual reality simulator aims to aid the training of novice surgeons outside the operating room. This study aimed to determine validity evidence on multiple levels for the RobotiX simulator for basic skills. Participants were divided into a novice, a laparoscopic experienced, or a robotic experienced group based on their minimally invasive surgical experience. Two basic tasks were performed: wristed manipulation (Task 1) and vessel energy dissection (Task 2). Performance scores and a questionnaire regarding realism, didactic value, and usability were gathered (content). Composite scores (0-100), pass/fail values, and alternative benchmark scores were calculated. Twenty-seven novices, 21 laparoscopic, and 13 robotic experienced participants were recruited. Content validity evidence was scored positively overall. Statistically significant differences between novice and robotic experienced participants (construct) were found for movements left (Task 1 p = 0.009), movements right (Task 1 p = 0.009, Task 2 p = 0.021), path length left (Task 1 p = 0.020), and time (Task 1 p = 0.040, Task 2 p < 0.001). Composite scores differed significantly between robotic experienced and novice participants for Task 1 (85.5 versus 77.1, p = 0.044) and Task 2 (80.6 versus 64.9, p = 0.001). The pass/fail score with false-positive/false-negative percentages resulted in values of 75/100 with 46/9.1% (Task 1) and 71/100 with 39/7.0% (Task 2). Under the calculated benchmark scores, only a minority of novices passed on multiple parameters. Validity evidence on multiple levels was gathered for two basic robot-assisted surgical simulation tasks. The calculated benchmark scores can be used for future surgical simulation training.
Affiliation(s)
- Erik Leijte: Department of Surgery, Radboud University Medical Center, Geert Grooteplein 10 route 618, 6500HB, Nijmegen, The Netherlands; Department of Pediatric Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Linda Claassen: Department of Surgery, Radboud University Medical Center, Geert Grooteplein 10 route 618, 6500HB, Nijmegen, The Netherlands
- Elke Arts: Department of Surgery, Radboud University Medical Center, Geert Grooteplein 10 route 618, 6500HB, Nijmegen, The Netherlands
- Ivo de Blaauw: Department of Surgery, Radboud University Medical Center, Geert Grooteplein 10 route 618, 6500HB, Nijmegen, The Netherlands; Department of Pediatric Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Camiel Rosman: Department of Surgery, Radboud University Medical Center, Geert Grooteplein 10 route 618, 6500HB, Nijmegen, The Netherlands
- Sanne M B I Botden: Department of Surgery, Radboud University Medical Center, Geert Grooteplein 10 route 618, 6500HB, Nijmegen, The Netherlands; Department of Pediatric Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
18
Hatala R, Gutman J, Lineberry M, Triola M, Pusic M. How well is each learner learning? Validity investigation of a learning curve-based assessment approach for ECG interpretation. Adv Health Sci Educ Theory Pract 2019;24:45-63. PMID: 30171512. DOI: 10.1007/s10459-018-9846-x.
Abstract
Learning curves can support a competency-based approach to assessment for learning. When interpreting repeated assessment data displayed as learning curves, a key assessment question is: "How well is each learner learning?" We outline the validity argument and investigation relevant to this question, for a computer-based repeated assessment of competence in electrocardiogram (ECG) interpretation. We developed an on-line ECG learning program based on 292 anonymized ECGs collected from an electronic patient database. After diagnosing each ECG, participants received feedback including the computer interpretation, cardiologist's annotation, and correct diagnosis. In 2015, participants from a single institution, across a range of ECG skill levels, diagnosed at least 60 ECGs. We planned, collected and evaluated validity evidence under each inference of Kane's validity framework. For Scoring, three cardiologists' kappa for agreement on correct diagnosis was 0.92. There was a range of ECG difficulty across and within each diagnostic category. For Generalization, appropriate sampling was reflected in the inclusion of a typical clinical base rate of 39% normal ECGs. Applying generalizability theory presented unique challenges. Under the Extrapolation inference, group learning curves demonstrated expert-novice differences, performance increased with practice and the incremental phase of the learning curve reflected ongoing, effortful learning. A minority of learners had atypical learning curves. We did not collect Implications evidence. Our results support a preliminary validity argument for a learning curve assessment approach for repeated ECG interpretation with deliberate and mixed practice. This approach holds promise for providing educators and researchers, in collaboration with their learners, with deeper insights into how well each learner is learning.
Affiliation(s)
- Rose Hatala: Department of Medicine, St. Paul's Hospital, University of British Columbia, Suite 5907, Burrard Bldg, 1081 Burrard St, Vancouver, BC, V6Z 1Y6, Canada
- Jacqueline Gutman: Institute for Innovations in Medical Education, New York University School of Medicine, New York, NY, USA
- Matthew Lineberry: Zamierowski Institute for Experiential Learning, University of Kansas Medical Center and Health System, Kansas City, KS, USA
- Marc Triola: Institute for Innovations in Medical Education, New York University School of Medicine, New York, NY, USA
- Martin Pusic: Institute for Innovations in Medical Education, New York University School of Medicine, New York, NY, USA; Ronald O. Perelman Department of Emergency Medicine, New York University School of Medicine, New York, NY, USA
19
Hanley K, Gillespie C, Zabar S, Adams J, Kalet A. Monitoring communication skills progress of medical students: Establishing a baseline has value, predicting the future is difficult. Patient Educ Couns 2019;102:309-315. PMID: 30318384. DOI: 10.1016/j.pec.2018.09.010.
Abstract
OBJECTIVE To provide evidence for the validity of an Introductory Clinical Experience (ICE) implemented as a baseline assessment of medical students' clinical communication skills, to support the progression of those skills over time. METHODS In this longitudinal study of communication skills, medical students completed the ICE, then a Practice of Medicine (POM) Objective Structured Clinical Exam 8 months later, and the Comprehensive Clinical Skills Exam (CCSE) 25 months later. At each experience, trained standardized patients assessed students using the same behaviorally anchored checklist in three domains: Information Gathering, Relationship Development, and Patient Education and Counseling (PEC), with good internal reliability (.70-.87). Skills development patterns were described, ICE as a predictor of later performance was explored, and students' perspectives were elicited. RESULTS 140 (80%) medical students consented to include their data in this study. Overall communication scores increased over time (eta2 = .17, medium effect), mostly attributable to an increase in PEC skills (eta2 = .48, large effect), across four patterns. ICE and POM scores predicted future communication skills. Most students recognized the educational value of the ICE. CONCLUSION Entering medical students' clinical communication skills increase over time on average and may predict future performance. PRACTICE IMPLICATIONS Implementing an ICE is likely a valid strategy for monitoring progress and facilitating communication skills development.
Affiliation(s)
- Kathleen Hanley: Department of Medicine, New York University School of Medicine, New York, USA
- Colleen Gillespie: Department of Medicine, New York University School of Medicine, New York, USA; Institute for Innovations in Medical Education, New York University School of Medicine, New York, USA
- Sondra Zabar: Department of Medicine, New York University School of Medicine, New York, USA
- Jennifer Adams: Department of Medicine, New York University School of Medicine, New York, USA
- Adina Kalet: Department of Medicine, New York University School of Medicine, New York, USA
20
Yazbeck Karam V, Park YS, Tekian A, Youssef N. Evaluating the validity evidence of an OSCE: results from a new medical school. BMC Med Educ 2018;18:313. PMID: 30572876. PMCID: PMC6302424. DOI: 10.1186/s12909-018-1421-x.
Abstract
BACKGROUND To address the problems of traditional clinical evaluation, Harden introduced the Objective Structured Clinical Examination (OSCE) as a more valid and reliable assessment instrument. However, an essential condition for a high-quality, effective OSCE is evidence supporting the validity of its scores. This study examines the psychometric properties of OSCE scores, with an emphasis on consequential and internal structure validity evidence. METHODS Fifty-three first-year medical students took part in a summative OSCE at the Lebanese American University-School of Medicine. Evidence to support consequential validity was gathered using criterion-based standard setting methods. Internal structure validity evidence was gathered by examining various psychometric measures both at the station level and across the complete OSCE. RESULTS Compared with our existing method of computing results, the introduction of standard setting resulted in lower average student grades and a higher cut score. Across stations, Cronbach's alpha was moderately low. CONCLUSION Gathering consequential and internal structure validity evidence through multiple metrics provides support for or against the quality of an OSCE. It is critical that this analysis be performed routinely on local iterations of given tests and the results used to enhance the quality of assessment.
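The cross-station Cronbach's alpha this abstract reports is the standard internal-consistency statistic. A self-contained sketch of its computation, on a hypothetical examinee-by-station score matrix (not the study's data), is:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).

    scores: rows = examinees, columns = items (here, OSCE stations).
    """
    k = len(scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = sum(var([row[i] for row in scores]) for i in range(k))
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical scores: four examinees on three stations
demo = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [1, 2, 2]]
alpha = cronbach_alpha(demo)
```

Alpha rises when stations covary (examinees strong on one station tend to be strong on the others) and falls toward zero when station scores are unrelated, which is why a "moderately low" value across heterogeneous OSCE stations is common.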
Affiliation(s)
- Vanda Yazbeck Karam
- Lebanese American University-School of Medicine, P.O. Box: 113288, Zahar Street, Beirut, Lebanon
- Yoon Soo Park
- Department of Medical Education, University of Illinois, Chicago, USA
- Ara Tekian
- Department of Medical Education, University of Illinois, Chicago, USA
- Nazih Youssef
- Lebanese American University-School of Medicine, P.O. Box: 113288, Zahar Street, Beirut, Lebanon
21
Johnson J, Schwartz A, Lineberry M, Rehman F, Park YS. Development, administration, and validity evidence of a subspecialty preparatory test toward licensure: a pilot study. BMC Med Educ 2018; 18:176. [PMID: 30068394 PMCID: PMC6090864 DOI: 10.1186/s12909-018-1294-z] [Received: 01/17/2018] [Accepted: 07/25/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND Trainees in medical subspecialties lack validated assessment scores that can be used to prepare for their licensing examination. This paper presents the development, administration, and validity evidence of a constructed-response preparatory test (CRPT) administered to meet the needs of nephrology trainees. METHODS Learning objectives from the licensing examination were used to develop a test blueprint for the preparatory test. Messick's unified validity framework was used to gather validity evidence for content, response process, internal structure, relations to other variables, and consequences. Questionnaires were used to gather data on the trainees' perceptions of examination preparedness, item clarity, and curriculum adequacy. RESULTS Ten trainees and five faculty volunteers took the test. The majority of trainees passed the CRPT; however, many scored poorly on items assessing renal pathology and physiology knowledge. We gathered the following five sources of validity evidence: (1) Content: the CRPT mapped to the licensing examination blueprint, with items demonstrating clarity and a range of difficulty; (2) Response process: moderate rater agreement (intraclass correlation = .58); (3) Internal structure: sufficient reliability based on generalizability theory (G-coefficient = .76 and Φ-coefficient = .53); (4) Relations to other variables: CRPT scores reflected years of exposure to nephrology and clinical practice; (5) Consequences: a post-assessment survey revealed that none of the test takers felt "poorly prepared" for the upcoming summative examination and that they planned to study longer and adapt their content focus. CONCLUSIONS Preparatory tests using constructed-response items mapped to a licensure examination blueprint can be developed and used in local program settings to help prepare learners for subspecialty licensure examinations.
The CRPT and questionnaire data identified shortcomings of the nephrology training program curriculum. Following the preparatory test, trainees expressed an improved sense of preparedness for their licensing examination.
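The G- and Φ-coefficients reported under internal structure come from generalizability theory. A minimal sketch for the simplest one-facet (person × item) design is below; the variance components are hypothetical, since the paper as summarized here reports only the resulting coefficients. Note that Φ treats item difficulty variance as error while G does not, so Φ ≤ G always, consistent with the reported .53 versus .76.

```python
def g_coefficient(var_person, var_residual, n_items):
    """Relative (norm-referenced) reliability for a one-facet p x i design."""
    return var_person / (var_person + var_residual / n_items)

def phi_coefficient(var_person, var_item, var_residual, n_items):
    """Absolute (criterion-referenced) reliability: item variance counts as error."""
    return var_person / (var_person + (var_item + var_residual) / n_items)

# Hypothetical variance components for a 10-item test
g = g_coefficient(4.0, 6.0, 10)
phi = phi_coefficient(4.0, 2.0, 6.0, 10)
```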
Affiliation(s)
- John Johnson
- London Health Sciences Centre-University Hospital, Western University, 339 Windermere Road, London, ON N6A 5A5 Canada
- Alan Schwartz
- Medical Education, University of Illinois at Chicago, Chicago, IL USA
- Faisal Rehman
- London Health Sciences Centre-University Hospital, Western University, 339 Windermere Road, London, ON N6A 5A5 Canada
- Yoon Soo Park
- Medical Education, University of Illinois at Chicago, Chicago, IL USA
22
Till H, Ker J, Myford C, Stirling K, Mires G. Constructing and evaluating a validity argument for the final-year ward simulation exercise. Adv Health Sci Educ Theory Pract 2015; 20:1263-1289. [PMID: 25808311 DOI: 10.1007/s10459-015-9601-5] [Received: 03/20/2014] [Accepted: 03/13/2015] [Indexed: 06/04/2023]
Abstract
The authors report final-year ward simulation data from the University of Dundee Medical School. Faculty who designed this assessment intend for the final score to represent an individual senior medical student's level of clinical performance. The results are included in each student's portfolio as one source of evidence of the student's capability as a practitioner, professional, and scholar. Our purpose in conducting this study was to illustrate how assessment designers who are creating assessments to evaluate clinical performance might develop propositions and then collect and examine various sources of evidence to construct and evaluate a validity argument. The data were from all 154 medical students who were in their final year of study at the University of Dundee Medical School in the 2010-2011 academic year. To the best of our knowledge, this is the first report on an analysis of senior medical students' clinical performance while they were taking responsibility for the management of a simulated ward. Using multi-facet Rasch measurement and a generalizability theory approach, we examined various sources of validity evidence that the medical school faculty have gathered for a set of six propositions needed to support their use of scores as measures of students' clinical ability. Based on our analysis of the evidence, we would conclude that, by and large, the propositions appear to be sound, and the evidence seems to support their proposed score interpretation. Given the body of evidence collected thus far, their intended interpretation seems defensible.
Affiliation(s)
- Hettie Till
- Centre for Medical Education, School of Medicine, University of Dundee, Dundee, UK.
- 11 Van Riebeeck Street, Franschhoek, 7690, South Africa.
- Jean Ker
- School of Medicine, University of Dundee, Dundee, UK
- Carol Myford
- Department of Educational Psychology, College of Education, University of Illinois at Chicago, Chicago, IL, USA
- Kevin Stirling
- Clinical Skills Centre, School of Medicine, University of Dundee, Dundee, UK
- Gary Mires
- School of Medicine, University of Dundee, Dundee, UK
23
Pecorelli N, Fiore JF, Gillis C, Awasthi R, Mappin-Kasirer B, Niculiseanu P, Fried GM, Carli F, Feldman LS. The six-minute walk test as a measure of postoperative recovery after colorectal resection: further examination of its measurement properties. Surg Endosc 2015; 30:2199-206. [PMID: 26310528 DOI: 10.1007/s00464-015-4478-1] [Received: 03/27/2015] [Accepted: 07/28/2015] [Indexed: 12/21/2022]
Abstract
INTRODUCTION Patients, clinicians and researchers seek an easy, reproducible and valid measure of postoperative recovery. The six-minute walk test (6MWT) is a low-cost measure of physical function, which is a relevant dimension of recovery. The aim of the present study was to contribute further evidence for the validity of the 6MWT as a measure of postoperative recovery after colorectal surgery. METHODS This study involved a sample of 174 patients enrolled in three previous randomized controlled trials. Construct validity was assessed by testing the hypotheses that the distance walked in 6 min (6MWD) at 4 weeks after surgery is greater (1) in younger versus older patients, (2) in patients with higher preoperative physical status versus lower, (3) after laparoscopic versus open surgery, (4) in patients without postoperative complications versus with postoperative complications; and that 6MWD (5) correlates cross-sectionally with self-reported physical activity as measured with a questionnaire (CHAMPS). Statistical analysis was performed using linear regression and Spearman's correlation. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist was used to guide the formulation of hypotheses and reporting of results. RESULTS One hundred and fifty-one patients who completed the 6MWT at 4 weeks after surgery were included in the analysis. All hypotheses tested for construct validity were supported by the data. Older age, poorer physical status, open surgery and occurrence of postoperative complications were associated with clinically relevant reduction in 6MWD (>19 m). There was a moderate positive correlation between 6MWD and patient-reported physical activity (r = 0.46). CONCLUSIONS This study contributes further evidence for the construct validity of the 6MWT as a measure of postoperative recovery after colorectal surgery. 
Results from this study support the use of the 6MWT as an outcome measure in studies evaluating interventions aimed at improving postoperative recovery.
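The moderate 6MWD-to-CHAMPS correlation (r = 0.46) reported above is a rank correlation, robust to the skewed distributions typical of walk-distance and activity data. A self-contained sketch of Spearman's rho, computed as the Pearson correlation of tie-averaged ranks on made-up values (not the study's data), is:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of tie-averaged ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1  # extend over a run of tied values
            avg_rank = (i + j) / 2 + 1  # average rank across the tie
            for k in range(i, j + 1):
                r[order[k]] = avg_rank
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Made-up 6MWD (metres) and activity-questionnaire scores for six patients
six_mwd = [320, 410, 380, 450, 290, 500]
champs = [10, 12, 18, 20, 8, 25]
rho = spearman_rho(six_mwd, champs)
```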
Affiliation(s)
- Nicolò Pecorelli
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Ave, L9.309, Montreal, QC, H3G 1A4, Canada
- Julio F Fiore
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Ave, L9.309, Montreal, QC, H3G 1A4, Canada
- Chelsia Gillis
- Department of Anesthesia, McGill University Health Centre, Montreal, QC, Canada
- Rashami Awasthi
- Department of Anesthesia, McGill University Health Centre, Montreal, QC, Canada
- Benjamin Mappin-Kasirer
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Ave, L9.309, Montreal, QC, H3G 1A4, Canada
- Petru Niculiseanu
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Ave, L9.309, Montreal, QC, H3G 1A4, Canada
- Gerald M Fried
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Ave, L9.309, Montreal, QC, H3G 1A4, Canada; Department of Surgery, McGill University Health Centre, Montreal, QC, Canada
- Francesco Carli
- Department of Anesthesia, McGill University Health Centre, Montreal, QC, Canada
- Liane S Feldman
- Steinberg-Bernstein Centre for Minimally Invasive Surgery and Innovation, McGill University Health Centre, 1650 Cedar Ave, L9.309, Montreal, QC, H3G 1A4, Canada; Department of Surgery, McGill University Health Centre, Montreal, QC, Canada.