1
Anselmi P, Colledani D, Andreotti A, Robusto E, Fabbris L, Vian P, Genetti B, Mortali C, Minutillo A, Mastrobattista L, Pacifici R. An Item Response Theory-Based Scoring of the South Oaks Gambling Screen-Revised Adolescents. Assessment 2021; 29:1381-1391. [PMID: 34036842 DOI: 10.1177/10731911211017657] [Citation(s) in RCA: 1]
Abstract
The South Oaks Gambling Screen-Revised Adolescent (SOGS-RA) is one of the most widely used screening tools for problem gambling among adolescents. In this study, item response theory was used to compute measures of problem gambling severity that take into account how much information the endorsed items provide about the presence of problem gambling. A zero-inflated mixture two-parameter logistic model was estimated on the responses of 4,404 adolescents to the SOGS-RA to compute the difficulty and discrimination of each item and the problem gambling severity level (θ score) of each respondent. Receiver operating characteristic curve analysis was used to identify the cutoff on the θ scores that best distinguished daily from nondaily gamblers. This cutoff outperformed the common cutoff defined on the sum scores in identifying daily gamblers but fell behind it in identifying nondaily gamblers. When screening adolescents for further investigation, the cutoff on the θ scores should be preferred to that on the sum scores.
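The cutoff-selection step described in this abstract can be sketched in a few lines. The snippet below is an illustration on synthetic data, not the authors' code: the function name `youden_cutoff` is hypothetical, and it assumes the common choice of maximizing Youden's J (sensitivity + specificity - 1) over candidate θ cutoffs.

```python
import numpy as np

def youden_cutoff(theta, is_daily):
    """Return the theta cutoff maximizing Youden's J = sensitivity + specificity - 1.

    `theta` holds severity scores; `is_daily` holds boolean group labels.
    Illustrative sketch only; the paper's exact ROC procedure may differ.
    """
    theta = np.asarray(theta, dtype=float)
    is_daily = np.asarray(is_daily, dtype=bool)
    best_j, best_cut = -np.inf, None
    for cut in np.unique(theta):           # each observed score is a candidate cutoff
        pred = theta >= cut                # classify as daily gambler at/above cutoff
        sens = pred[is_daily].mean()       # true positive rate
        spec = (~pred[~is_daily]).mean()   # true negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j
```

With perfectly separated synthetic scores, e.g. `youden_cutoff([-1, 0, 0.5, 1.0, 1.5, 2.0], [False, False, False, True, True, True])`, the selected cutoff is 1.0 with J = 1.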
2
Vidotto G, Anselmi P, Robusto E. New Perspectives in Computing the Point of Subjective Equality Using Rasch Models. Front Psychol 2020; 10:2793. [PMID: 31920838 PMCID: PMC6927926 DOI: 10.3389/fpsyg.2019.02793] [Citation(s) in RCA: 0]
Abstract
In psychophysics, the point of subjective equality (PSE) is any of the points along a stimulus dimension at which a variable stimulus (visual, tactile, auditory, and so on) is judged by an observer to be equal to a standard stimulus. Rasch models have been found to offer a valid solution for computing the PSE when the method of constant stimuli is applied in the version of the method of transitions. The present work provides an overview of the procedures for computing the PSE using Rasch models and proposes some new developments. An adaptive procedure is described that allows the PSE of an observer to be estimated without presenting him/her with all stimulus pairs. This procedure can be particularly useful in situations in which the psychophysical conditions of the individuals require that the number of trials be limited. Moreover, it saves time that can be used to scrutinize the results of the experiment or to run other experiments. The possibility of using Rasch-based fit statistics to identify observers who gave unexpected judgments is also explored. These could be individuals who, instead of carefully evaluating the presented stimulus pairs, gave random, inattentive, or careless responses, or gave the same response to many consecutive stimulus pairs. Alternatively, they could be atypical and clinically relevant individuals who deserve further investigation. The aforementioned developments are implemented using procedures and statistics that are well established in the framework of Rasch models. In particular, computerized adaptive testing procedures are used to efficiently estimate the PSE of the observers, whereas infit and outfit mean-square statistics are used to detect observers who gave unexpected judgments. Results of analyses carried out on simulated data sets suggest that the proposed developments can be used in psychophysical experiments.
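For a dichotomous Rasch model, the infit and outfit mean-square statistics mentioned above have simple closed forms: outfit is the unweighted mean of the squared standardized residuals, and infit is the information-weighted version. A minimal sketch, assuming known item difficulties and a θ estimate for the observer (the function name and inputs are illustrative, not from the paper):

```python
import numpy as np

def rasch_fit_stats(responses, theta, difficulties):
    """Infit/outfit mean-square statistics for one observer under a
    dichotomous Rasch model.

    `responses` are 0/1 judgments to stimulus pairs with known
    `difficulties`; `theta` is the observer's estimate. Sketch only.
    """
    x = np.asarray(responses, dtype=float)
    b = np.asarray(difficulties, dtype=float)
    p = 1.0 / (1.0 + np.exp(-(theta - b)))   # Rasch success probability
    w = p * (1.0 - p)                        # response variance (information)
    z2 = (x - p) ** 2 / w                    # squared standardized residuals
    outfit = z2.mean()                       # unweighted mean square
    infit = ((x - p) ** 2).sum() / w.sum()   # information-weighted mean square
    return infit, outfit
```

Values near 1 indicate model-consistent responding; values well above 1 flag unexpected judgments such as random, inattentive, or careless responses.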
Affiliation(s)
- Giulio Vidotto
- Department of General Psychology, University of Padua, Padova, Italy
- Pasquale Anselmi
- Department of Philosophy, Sociology, Education and Applied Psychology, University of Padua, Padova, Italy
- Egidio Robusto
- Department of Philosophy, Sociology, Education and Applied Psychology, University of Padua, Padova, Italy
3
Anselmi P, Colledani D, Robusto E. A Comparison of Classical and Modern Measures of Internal Consistency. Front Psychol 2019; 10:2714. [PMID: 31866905 PMCID: PMC6904350 DOI: 10.3389/fpsyg.2019.02714] [Citation(s) in RCA: 29]
Abstract
Three measures of internal consistency - Kuder-Richardson Formula 20 (KR20), Cronbach's alpha (α), and person separation reliability (R) - are considered. KR20 and α are common measures in classical test theory, whereas R was developed in modern test theory and, more precisely, in Rasch measurement. All three measures specify the observed variance as the sum of true variance and error variance, but they differ in the way these quantities are obtained. KR20 uses the error variance of an "average" respondent from the sample, which overestimates the error variance of respondents with high or low scores. Conversely, R uses the actual average error variance of the sample. KR20 and α use respondents' test scores in calculating the observed variance. This is potentially misleading because test scores are not linear representations of the underlying variable, whereas the calculation of variance requires linearity. By contrast, if the data fit the Rasch model, the measures estimated for each respondent are on a linear scale and thus numerically suitable for calculating the observed variance. Given these differences, R is expected to be a better index of internal consistency than KR20 and α. The present work compares the three measures on simulated data sets with dichotomous and polytomous items. It is shown that all the estimates of internal consistency decrease as the skewness of the score distribution increases, with R decreasing to a larger extent. Thus, R is more conservative than KR20 and α, and prevents test users from believing a test has better measurement characteristics than it actually has. In addition, it is shown that Rasch-based infit and outfit person statistics can be used to handle data sets with random responses. Two options are described. The first entails computing a more conservative estimate of internal consistency. The second entails detecting individuals with random responses. When there are a few individuals with a substantial number of random responses, infit and outfit allow almost all of them to be correctly detected. Once these individuals are removed, a "cleaned" data set is obtained that can be used to compute a less biased estimate of internal consistency.
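As a point of reference for the comparison above, the two classical coefficients can be computed directly from a respondents-by-items score matrix. The sketch below assumes dichotomous 0/1 data, for which KR20 and α coincide (the function names are illustrative); the Rasch-based R is omitted because it requires estimated person measures and their standard errors.

```python
import numpy as np

def kr20(data):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item scores.

    `data` is a respondents x items array. Population variances (ddof=0)
    are used throughout. Illustrative sketch only.
    """
    data = np.asarray(data, dtype=float)
    k = data.shape[1]
    p = data.mean(axis=0)                 # proportion endorsing each item
    item_var = (p * (1 - p)).sum()        # sum of item variances p*q
    total_var = data.sum(axis=1).var()    # variance of the sum scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def cronbach_alpha(data):
    """Cronbach's alpha; coincides with KR20 when items are dichotomous."""
    data = np.asarray(data, dtype=float)
    k = data.shape[1]
    item_var = data.var(axis=0).sum()     # sum of observed item variances
    total_var = data.sum(axis=1).var()    # variance of the sum scores
    return (k / (k - 1)) * (1 - item_var / total_var)
```

For example, on the 4 x 3 matrix `[[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]` both functions return 0.75.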
Affiliation(s)
- Pasquale Anselmi
- Department of Philosophy, Sociology, Education and Applied Psychology, University of Padua, Padua, Italy
4
Colledani D, Anselmi P, Robusto E. Development of a new abbreviated form of the Eysenck Personality Questionnaire-Revised with multidimensional item response theory. Pers Individ Dif 2019. [DOI: 10.1016/j.paid.2019.05.044] [Citation(s) in RCA: 9]
5
van der Meulen MW, Smirnova A, Heeneman S, Oude Egbrink MGA, van der Vleuten CPM, Lombarts KMJMH. Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review. Acad Med 2019; 94:1384-1397. [PMID: 31460937 DOI: 10.1097/acm.0000000000002767] [Citation(s) in RCA: 3]
Abstract
PURPOSE: To collect and examine, using an argument-based validity approach, validity evidence of questionnaire-based tools used to assess physicians' clinical, teaching, and research performance.
METHOD: In October 2016, the authors conducted a systematic search of the literature for articles, published from inception to October 2016, about questionnaire-based tools for assessing physicians' professional performance. They included studies reporting validity evidence of tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they extracted data on the four inferences in the validity argument: scoring, generalization, extrapolation, and implications.
RESULTS: They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on the generalization and extrapolation inferences. Scoring evidence showed mixed results. Evidence on implications was generally missing.
CONCLUSIONS: Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning the implications of questionnaire-based tools is mostly lacking, weakening the argument for using these tools for formative and, especially, summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to support decisions based on these tools, particularly high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.
Affiliation(s)
- Mirja W van der Meulen
- M.W. van der Meulen is PhD candidate, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-3636-5469.
- A. Smirnova is PhD graduate and researcher, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands, and member, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0003-4491-3007.
- S. Heeneman is professor, Department of Pathology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-6103-8075.
- M.G.A. oude Egbrink is professor, Department of Physiology, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0002-5530-6598.
- C.P.M. van der Vleuten is professor, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, School of Health Professions Education, Maastricht University, Maastricht, The Netherlands; ORCID: https://orcid.org/0000-0001-6802-3119.
- K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Medical Psychology, Academic Medical Center, University of Amsterdam, Amsterdam, The Netherlands; ORCID: https://orcid.org/0000-0001-6167-0620
6
Rossi Ferrario S, Panzeri A, Anselmi P, Vidotto G. Development and psychometric properties of a short form of the Illness Denial Questionnaire. Psychol Res Behav Manag 2019; 12:727-739. [PMID: 31686929 PMCID: PMC6709814 DOI: 10.2147/prbm.s207622] [Citation(s) in RCA: 17]
Abstract
BACKGROUND: Coping with chronic illness can be overwhelming for patients and caregivers and may be inhibited by denial mechanisms; denial therefore represents a critical issue for health professionals. Assessing illness denial is far from easy, and brief tools suitable for medical settings are lacking. This paper presents the development of a short form of the Illness Denial Questionnaire (IDQ) for patients and caregivers.
METHODS: In Study 1, the IDQ was administered to 118 patients and 83 caregivers to examine the internal structure of denial; the properties of the items (differential item functioning, fit, and difficulty) were then evaluated according to the Rasch model in order to select the best items for the Illness Denial Questionnaire-Short Form (IDQ-SF). Study 2 included 202 participants (113 patients and 89 caregivers). The internal structure of the IDQ-SF was tested via confirmatory factor analysis (CFA). Reliability and concurrent validity were also studied using the Anxiety and Depression Questionnaire-Reduced Form (AD-R).
RESULTS: The CFA showed a two-factor structure encompassing "Denial of negative emotions" and "Resistance to change". The Rasch analyses led to the selection of 4 items per dimension. The resulting 8-item IDQ-SF showed a two-factor structure as well as good reliability and concurrent validity with the AD-R.
CONCLUSION: The IDQ-SF is a valid tool for quickly evaluating the core of illness denial in patients and caregivers. This brief, easily administered questionnaire allows health professionals to outline the presence and severity of illness denial in order to set individually tailored interventions.
Affiliation(s)
- Silvia Rossi Ferrario
- Psychology and Neuropsychology Unit, Istituti Clinici Scientifici Maugeri, Veruno, Italy
- Anna Panzeri
- Psychology and Neuropsychology Unit, Istituti Clinici Scientifici Maugeri, Veruno, Italy
- Department of General Psychology, University of Padova, Padova, Italy
- Pasquale Anselmi
- Department of Philosophy, Sociology, Education and Applied Psychology, University of Padova, Padova, Italy
- Giulio Vidotto
- Department of General Psychology, University of Padova, Padova, Italy
7
Colledani D, Anselmi P, Robusto E. Using multidimensional item response theory to develop an abbreviated form of the Italian version of Eysenck's IVE questionnaire. Pers Individ Dif 2019. [DOI: 10.1016/j.paid.2019.01.032] [Citation(s) in RCA: 6]
8
Colledani D, Anselmi P, Robusto E. Using Item Response Theory for the Development of a New Short Form of the Eysenck Personality Questionnaire-Revised. Front Psychol 2018; 9:1834. [PMID: 30356840 PMCID: PMC6190847 DOI: 10.3389/fpsyg.2018.01834] [Citation(s) in RCA: 20]
Abstract
The present work develops a new version of the short form of the Eysenck Personality Questionnaire-Revised, which includes Psychoticism, Extraversion, Neuroticism, and Lie scales (48 items, 12 per scale). The work consists of two studies. In the first, an item response theory model was estimated on the responses of 590 individuals to the full-length version of the questionnaire (100 items). The analyses allowed the selection of 48 items with good discrimination, distributed along the latent continuum of each trait, and free of misfit and differential item functioning. In the second study, the functioning of the new form of the questionnaire was evaluated in a different sample of 300 individuals. Results of the two studies show that the reliability of the four scales is better than, or equal to, that of the original forms. The new version outperforms the original one in approximating scores of the full-length questionnaire. Moreover, convergent validity coefficients and relations with clinical constructs were consistent with the literature.
Affiliation(s)
- Daiana Colledani
- Department of Philosophy, Sociology, Education and Applied Psychology, School of Psychology, University of Padova, Padova, Italy
9
Vidotto G, Anselmi P, Filipponi L, Tommasi M, Saggino A. Using Overt and Covert Items in Self-Report Personality Tests: Susceptibility to Faking and Identifiability of Possible Fakers. Front Psychol 2018; 9:1100. [PMID: 30018582 PMCID: PMC6037895 DOI: 10.3389/fpsyg.2018.01100] [Citation(s) in RCA: 10]
Abstract
Self-report personality tests, widely used in clinical, medical, forensic, and organizational areas of psychological assessment, are susceptible to faking. Several approaches have been developed to prevent or detect faking, based on the use of faking warnings, ipsative items, social desirability scales, and validity scales. The approach proposed in this work relies on overt items (whose construct is clear to test-takers) and covert items (whose construct is obscure to test-takers). Covert items are expected to be more resistant to faking than overt items. Two hundred sixty-seven individuals were presented with an alexithymia scale under two experimental conditions. Respondents in the faking condition were asked to reproduce the profile of an alexithymic individual, whereas those in the sincere condition were not asked to exhibit a particular alexithymia profile. The items of the scale were categorized as overt or covert by expert psychotherapists and analyzed with Rasch models. Respondents in the faking condition were able to exhibit measures of alexithymia in the required direction. This occurred for both overt and covert items, but to a greater extent for overt items. Unlike overt items, covert items defined a latent variable whose meaning was shared between respondents in the sincere and faking conditions and was resistant to deliberate distortion. Rasch fit statistics indicated unexpected responses more often for respondents in the faking condition than for those in the sincere condition, particularly for their responses to overt items. More than half of the respondents in the faking condition showed a drift rate (the difference between the alexithymia levels estimated from responses to overt and covert items) significantly larger than that observed among respondents in the sincere condition.
Affiliation(s)
- Giulio Vidotto
- Department of General Psychology, School of Psychology, University of Padova, Padova, Italy
- Pasquale Anselmi
- Department of Philosophy, Sociology, Education and Applied Psychology, School of Psychology, University of Padova, Padova, Italy
- Luca Filipponi
- Department of Developmental Psychology and Socialization, School of Psychology, University of Padova, Padova, Italy
- Marco Tommasi
- Department of Psychological, Humanistic and Territorial Sciences, Università degli Studi “G. d’Annunzio” Chieti-Pescara, Chieti, Italy
- Aristide Saggino
- Department of Psychological, Humanistic and Territorial Sciences, Università degli Studi “G. d’Annunzio” Chieti-Pescara, Chieti, Italy
10
Da Dalt L, Anselmi P, Furlan S, Carraro S, Baraldi E, Robusto E, Perilongo G. An evaluation system for postgraduate pediatric residency programs: report of a 3-year experience. Eur J Pediatr 2017; 176:1279-1283. [PMID: 28762071 PMCID: PMC5563329 DOI: 10.1007/s00431-017-2967-z] [Citation(s) in RCA: 6]
Abstract
The way a postgraduate medical training program is organized and the capacity of faculty members to function as tutors and to organize effective professional experiences are among the elements that affect the quality of training. An evaluation system designed to target these elements has been implemented within the framework of the Pediatric Residency Program of the University of Padua (Italy). The aim of this report is to describe the experience gained in the first 3 years of implementation of the system (2013-2015). Data were collected using four validated questionnaires: the "Resident Assessment Questionnaire", the "Tutor-Assessment Questionnaire", the "Rotation-Assessment Questionnaire", and the "Resident Affairs Committee-Assessment Questionnaire". The response rate was 72% for the "Resident Assessment Questionnaire", 78% for the "Tutor-/Rotation-Assessment Questionnaires", and 84% for the "Resident Affairs Committee-Assessment Questionnaire". The collected scores were validated by psychometric tests.
CONCLUSION: The high rate of completed questionnaires returned and the psychometric validation of the collected results indicate that the evaluation system reported herein can be effectively implemented. Efforts should be made to refine this system and, more importantly, to document its impact on improving the Pediatric Residency Program.
What is known: • The elements that influence the quality of postgraduate training programs and the knowledge, performance, and competences of residents must be regularly assessed. • Comprehensive evaluation systems for postgraduate residency programs are not universally implemented, partly because common guidelines and rules, well-equipped infrastructures, and financial resources are often missing.
What is new: • We show the feasibility of implementing an evaluation system that targets some of the key elements of a postgraduate medical training program in Italy, a European country in which the regulations governing training programs and, notably, the evaluation of residents are still being developed.
Affiliation(s)
- Liviana Da Dalt
- Pediatric Residency Program, Department of Woman's and Child's Health, University of Padua, Via Giustiniani 3, 35128, Padua, Italy.
- Pasquale Anselmi
- Department of Philosophy, Sociology, Educational Studies and Applied Psychology, University of Padua, Padua, Italy
- Sara Furlan
- Department of Philosophy, Sociology, Educational Studies and Applied Psychology, University of Padua, Padua, Italy
- Silvia Carraro
- Pediatric Residency Program, Department of Woman’s and Child’s Health, University of Padua, Via Giustiniani 3, 35128 Padua, Italy
- Eugenio Baraldi
- Pediatric Residency Program, Department of Woman’s and Child’s Health, University of Padua, Via Giustiniani 3, 35128 Padua, Italy
- Egidio Robusto
- Department of Philosophy, Sociology, Educational Studies and Applied Psychology, University of Padua, Padua, Italy
- Giorgio Perilongo
- Pediatric Residency Program, Department of Woman’s and Child’s Health, University of Padua, Via Giustiniani 3, 35128 Padua, Italy