1
Stull SW, Mogle J, Bertz JW, Burgess-Hull AJ, Panlilio LV, Lanza ST, Preston KL, Epstein DH. Variability in intensively assessed mood: Systematic sources and factor structure in outpatients with opioid use disorder. Psychol Assess 2022; 34:966-977. PMID: 35980695; PMCID: PMC10066936; DOI: 10.1037/pas0001160.
Abstract
In intensive longitudinal studies using ecological momentary assessment, mood is typically assessed by repeatedly obtaining ratings for a large set of adjectives. Summarizing and analyzing these mood data can be problematic because the reliability and factor structure of such measures have rarely been evaluated in this context, which, unlike cross-sectional studies, captures between- and within-person processes. Our study examined how mood ratings (obtained thrice daily for 8 weeks; n = 306, person moments = 39,321) systematically vary and covary in outpatients receiving medication for opioid use disorder (MOUD). We used generalizability theory to quantify several aspects of reliability, and multilevel confirmatory factor analysis (MCFA) to detect factor structures within and across people. Generalizability analyses showed that the largest proportion of systematic variance across mood items was at the person level, followed by the person-by-day interaction and the (comparatively small) person-by-moment interaction for items reflecting low arousal. The best-fitting MCFA model had a three-factor structure both at the between- and within-person levels: positive mood, negative mood, and low-arousal states (with low arousal considered as either a separate factor or a subfactor of negative mood). We conclude that (a) mood varied more between days than between moments and (b) low arousal may be worth scoring and reporting separately from positive and negative mood states, at least in a MOUD population. Our three-factor structure differs from prior analyses of mood; more work is needed to understand the extent to which it generalizes to other populations. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
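The variance partitioning described in this abstract can be made concrete with a small simulation. The following is a minimal numpy sketch, not the authors' generalizability-theory analysis: all variance components and design sizes are assumed for illustration, and the person / person-by-day / moment components of a single simulated mood item are recovered by the method of moments.

```python
import numpy as np

rng = np.random.default_rng(0)
P, D, M = 300, 56, 3                        # persons, days, moments per day (8-week EMA-like design)
s2_person, s2_day, s2_mom = 1.0, 0.5, 0.25  # assumed true variance components

person = rng.normal(0, np.sqrt(s2_person), (P, 1, 1))
day    = rng.normal(0, np.sqrt(s2_day),    (P, D, 1))
noise  = rng.normal(0, np.sqrt(s2_mom),    (P, D, M))
y = person + day + noise                    # simulated ratings for one mood item

# Method-of-moments decomposition for a persons x days x moments design:
cell_var   = y.var(axis=2, ddof=1).mean()            # expectation: s2_mom
cell_means = y.mean(axis=2)
day_var    = cell_means.var(axis=1, ddof=1).mean()   # expectation: s2_day + s2_mom/M
pers_var   = cell_means.mean(axis=1).var(ddof=1)     # expectation: s2_person + s2_day/D + s2_mom/(D*M)

est_mom    = cell_var
est_day    = day_var - est_mom / M
est_person = pers_var - est_day / D - est_mom / (D * M)
print(round(est_person, 2), round(est_day, 2), round(est_mom, 2))
```

With these assumed components, the person-level share dominates and the moment-level share is smallest, mirroring the ordering the study reports.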
Affiliation(s)
- Samuel W. Stull
- Department of Biobehavioral Health, The Pennsylvania State University, University Park, PA, 16802, USA
- Intramural Research Program, National Institute on Drug Abuse, 251 Bayview Blvd., Suite 200, Baltimore, MD, 21224, United States
- Jacqueline Mogle
- The Edna Bennett Pierce Prevention Research Center, The Pennsylvania State University, University Park, PA, 16802, USA
- Jeremiah W. Bertz
- Intramural Research Program, National Institute on Drug Abuse, 251 Bayview Blvd., Suite 200, Baltimore, MD, 21224, United States
- Albert J. Burgess-Hull
- Intramural Research Program, National Institute on Drug Abuse, 251 Bayview Blvd., Suite 200, Baltimore, MD, 21224, United States
- Leigh V. Panlilio
- Intramural Research Program, National Institute on Drug Abuse, 251 Bayview Blvd., Suite 200, Baltimore, MD, 21224, United States
- Stephanie T. Lanza
- Department of Biobehavioral Health, The Pennsylvania State University, University Park, PA, 16802, USA
- The Edna Bennett Pierce Prevention Research Center, The Pennsylvania State University, University Park, PA, 16802, USA
- Kenzie L. Preston
- Intramural Research Program, National Institute on Drug Abuse, 251 Bayview Blvd., Suite 200, Baltimore, MD, 21224, United States
- David H. Epstein
- Intramural Research Program, National Institute on Drug Abuse, 251 Bayview Blvd., Suite 200, Baltimore, MD, 21224, United States
2
Aljaberi MA, Lee KH, Alareqe NA, Qasem MA, Alsalahi A, Abdallah AM, Noman S, Al-Tammemi AB, Mohamed Ibrahim MI, Lin CY. Rasch Modeling and Multilevel Confirmatory Factor Analysis for the Usability of the Impact of Event Scale-Revised (IES-R) during the COVID-19 Pandemic. Healthcare (Basel) 2022; 10:1858. PMID: 36292305; PMCID: PMC9602035; DOI: 10.3390/healthcare10101858.
Abstract
BACKGROUND Several instruments are currently used to assess Coronavirus Disease 2019 (COVID-19)-induced psychological distress, including the 22-item Impact of Event Scale-Revised (IES-R), a self-administered scale used to assess post-traumatic stress disorder (PTSD). The current study aimed to examine the construct validity of the IES-R, based on the Rasch model, with COVID-19-related data, and to test the multilevel construct validity of the IES-R within and among countries during the pandemic. METHODS A multi-country web-based cross-sectional survey was conducted using the 22-item IES-R. A total of 1020 participants enrolled in the survey, of whom 999 were included in the analyses. Data were analyzed using Rasch modeling and multilevel confirmatory factor analysis (MCFA). RESULTS Rasch modeling demonstrated that the IES-R functions satisfactorily with its five-point Likert scale and that all 22 items contribute meaningfully to assessing PTSD as a unidimensional construct. The MCFA confirmed that the 22-item IES-R, with its three factors (intrusion, avoidance, and hyperarousal), demonstrates adequate construct validity at the within- and among-country levels. However, model comparison using the Akaike information criterion (AIC) favored a 16-item version over the full 22-item IES-R. CONCLUSION The results suggest that the 22-item IES-R is a reliable screening instrument for measuring PTSD related to the COVID-19 pandemic and can be used to provide timely psychological health support, when needed, based on screening results.
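As a sketch of the polytomous Rasch machinery behind such an analysis: under the Andrich rating-scale model, each five-point response category gets a probability determined by the person's trait level, the item's difficulty, and a shared set of step thresholds. The code below is a generic illustration with assumed parameter values, not the paper's actual calibration of the IES-R.

```python
import numpy as np

def rsm_probs(theta, delta, taus):
    """Andrich rating-scale model: category probabilities for one Likert item.

    theta: person trait level; delta: item difficulty; taus: step thresholds
    (len(taus) + 1 response categories). All values here are assumed.
    """
    # Cumulative sums of (theta - delta - tau_k) give the category logits.
    logits = np.concatenate(([0.0], np.cumsum(theta - delta - np.asarray(taus))))
    p = np.exp(logits - logits.max())   # subtract the max for numerical stability
    return p / p.sum()

# A low-distress respondent favors low categories; a high-distress one, high categories.
low  = rsm_probs(theta=-2.0, delta=0.0, taus=[-1.5, -0.5, 0.5, 1.5])
high = rsm_probs(theta=2.0,  delta=0.0, taus=[-1.5, -0.5, 0.5, 1.5])
```

Fitting a full Rasch analysis additionally estimates the item and threshold parameters from data and checks item fit, which is what supports the unidimensionality claim in the abstract.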
Affiliation(s)
- Musheer A. Aljaberi
- Faculty of Medicine and Health Sciences, Taiz University, Taiz 6803, Yemen
- Department of Community Health, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Serdang 43300, Malaysia
- Faculty of Nursing and Applied Sciences, Lincoln University College, Petaling Jaya 47301, Malaysia
- Kuo-Hsin Lee
- Department of Emergency Medicine, E-Da Hospital, I-Shou University, No. 1, Yi-Da Road, Yanchao District, Kaohsiung City 824, Taiwan
- School of Medicine, I-Shou University, No. 8, Yi-Da Road, Jiao-Su Village, Yan-Chao District, Kaohsiung City 824, Taiwan
- Naser A. Alareqe
- Department of Educational Psychology, Faculty of Education, Taiz University, Taiz 6803, Yemen
- Mousa A. Qasem
- Department of Pharmaceutical Technology, Faculty of Pharmacy, University of Malaya, Kuala Lumpur 50603, Malaysia
- Abdulsamad Alsalahi
- Department of Pharmacology, Faculty of Pharmacy, Sana’a University, Mazbah District, Sana’a 1247, Yemen
- Atiyeh M. Abdallah
- Department of Biomedical Sciences, College of Health Sciences, QU-Health, Qatar University, Doha 2713, Qatar
- Sarah Noman
- Department of Community Health, Faculty of Medicine and Health Sciences, Universiti Putra Malaysia, Serdang 43300, Malaysia
- Ala’a B. Al-Tammemi
- Migration Health Division, International Organization for Migration (IOM), Amman 11953, Jordan
- Chung-Ying Lin
- Institute of Allied Health Sciences, College of Medicine, National Cheng Kung University, Tainan 701, Taiwan
3
Sideridis GD, Tsaousis I, Al-Sadaawi A. Assessing Construct Validity in Math Achievement: An Application of Multilevel Structural Equation Modeling (MSEM). Front Psychol 2018; 9:1451. PMID: 30233437; PMCID: PMC6134196; DOI: 10.3389/fpsyg.2018.01451.
Abstract
The purpose of the present study was to model math achievement at both the person and university levels of analysis in order to understand the optimal factor structure of math competency. Data involved 2,881 students who took a national mathematics examination as part of entry into the public university system in Saudi Arabia. Four factors from the national math examination comprised the math achievement measure: numbers and operations, algebra and analysis, geometry and measurement, and statistics and probabilities. Data were analyzed using the aggregate method and Multilevel Structural Equation Modeling (MSEM). Results indicated that a unidimensional and a 4-factor correlated model fit the aggregate data equally well; for reasons of parsimony, the unidimensional model was the preferred choice with these data. When clustering was modeled, results pointed to different factor structures at the person and university levels: a unidimensional model provided the best fit at the university level, whereas a four-factor correlated model was most descriptive of person-level data. The optimal simple structure was evaluated using the Ryu and West (2009) methodology for partially saturating the MSEM model and also met the criteria for discriminant validation described in Gorsuch (1983). Furthermore, a university-level variable, year of establishment, pointed to the superiority of older institutions with regard to math achievement. It is concluded that ignoring a multilevel structure in the data may lead to erroneous conclusions about the optimal factor structure and about subsequent tests of structural models.
Affiliation(s)
- Georgios D Sideridis
- Harvard Medical School, Boston Children's Hospital, Boston, MA, United States
- Department of Primary Education, National and Kapodistrian University of Athens, Athens, Greece
- Abdullah Al-Sadaawi
- Department of Psychology, King Saud University, Riyadh, Saudi Arabia
- National Center for Assessment in Higher Education, Riyadh, Saudi Arabia
4
Guenole N. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches. Front Psychol 2018; 9:255. PMID: 29551985; PMCID: PMC5841353; DOI: 10.3389/fpsyg.2018.00255.
Abstract
The test for item-level cluster bias examines the improvement in model fit that results from freeing an item's between-level residual variance, relative to a baseline model with equal within- and between-level factor loadings and between-level residual variances fixed at zero. A potential problem is that this approach may involve a misspecified unrestricted model if any non-invariance is present, yet the log-likelihood difference test requires that the unrestricted model be correctly specified. A free baseline approach, in which the unrestricted model includes only the restrictions needed for model identification, should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, the constrained baseline approach led to true positive (power) rates similar to the free baseline approach, but to much higher false positive (Type I error) rates; the free baseline approach should therefore be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both approaches, and the true positive rate was poor regardless of which baseline was used; neither approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring that the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, along with the R and short Python scripts used to execute this simulation study, are uploaded to an open-access repository.
Affiliation(s)
- Nigel Guenole
- Goldsmiths, University of London, London, United Kingdom
5
Abstract
Data often have a nested, multilevel structure, for example when data are collected from children in classrooms. Such data complicate the evaluation of reliability and measurement invariance, because several properties can be evaluated at both the individual level and the cluster level, as well as across levels. For example, cross-level invariance implies equal factor loadings across levels, which is needed to give latent variables at the two levels a similar interpretation. Reliability at a specific level refers to the ratio of true-score variance to total variance at that level. This paper aims to shed light on the relation between reliability, cross-level invariance, and strong factorial invariance across clusters in multilevel data. Specifically, we illustrate how strong factorial invariance across clusters implies cross-level invariance and perfect reliability at the between level in multilevel factor models.
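The level-specific variance ratio underlying this notion of reliability can be illustrated with the intraclass correlation, the share of total variance that lies at the cluster level. Below is a minimal numpy sketch (simulated classroom-like data with assumed variance components, not the paper's derivations) that estimates the ICC from one-way ANOVA mean squares.

```python
import numpy as np

rng = np.random.default_rng(1)
J, n = 200, 30                 # clusters (e.g., classrooms) and children per cluster
tau2, sigma2 = 0.4, 1.0        # assumed between- and within-cluster variances

y = rng.normal(0, np.sqrt(tau2), (J, 1)) + rng.normal(0, np.sqrt(sigma2), (J, n))

grand = y.mean()
msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (J - 1)          # between-cluster mean square
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (J * (n - 1))  # within-cluster mean square

tau2_hat = (msb - msw) / n
icc = tau2_hat / (tau2_hat + msw)   # proportion of variance at the cluster level
print(round(icc, 2))                # true ICC = 0.4 / 1.4, about 0.29
```

In a multilevel factor model the same logic applies per level: between-level reliability is the ratio of between-level true-score variance to total between-level variance, which the paper shows reaches 1 under strong factorial invariance across clusters.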
Affiliation(s)
- Suzanne Jak
- Research Institute of Child Development and Education, University of Amsterdam, Amsterdam, Netherlands
- Terrence D Jorgensen
- Research Institute of Child Development and Education, University of Amsterdam, Amsterdam, Netherlands
6
Wu JY, Lin JJH, Nian MW, Hsiao YC. A Solution to Modeling Multilevel Confirmatory Factor Analysis with Data Obtained from Complex Survey Sampling to Avoid Conflated Parameter Estimates. Front Psychol 2017; 8:1464. PMID: 29018369; PMCID: PMC5614970; DOI: 10.3389/fpsyg.2017.01464.
Abstract
The issue of equality between the between- and within-level structures in Multilevel Confirmatory Factor Analysis (MCFA) models is consequential for obtaining unbiased parameter estimates and valid statistical inferences. A commonly seen condition is inequality of factor loadings under otherwise equal level-varying structures. With mathematical investigation and Monte Carlo simulation, this study compared the robustness of five statistical models in analyzing complex survey measurement data with level-varying factor loadings: two model-based models (a true and a mis-specified model), one design-based model, and two maximum models (in which the full-rank variance-covariance matrix is estimated at the between and within level, respectively). Empirical data from 120 third graders (in 40 classrooms) on the perceived Harter competence scale were modeled using MCFA, and the parameter estimates were used as true parameters in the Monte Carlo simulation study. Results showed that the maximum models were robust to unequal factor loadings, while the design-based and mis-specified model-based approaches produced conflated results and spurious statistical inferences. We recommend the use of maximum models when researchers have limited information about the pattern of factor loadings and measurement structures. Measurement models are key components of Structural Equation Modeling (SEM); therefore, the findings generalize to multilevel SEM and CFA models. Mplus code is provided for the maximum models and the other analytical models.
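The conflation at issue arises because the sample between-cluster covariance matrix mixes within- and between-level structure. Below is a minimal numpy sketch of Muthén's covariance decomposition (pooled within-cluster covariance S_PW and scaled between covariance S_B) under a balanced design with assumed loadings; it is a simplified illustration of the decomposition MCFA builds on, not the authors' Mplus models.

```python
import numpy as np

rng = np.random.default_rng(2)
J, n, p = 500, 20, 4                     # clusters, cluster size, items
lw = np.array([1.0, 0.8, 0.9, 0.7])     # assumed within-level loadings
lb = np.array([1.0, 0.6, 0.8, 0.5])     # assumed between-level loadings
Sigma_W = np.outer(lw, lw) + 0.5 * np.eye(p)        # within-level implied covariance
Sigma_B = 0.3 * np.outer(lb, lb) + 0.1 * np.eye(p)  # between-level implied covariance

eta_b = rng.multivariate_normal(np.zeros(p), Sigma_B, J)                 # cluster parts
y = eta_b[:, None, :] + rng.multivariate_normal(np.zeros(p), Sigma_W, (J, n))

ybar_j = y.mean(axis=1)                                                  # cluster means
dev_w = (y - ybar_j[:, None, :]).reshape(-1, p)
S_PW = dev_w.T @ dev_w / (J * n - J)                 # pooled within-cluster covariance
dev_b = ybar_j - y.reshape(-1, p).mean(axis=0)
S_B = n * (dev_b.T @ dev_b) / (J - 1)                # scaled between covariance

# Balanced case: E[S_B] = Sigma_W + n * Sigma_B, so Sigma_B is recovered as:
Sigma_B_hat = (S_B - S_PW) / n
```

S_PW estimates only Sigma_W, while S_B mixes both levels; fitting a between-level model directly to cluster means (as a naive aggregate analysis would) therefore conflates the two structures.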
Affiliation(s)
- Jiun-Yu Wu
- Institute of Education, National Chiao Tung University, Hsinchu, Taiwan
- John J. H. Lin
- Office of Institutional Research, National Central University, Taoyuan, Taiwan
- Mei-Wen Nian
- Institute of Education, National Chiao Tung University, Hsinchu, Taiwan
- Yi-Cheng Hsiao
- Institute of Education, National Chiao Tung University, Hsinchu, Taiwan
7
Abstract
We provide reporting guidelines for multilevel factor analysis (MFA) and use these guidelines to systematically review 72 MFA applications in journals across a range of disciplines (e.g., education, health/nursing, management, and psychology) published between 1994 and 2014. Results are organized in terms of the (a) characteristics of the MFA application (e.g., construct measured), (b) purpose (e.g., measurement validation), (c) data source (e.g., number of cases at Level 1 and Level 2), (d) statistical approach (e.g., maximum likelihood), and (e) results reported (e.g., intraclass correlations for indicators and latent variables, standardized factor loadings, fit indices). Results from this review have implications for applied researchers interested in expanding their approaches to psychometric analyses and construct validation within a multilevel framework and for methodologists using Monte Carlo methods to explore technical and methodological issues grounded in realistic research design conditions.
8
Can S, van de Schoot R, Hox J. Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations. Educ Psychol Meas 2015; 75:406-427. PMID: 29795827; PMCID: PMC5965642; DOI: 10.1177/0013164414547959.
Abstract
Because variables in the social and behavioral sciences are often correlated, multicollinearity can be problematic. This study investigates, by Monte Carlo simulation, the effect of collinearity manipulated at the within and between levels of a two-level confirmatory factor analysis. It also investigates how the size of the intraclass correlation coefficient (ICC) and the estimation method (maximum likelihood with robust chi-squares and standard errors, or Bayesian estimation) influence the convergence rate. The other outcomes of interest were the rate of inadmissible solutions and the relative parameter and standard error bias at the between level. The results showed that inadmissible solutions were obtained when there was between-level collinearity and the estimation method was maximum likelihood. In the within-level multicollinearity condition, all solutions were admissible, but the bias values were higher than in the between-level collinearity condition. Bayesian estimation was robust in producing admissible parameters, but its relative bias was higher than that of maximum likelihood estimation. Finally, as expected, high ICC produced less biased results than medium ICC conditions.
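The core mechanism, collinearity destabilizing estimates, can be shown in miniature. The sketch below is a deliberately simplified single-level regression analogue, not the paper's two-level CFA simulation: as the correlation between two predictors grows, the Monte Carlo variability of their estimated coefficients inflates, which is the same instability that surfaces as inadmissible solutions in the latent-variable setting.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 500
beta = np.array([1.0, 1.0])           # assumed true coefficients

def slope_sd(rho):
    """Monte Carlo SD of the first OLS slope when the two predictors correlate rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    estimates = []
    for _ in range(reps):
        X = rng.multivariate_normal([0.0, 0.0], cov, n)
        y = X @ beta + rng.normal(0.0, 1.0, n)
        estimates.append(np.linalg.lstsq(X, y, rcond=None)[0][0])
    return float(np.std(estimates))

sd_low, sd_high = slope_sd(0.2), slope_sd(0.9)   # sd_high is much larger than sd_low
```

The theoretical sampling variance of a slope scales with 1 / (1 - rho^2), so the rho = 0.9 condition roughly doubles the standard error relative to rho = 0.2.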
Affiliation(s)
- Seda Can
- İzmir University of Economics, İzmir, Turkey
- Joop Hox
- Utrecht University, Utrecht, Netherlands