1. A Psychometric Network Analysis of CHC Intelligence Measures: Implications for Research, Theory, and Interpretation of Broad CHC Scores "Beyond g". J Intell 2023;11(1):19. PMID: 36662149. PMCID: PMC9865475. DOI: 10.3390/jintelligence11010019.
Abstract
For over a century, the structure of intelligence has been dominated by factor analytic methods that presume tests are indicators of latent entities (e.g., general intelligence or g). Recently, psychometric network methods and theories (e.g., process overlap theory; dynamic mutualism) have provided alternatives to g-centric factor models. However, few studies have investigated contemporary cognitive measures using network methods. We apply a Gaussian graphical network model to the age 9-19 standardization sample of the Woodcock-Johnson Tests of Cognitive Ability-Fourth Edition. Results support the primary broad abilities from the Cattell-Horn-Carroll (CHC) theory and suggest that the working memory-attentional control complex may be central to understanding a CHC network model of intelligence. Supplementary multidimensional scaling analyses indicate the existence of possible higher-order dimensions (PPIK; triadic theory; System I-II cognitive processing) as well as separate learning and retrieval aspects of long-term memory. Overall, the network approach offers a viable alternative to factor models with a g-centric bias (i.e., bifactor models) that have led to erroneous conclusions regarding the utility of broad CHC scores in test interpretation beyond the full-scale IQ, g.
2. Decker SL. Don't Use a Bifactor Model Unless You Believe the True Structure Is Bifactor. Journal of Psychoeducational Assessment 2020. DOI: 10.1177/0734282920977718.
Abstract
The current article responds to concerns raised by Dombrowski, McGill, Canivez, Watkins, and Beaujean (2020) regarding the methodological confounds identified by Decker, Bridges, Luedke, and Eason (2020) in the use of a bifactor (BF) model and Schmid-Leiman (SL) procedure in previous studies supporting a general factor of intelligence (i.e., "g"). While Dombrowski et al. (2020) raised important theoretical and practical issues, the theoretical justification for using a BF model and SL procedure to identify cognitive dimensions remains unaddressed, as do significant concerns about using these statistical methods as a basis for informing the use of cognitive tests in clinical applications.
Affiliation(s)
- Scott L. Decker: Department of Psychology, University of South Carolina, SC, USA
3. Dombrowski SC, McGill RJ, Canivez GL, Watkins MW, Beaujean AA. Factor Analysis and Variance Partitioning in Intelligence Test Research: Clarifying Misconceptions. Journal of Psychoeducational Assessment 2020. DOI: 10.1177/0734282920961952.
Abstract
This article addresses conceptual and methodological shortcomings regarding the conduct and interpretation of intelligence test factor analytic research that appeared in Decker, S. L., Bridges, R. M., Luedke, J. C., & Eason, M. J. (2020), "Dimensional evaluation of cognitive measures: Methodological confounds and theoretical concerns," Journal of Psychoeducational Assessment, advance online publication.
4. Decker SL, Bridges RM, Luedke JC, Eason MJ. Dimensional Evaluation of Cognitive Measures: Methodological Confounds and Theoretical Concerns. Journal of Psychoeducational Assessment 2020. DOI: 10.1177/0734282920940879.
Abstract
The current study provides a methodological review of studies supporting a general factor of intelligence as the primary model for contemporary measures of cognitive abilities. The review is supplemented by an empirical comparison of statistical estimates obtained using different approaches in a large sample of children (ages 9-13 years, N = 780) administered a comprehensive battery of cognitive measures. Results demonstrate the ramifications of the bifactor and Schmid-Leiman (BF/SL) techniques and suggest that using BF/SL methods limits interpretation of cognitive abilities to only a general factor. The inadvertent use of BF/SL methods is shown to affect both model dimensionality and variance estimates for specific measures. As demonstrated in this study, conclusions from both exploratory and confirmatory studies using BF/SL methods are called into question, especially for studies with a questionable theoretical basis. Guidelines for the interpretation of cognitive test scores in applied practice are discussed.
5. Clements CC, Watkins MW, Schultz RT, Yerys BE. Does the Factor Structure of IQ Differ Between the Differential Ability Scales (DAS-II) Normative Sample and Autistic Children? Autism Res 2020;13:1184-1194. PMID: 32112626. DOI: 10.1002/aur.2285.
Abstract
The Differential Ability Scales, Second Edition (DAS-II) is frequently used to assess intelligence in autism spectrum disorder (ASD). However, it remains unknown whether the DAS-II measurement model (e.g., factor structure, loadings), which was developed on a normative sample, holds for the autistic population or requires alternative score interpretations. We obtained DAS-II data from 1,316 autistic individuals in the Simons Simplex Consortium and 2,400 individuals in the normative data set. We combined the ASD and normative data sets for multigroup confirmatory factor analyses to assess different levels of measurement invariance, that is, how well the same measurement model fit both data sets: "weak" (metric), "strong" (scalar), and partial scalar if full scalar was not achieved. A weak invariance model showed excellent fit (Comparative Fit Index [CFI] > 0.995, Tucker-Lewis Index [TLI] > 0.995, root mean square error of approximation [RMSEA] < 0.025), but a strong invariance model demonstrated a significant deterioration in fit under permutation testing (all ps < .001), suggesting measurement bias, meaning systematic error when assessing autistic children. Fit improved significantly, and partial scalar invariance was achieved, when either of the two spatial subtests' (Recall of Designs or Pattern Construction) intercepts was permitted to vary between the ASD and normative groups, pinpointing these subtests as the source of bias. The DAS-II thus appears to measure verbal and nonverbal, but not spatial, intelligence in autistic children in the same way as in normative-sample children. These results may be driven by Pattern Construction, which shows higher scores than other subtests in the ASD sample. Clinicians assessing autistic children with the DAS-II should interpret the verbal and nonverbal reasoning composite scores rather than the spatial score or General Conceptual Ability.
LAY SUMMARY: The Differential Ability Scales, Second Edition (DAS-II) is a popular intelligence quotient (IQ) test for assessing children with autism. This article shows that DAS-II spatial standardized scores should be interpreted with caution because they hold a different meaning for autistic children. Verbal and nonverbal reasoning scores appear valid and hold the same meaning for those with and without autism spectrum disorder.
Affiliation(s)
- Caitlin C Clements: The Children's Hospital of Philadelphia, Center for Autism Research, Philadelphia, Pennsylvania; Psychology Department, University of Pennsylvania, Philadelphia, Pennsylvania
- Marley W Watkins: Department of Educational Psychology, Baylor University, Waco, Texas
- Robert T Schultz: The Children's Hospital of Philadelphia, Center for Autism Research, Philadelphia, Pennsylvania; Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Benjamin E Yerys: The Children's Hospital of Philadelphia, Center for Autism Research, Philadelphia, Pennsylvania
6. Dombrowski SC, McGill RJ, Morgan GB. Monte Carlo Modeling of Contemporary Intelligence Test (IQ) Factor Structure: Implications for IQ Assessment, Interpretation, and Theory. Assessment 2019;28:977-993. PMID: 31431055. DOI: 10.1177/1073191119869828.
Abstract
Researchers continue to debate the constructs measured by commercial ability tests. Factor analytic investigations of these measures have been used to develop and refine widely adopted psychometric theories of intelligence, particularly the Cattell-Horn-Carroll (CHC) model. Even so, this linkage may be problematic, as many of these investigations examine a particular instrument in isolation, and CHC model specification across tests and research teams has not been consistent. To address these concerns, the present study used Monte Carlo resampling to investigate the latent structure of four of the most widely used intelligence tests for children and adolescents. The results located the approximate existence of the publisher-posited CHC theoretical group factors in the Differential Ability Scales-Second Edition and the Kaufman Assessment Battery for Children-Second Edition, but not in the Wechsler Intelligence Scale for Children-Fifth Edition or the Woodcock-Johnson IV Tests of Cognitive Abilities. Instead, the results supported alternative conceptualizations from independent factor analytic research. Additionally, whereas a bifactor model produced superior fit indices for two instruments (the Wechsler Intelligence Scale for Children-Fifth Edition and the Woodcock-Johnson IV Tests of Cognitive Abilities), a higher-order structure was superior for the Kaufman Assessment Battery for Children-Second Edition and the Differential Ability Scales-Second Edition. Regardless of the model employed, the general factor captured a significant portion of each instrument's variance. Implications for IQ test assessment, interpretation, and theory are discussed.
7. Canivez GL, McGill RJ, Dombrowski SC, Watkins MW, Pritchard AE, Jacobson LA. Construct Validity of the WISC-V in Clinical Cases: Exploratory and Confirmatory Factor Analyses of the 10 Primary Subtests. Assessment 2018;27:274-296. PMID: 30516059. DOI: 10.1177/1073191118811609.
Abstract
Independent exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) research with the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V) standardization sample has failed to provide support for the five group factors proposed by the publisher, but there have been no independent examinations of the WISC-V structure among clinical samples. The present study examined the latent structure of the 10 WISC-V primary subtests with a large (N = 2,512), bifurcated clinical sample (EFA, n = 1,256; CFA, n = 1,256). EFA did not support five factors as there were no salient subtest factor pattern coefficients on the fifth extracted factor. EFA indicated a four-factor model resembling the WISC-IV with a dominant general factor. A bifactor model with four group factors was supported by CFA as suggested by EFA. Variance estimates from both EFA and CFA found that the general intelligence factor dominated subtest variance and omega-hierarchical coefficients supported interpretation of the general intelligence factor. In both EFA and CFA, group factors explained small portions of common variance and produced low omega-hierarchical subscale coefficients, indicating that the group factors were of poor interpretive value.
Affiliation(s)
- Lisa A Jacobson: Johns Hopkins University School of Medicine, Baltimore, MD, USA
8. Canivez GL, Dombrowski SC, Watkins MW. Factor structure of the WISC-V in four standardization age groups: Exploratory and hierarchical factor analyses with the 16 primary and secondary subtests. Psychology in the Schools 2018. DOI: 10.1002/pits.22138.