1
Unsworth N, Miller AL, Strayer DL. Individual differences in attention control: A meta-analysis and re-analysis of latent variable studies. Psychon Bull Rev 2024; 31:2487-2533. PMID: 38769271. DOI: 10.3758/s13423-024-02516-1.
Abstract
A meta-analysis and re-analysis of prior latent variable studies was conducted to assess whether there is evidence for individual differences in broad attention control abilities. Data from 90 independent samples and over 23,000 participants suggested that most (84.4%) prior studies find evidence for a coherent attention control factor, with average factor loadings of .51. This latent attention control factor was related to other cognitive ability factors, including working memory, shifting, fluid intelligence, long-term memory, reading comprehension, and processing speed, as well as to self-reports of task-unrelated thoughts and task-specific motivation. Further re-analyses and meta-analyses suggest that the results remained largely unchanged when considering various possible measurement issues. Examining the factor structure of attention control provided evidence for sub-components of attention control (restraining, constraining, and sustaining attention), which could be accounted for by a higher-order factor. Additional re-analyses suggested that attention control represents a broad ability within models of cognitive abilities. Overall, these results provide evidence for attention control abilities as an important individual differences construct.
Affiliation(s)
- Nash Unsworth
- Department of Psychology, University of Oregon, Eugene, OR, 97403, USA.
- Ashley L Miller
- Department of Psychology, University of California, Los Angeles, CA, USA.
- Deanna L Strayer
- Department of Psychology, University of Oregon, Eugene, OR, 97403, USA.
2
Löffler C, Frischkorn GT, Hagemann D, Sadus K, Schubert AL. The common factor of executive functions measures nothing but speed of information uptake. Psychol Res 2024; 88:1092-1114. PMID: 38372769. PMCID: PMC11143038. DOI: 10.1007/s00426-023-01924-7.
Abstract
There is an ongoing debate about the unity and diversity of executive functions and their relationship with other cognitive abilities such as processing speed, working memory capacity, and intelligence. Specifically, the initially proposed unity and diversity of executive functions is challenged by discussions about (1) the factorial structure of executive functions and (2) unfavorable psychometric properties of measures of executive functions. The present study addressed two methodological limitations of previous work that may explain conflicting results: the inconsistent use of (a) accuracy-based vs. reaction-time-based indicators and (b) average performance vs. difference scores. In a sample of 148 participants who completed a battery of executive function tasks, we attempted to replicate the three-factor model comprising the commonly distinguished executive functions shifting, updating, and inhibition by adopting the data-analytical choices of previous work. After addressing the identified methodological limitations using drift-diffusion modeling, we found only one common factor of executive functions, which was fully accounted for by individual differences in the speed of information uptake. No variance specific to executive functions remained. Our results suggest that individual differences common to all executive function tasks measure nothing more than individual differences in the speed of information uptake. We therefore suggest refraining from using typical executive function tasks to study substantive research questions, as these tasks are not valid for measuring individual differences in executive functions.
Affiliation(s)
- Christoph Löffler
- Institute of Psychology, Heidelberg University, Heidelberg, Germany.
- Department of Psychology, University of Mainz, Mainz, Germany.
- Dirk Hagemann
- Institute of Psychology, Heidelberg University, Heidelberg, Germany.
- Kathrin Sadus
- Institute of Psychology, Heidelberg University, Heidelberg, Germany.
3
Uttal DH, McKee K, Simms N, Hegarty M, Newcombe NS. How Can We Best Assess Spatial Skills? Practical and Conceptual Challenges. J Intell 2024; 12:8. PMID: 38248906. PMCID: PMC10816932. DOI: 10.3390/jintelligence12010008.
Abstract
Spatial thinking skills are associated with performance, persistence, and achievement in science, technology, engineering, and mathematics (STEM) school subjects. Because STEM knowledge and skills are integral to developing a well-trained workforce within and beyond STEM, spatial skills have become a major focus of cognitive, developmental, and educational research. However, these efforts are greatly hampered by the current lack of access to reliable, valid, and well-normed spatial tests. Although there are hundreds of spatial tests, they are often hard to access and use, and information about their psychometric properties is frequently lacking. Additional problems include (1) substantial disagreement about what different spatial tests measure: even two tests with similar names may measure very different constructs; (2) the inability of any existing test to measure some STEM-relevant spatial skills; and (3) the restriction of many tests to specific age groups. The first part of this report delineates these problems, as documented in a series of structured and open-ended interviews and surveys with colleagues. The second part outlines a roadmap for addressing the problems. We present possibilities for developing shared testing systems that would allow researchers to test many participants through the internet. We discuss technological innovations, such as virtual reality, that could facilitate the testing of navigation and other spatial skills. Developing a bank of testing resources will empower researchers and educators to explore and support spatial thinking in their disciplines, as well as drive the development of a comprehensive and coherent theoretical understanding of spatial thinking.
Affiliation(s)
- David H. Uttal
- Department of Psychology, Northwestern University, Evanston, IL 60208, USA
- Kiley McKee
- Department of Psychology, Northwestern University, Evanston, IL 60208, USA
- Nina Simms
- Spatial Intelligence and Learning Center, Northwestern University, Evanston, IL 60208, USA
- Mary Hegarty
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA 93106, USA
- Nora S. Newcombe
- Department of Psychology and Neuroscience, Temple University, Philadelphia, PA 19122, USA
4
He C, Boone AP, Hegarty M. Measuring configural spatial knowledge: Individual differences in correlations between pointing and shortcutting. Psychon Bull Rev 2023; 30:1802-1813. PMID: 36932307. PMCID: PMC10716069. DOI: 10.3758/s13423-023-02266-6.
Abstract
People use environmental knowledge to maintain a sense of direction in daily life. This knowledge is typically measured by having people point to unseen locations (judgments of relative direction) or navigate efficiently in the environment (shortcutting). Some people can estimate directions precisely, while others point randomly. Similarly, some people take shortcuts not experienced during learning, while others mainly follow learned paths. Notably, few studies have directly tested the correlation between pointing and shortcutting performance. We compared pointing and shortcutting in two experiments, one using desktop virtual reality (VR) (N = 57) and one using immersive VR (N = 48). Participants learned a new environment by following a fixed route and were then asked to point to unseen locations and navigate to targets by the shortest path. Participants' performance was clustered into two groups using K-means clustering. One (lower-ability) group pointed randomly and showed low internal consistency across pointing trials, but was able to find efficient routes, and its pointing and efficiency scores were not correlated. The other (higher-ability) group pointed precisely and navigated by efficient routes, and its pointing and efficiency scores were correlated. These results suggest that, given the same egocentric learning experience, the correlation between pointing and shortcutting depends on participants' learning ability and on the internal consistency and discriminating power of the measures. Inconsistency and limited discriminating power can lead to low correlations and mask the factors driving human variation. Psychometric properties, largely under-reported in spatial cognition, can advance our understanding of individual differences and cognitive processes for complex spatial tasks.
Affiliation(s)
- Mary Hegarty
- University of California, Santa Barbara, CA, USA
5
Intelligence Process vs. Content and Academic Performance: A Trip through a House of Mirrors. J Intell 2022; 10:128. PMID: 36547515. PMCID: PMC9782628. DOI: 10.3390/jintelligence10040128.
Abstract
The main purpose of modern intelligence tests has been to predict individual differences in academic performance, first of children, then adolescents, and later extending to adults. From the earliest Binet-Simon scales to current times, most one-on-one omnibus intelligence assessments include both process subtests (e.g., memory, reasoning) and content subtests (e.g., vocabulary, information). As somewhat parallel developments, intelligence theorists have argued about the primacy of the process components or the content components reflecting intelligence, with many modern researchers proposing that process constructs like working memory are the fundamental determinant of individual differences in intelligence. To address whether there is an adequate basis for re-configuring intelligence assessments from content or mixed content-and-process measures to all-process measures, the question to be answered in this paper is whether intellectual process assessments are more or less valid predictors of academic success, in comparison to content measures. A brief review of the history of intelligence assessment is provided with respect to these issues, and a number of problems and limitations of process measures are discussed. In the final analysis, there is insufficient justification for using process-only measures to the exclusion of content measures, and the limited data available point to the idea that content-dominated measures are more highly predictive of academic success than are process measures.
6
Goecke B, Staab M, Schittenhelm C, Wilhelm O. Stop Worrying about Multiple-Choice: Fact Knowledge Does Not Change with Response Format. J Intell 2022; 10:102. PMID: 36412782. PMCID: PMC9680349. DOI: 10.3390/jintelligence10040102.
Abstract
Declarative fact knowledge is a key component of crystallized intelligence. It is typically measured with multiple-choice (MC) items. Other response formats, such as open-ended formats, are less frequently used, although these formats might be superior for measuring crystallized intelligence. Whereas MC formats presumably require only recognizing the correct response to a question, open-ended formats supposedly require cognitive processes such as searching for, retrieving, and actively deciding on a response from long-term memory. If the methods of inquiry alter the cognitive processes involved, mean changes between methods for assessing declarative knowledge should come along with changes in the covariance structure. We tested these assumptions in two online studies administering declarative knowledge items in different response formats (MC, open-ended, and open-ended with cues). Item difficulty clearly increases in the open-ended formats, although effects in logistic regression models vary slightly across items. Importantly, latent variable analyses suggest that the method of inquiry does not affect what is measured with different response formats. These findings clearly endorse the position that crystallized intelligence does not change as a function of the response format.
Affiliation(s)
- Benjamin Goecke
- Institute for Psychology and Pedagogy, Ulm University, Albert-Einstein-Allee 47, 89081 Ulm, Germany
7
Frischkorn GT, Wilhelm O, Oberauer K. Process-oriented intelligence research: A review from the cognitive perspective. Intelligence 2022. DOI: 10.1016/j.intell.2022.101681.
8
Draheim C, Pak R, Draheim AA, Engle RW. The role of attention control in complex real-world tasks. Psychon Bull Rev 2022; 29:1143-1197. PMID: 35167106. PMCID: PMC8853083. DOI: 10.3758/s13423-021-02052-2.
Abstract
Working memory capacity is an important psychological construct, and many real-world phenomena are strongly associated with individual differences in working memory functioning. Although working memory and attention are intertwined, several studies have recently shown that individual differences in the general ability to control attention are more strongly predictive of human behavior than working memory capacity. In this review, we argue that researchers would therefore generally be better suited to studying the role of attention control rather than memory-based abilities in explaining real-world behavior and performance in humans. The review begins with a discussion of relevant literature on the nature and measurement of both working memory capacity and attention control, including recent developments in the study of individual differences in attention control. We then selectively review existing literature on the role of both working memory and attention in various applied settings and explain, in each case, why a switch in emphasis to attention control is warranted. Topics covered include psychological testing, cognitive training, education, sports, police decision-making, human factors, and disorders within clinical psychology. The review concludes with general recommendations and best practices for researchers interested in conducting studies of individual differences in attention control.
Affiliation(s)
- Christopher Draheim
- Department of Psychology, Lawrence University, Appleton, WI, USA.
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA.
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Amanda A Draheim
- Department of Psychology, Lawrence University, Appleton, WI, USA
- Randall W Engle
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
9
Frischkorn GT, von Bastian CC. In Search of the Executive Cognitive Processes Proposed by Process-Overlap Theory. J Intell 2021; 9:43. PMID: 34449666. PMCID: PMC8395920. DOI: 10.3390/jintelligence9030043.
Abstract
Process-Overlap Theory (POT) suggests that measures of cognitive abilities sample from sets of independent cognitive processes. These cognitive processes can be separated into domain-general executive processes, sampled by the majority of cognitive ability measures, and domain-specific processes, sampled only by measures within a certain domain. According to POT, fluid intelligence measures are related because different tests sample similar domain-general executive cognitive processes to some extent. Re-analyzing data from a study by De Simoni and von Bastian (2018), we assessed domain-general variance from executive processing tasks measuring inhibition, shifting, and efficiency of removal from working memory, and examined their relation to a domain-general factor extracted from fluid intelligence measures. The results showed that domain-general factors reflecting general processing speed were moderately and negatively correlated with the domain-general fluid intelligence factor (r = -.17 to -.36). However, domain-general factors isolating variance specific to inhibition, shifting, and removal showed only small and inconsistent correlations with the domain-general fluid intelligence factor (r = .02 to .22). These findings suggest (1) that executive processing tasks sample only few domain-general executive processes also sampled by fluid intelligence measures, and (2) that domain-general speed of processing contributes more strongly to individual differences in fluid intelligence than do domain-general executive processes.
Affiliation(s)
- Gidon T. Frischkorn
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14, 8050 Zurich, Switzerland
10
Cretenoud AF, Barakat A, Milliet A, Choung OH, Bertamini M, Constantin C, Herzog MH. How do visual skills relate to action video game performance? J Vis 2021; 21:10. PMID: 34269794. PMCID: PMC8297421. DOI: 10.1167/jov.21.7.10.
Abstract
It has been claimed that video gamers possess increased perceptual and cognitive skills compared to non-video gamers. Here, we examined to what extent gaming performance in CS:GO (Counter-Strike: Global Offensive) correlates with visual performance. We tested 94 players ranging from beginners to experts with a battery of visual paradigms, such as visual acuity and contrast detection. In addition, we assessed performance in specific gaming skills, such as shooting and tracking, and assessed personality traits. All measures together explained about 70% of the variance of the players' rank. In particular, regression models showed that a few visual abilities, such as visual acuity in the periphery and susceptibility to the Honeycomb illusion, were strongly associated with the players' rank. Although the causality of the effect remains unknown, our results show that high-rank players perform better in certain visual skills than low-rank players.
Affiliation(s)
- Aline F Cretenoud
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Arthur Barakat
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Laboratory of Behavioral Genetics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Logitech Europe S.A., Innovation Park EPFL, Lausanne, Switzerland
- Alain Milliet
- Logitech Europe S.A., Innovation Park EPFL, Lausanne, Switzerland
- Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Marco Bertamini
- Department of Psychological Sciences, University of Liverpool, Liverpool, UK
- Department of General Psychology, University of Padova, Padova, Italy
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
12
Hambrick DZ, Macnamara BN, Oswald FL. Is the Deliberate Practice View Defensible? A Review of Evidence and Discussion of Issues. Front Psychol 2020; 11:1134. PMID: 33013494. PMCID: PMC7461852. DOI: 10.3389/fpsyg.2020.01134.
Abstract
The question of what explains individual differences in expertise within complex domains such as music, games, sports, science, and medicine is currently a major topic of interest in a diverse range of fields, including psychology, education, and sports science, to name just a few. Ericsson and colleagues' deliberate practice view is a highly influential perspective in the literature on expertise and expert performance, but is it viable as a testable scientific theory? Here, reviewing more than 25 years of Ericsson and colleagues' writings, we document critical inconsistencies in the definition of deliberate practice, along with apparent shifts in the standard for evidence concerning deliberate practice. We also consider the impact of these issues on progress in the field of expertise, focusing on the empirical testability and falsifiability of the deliberate practice view. We then discuss a multifactorial perspective on expertise, and how open science practices can accelerate progress in research guided by this perspective.
Affiliation(s)
- David Z. Hambrick
- Department of Psychology, Michigan State University, East Lansing, MI, United States
- Brooke N. Macnamara
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH, United States
- Frederick L. Oswald
- Department of Psychological Sciences, Rice University, Houston, TX, United States