1. Juras L, Hromatko I, Vranic A. Parietal alpha and theta power predict cognitive training gains in middle-aged adults. Front Aging Neurosci 2025; 17:1530147. PMID: 40182761; PMCID: PMC11965894; DOI: 10.3389/fnagi.2025.1530147
Abstract
Research on executive function training shows inconsistent outcomes, with factors such as age, baseline cognitive abilities, and personality traits implicated as predictors of training gains, while limited attention has been given to neurophysiological markers. Theta and alpha band power are linked to cognitive performance, suggesting a potential area for further study. This study aimed to determine whether relative theta and alpha power and their ratio could predict gains in updating and inhibition training beyond practice effects (i.e., the order of training sessions). Forty healthy middle-aged adults (aged 49-65) were randomly assigned to either the cognitive training group (n = 20) or the communication skills (control) group (n = 20). Both groups completed self-administered training sessions twice a week for 10 weeks, for a total of 20 sessions. Resting-state EEG data were recorded before the first session. Mixed-effects model analyses revealed that higher relative parietal alpha power positively predicted training performance, while theta power negatively predicted performance. Additionally, a higher parietal alpha/theta ratio was associated with better training outcomes, while the frontal alpha/theta ratio did not demonstrate significant predictive value. Other EEG measures did not show additional predictive power beyond what was accounted for by the session effects. The findings imply that individuals with specific resting-state EEG patterns may respond differently to cognitive training, making resting-state EEG a useful tool for tailoring interventions.
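As an illustration of the EEG measures this abstract refers to, the sketch below computes relative band power and an alpha/theta ratio for a single channel using a plain periodogram. It is a minimal, NumPy-only sketch on a synthetic signal: the function name, band limits, and the periodogram shortcut are assumptions for illustration, and it does not reproduce the study's preprocessing (epoching, artifact rejection, Welch averaging).

```python
import numpy as np

def relative_band_power(signal, fs, band, total=(1.0, 40.0)):
    """Relative power of `band` (Hz) within `total` (Hz) via a periodogram.

    Illustrative only; band edges and the broadband denominator are
    assumed, not taken from the paper.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= band[0]) & (freqs < band[1])
    in_total = (freqs >= total[0]) & (freqs < total[1])
    return psd[in_band].sum() / psd[in_total].sum()

# Synthetic signal dominated by a 10 Hz (alpha) component with a weak
# 5 Hz (theta) component, so the alpha/theta ratio should exceed 1.
fs = 250
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 5 * t)

alpha = relative_band_power(x, fs, (8.0, 13.0))
theta = relative_band_power(x, fs, (4.0, 8.0))
ratio = alpha / theta
```

In the study's design, per-participant values like `ratio` would then enter a mixed-effects model as predictors of session-by-session training performance.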
Affiliation(s)
- Andrea Vranic
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Zagreb, Zagreb, Croatia
2. Das S, Mangun GR, Ding M. Perceptual Expertise and Attention: An Exploration using Deep Neural Networks. bioRxiv [Preprint] 2024:2024.10.15.617743. PMID: 39464001; PMCID: PMC11507720; DOI: 10.1101/2024.10.15.617743
Abstract
Perceptual expertise and attention are two important factors that enable superior object recognition and task performance. While expertise enhances knowledge and provides a holistic understanding of the environment, attention allows us to selectively focus on task-related information and suppress distraction. It has been suggested that attention operates differently in experts and in novices, but much remains unknown. This study investigates the relationship between perceptual expertise and attention using convolutional neural networks (CNNs), which have been shown to be good models of primate visual pathways. Two CNN models were trained to become experts in either face or scene recognition, and the effect of attention on performance was evaluated in tasks involving complex stimuli, such as superimposed images containing both faces and scenes. The goal was to explore how feature-based attention (FBA) influences recognition within and outside the models' domain of expertise. We found that each model performed better in its area of expertise, and that FBA further enhanced task performance, but only within the domain of expertise, increasing performance by up to 35% in scene recognition and 15% in face recognition. However, attention had reduced or negative effects when applied outside the models' expertise domain. Neural unit-level analysis revealed that expertise led to stronger tuning toward category-specific features and sharper tuning curves, as reflected in greater representational dissimilarity between targets and distractors, which, in line with the biased competition model of attention, enhances performance by reducing competition. These findings highlight the critical role of neural tuning at the single-unit as well as the network level in distinguishing the effects of attention in experts and in novices, and demonstrate that CNNs can be used fruitfully as computational models for addressing neuroscience questions that are not practical to address with empirical methods.
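Feature-based attention of the kind this abstract describes is commonly modeled as a multiplicative gain on feature-selective units. The toy below shows how such a gain can flip which category wins the pooled response to a superimposed stimulus; the unit populations, gain value, and decode rule are hypothetical, a sketch of the general mechanism rather than the authors' implementation.

```python
import numpy as np

def decode(face_resp, scene_resp):
    """Report the category whose pooled population response is larger."""
    return "face" if face_resp.sum() > scene_resp.sum() else "scene"

# Toy responses of 50 face-tuned and 50 scene-tuned units to one
# superimposed face+scene image (values are illustrative).
face_units = np.linspace(0.40, 0.60, 50)    # moderate drive from the face
scene_units = np.linspace(0.45, 0.65, 50)   # slightly stronger scene drive

# Without attention, the scene channel dominates the pooled response.
baseline = decode(face_units, scene_units)

# FBA modeled as a multiplicative gain on the attended (face) channel,
# enough to overcome the scene's advantage.
gain = 1.3
attended = decode(face_units * gain, scene_units)
```

Under the biased competition view summarized above, sharper expert tuning widens the target-distractor separation, so a fixed gain of this sort yields a larger behavioral benefit inside the domain of expertise.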
Affiliation(s)
- Soukhin Das
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
- G R Mangun
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
- Department of Neurology, University of California, Davis
- Mingzhou Ding
- Department of Neurology, University of California, Davis
3. van Dyck LE, Gruber WR. Modeling Biological Face Recognition with Deep Convolutional Neural Networks. J Cogn Neurosci 2023; 35:1521-1537. PMID: 37584587; DOI: 10.1162/jocn_a_02040
Abstract
Deep convolutional neural networks (DCNNs) have become the state-of-the-art computational models of biological object recognition. Their remarkable success has helped vision science break new ground, and recent efforts have started to transfer this achievement to research on biological face recognition. In this regard, face detection can be investigated by comparing face-selective biological neurons and brain areas to artificial neurons and model layers. Similarly, face identification can be examined by comparing in vivo and in silico multidimensional "face spaces." In this review, we summarize the first studies that use DCNNs to model biological face recognition. On the basis of a broad spectrum of behavioral and computational evidence, we conclude that DCNNs are useful models that closely resemble the general hierarchical organization of face recognition in the ventral visual pathway and the core face network. In two exemplary spotlights, we emphasize the unique scientific contributions of these models. First, studies on face detection in DCNNs indicate that elementary face selectivity emerges automatically through feedforward processing even in the absence of visual experience. Second, studies on face identification in DCNNs suggest that identity-specific experience and generative mechanisms facilitate this particular challenge. Taken together, as this novel modeling approach enables close control of predisposition (i.e., architecture) and experience (i.e., training data), it may be suited to inform long-standing debates on the substrates of biological face recognition.
4. Xiao NG, Angeli V, Fang W, Manera V, Liu S, Castiello U, Ge L, Lee K, Simion F. The discrimination of expressions in facial movements by infants: A study with point-light displays. J Exp Child Psychol 2023; 232:105671. PMID: 37003155; DOI: 10.1016/j.jecp.2023.105671
Abstract
Perceiving facial expressions is an essential ability for infants. Although previous studies indicated that infants could perceive emotion from expressive facial movements, the developmental change of this ability remains largely unknown. To exclusively examine infants' processing of facial movements, we used point-light displays (PLDs) to present emotionally expressive facial movements. Specifically, we used a habituation and visual paired comparison (VPC) paradigm to investigate whether 3-, 6-, and 9-month-olds could discriminate between happy and fear PLDs after being habituated with a happy PLD (happy-habituation condition) or a fear PLD (fear-habituation condition). The 3-month-olds discriminated between the happy and fear PLDs in both the happy- and fear-habituation conditions. The 6- and 9-month-olds showed discrimination only in the happy-habituation condition but not in the fear-habituation condition. These results indicated a developmental change in processing expressive facial movements. Younger infants tended to process low-level motion signals regardless of the depicted emotions, whereas older infants tended to process the expressions themselves, an ability that first emerged for familiar facial expressions (e.g., happy). Additional analyses of individual differences and eye movement patterns supported this conclusion. Experiment 2 ruled out the possibility that the findings of Experiment 1 were due to a spontaneous preference for fear PLDs. Using inverted PLDs, Experiment 3 further suggested that 3-month-olds already perceived the PLDs as face-like stimuli.
Affiliation(s)
- Naiqi G Xiao
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4L8, Canada
- Valentina Angeli
- Department of Developmental and Social Psychology, University of Padova, 35131 Padova, Italy
- Wei Fang
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4L8, Canada
- Valeria Manera
- Cognition Behaviour Technology (CoBTeK), EA 7276, Edmond and Lily Safra Center, University of Nice Sophia Antipolis, 06000 Nice, France
- Shaoying Liu
- Department of Psychology, Zhejiang Sci-Tech University, Hangzhou 310018, China
- Umberto Castiello
- Department of General Psychology, University of Padova, 35131 Padova, Italy; Cognitive Neuroscience Center, University of Padova, 35131 Padova, Italy
- Liezhong Ge
- Center for Psychological Sciences, Zhejiang University, Hangzhou 310027, China
- Kang Lee
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Ontario M5R 2X2, Canada
- Francesca Simion
- Department of Developmental and Social Psychology, University of Padova, 35131 Padova, Italy; Cognitive Neuroscience Center, University of Padova, 35131 Padova, Italy
5. Jing M, Kadooka K, Franchak J, Kirkorian HL. The effect of narrative coherence and visual salience on children's and adults' gaze while watching video. J Exp Child Psychol 2023; 226:105562. PMID: 36257254; DOI: 10.1016/j.jecp.2022.105562
Abstract
Low-level visual features (e.g., motion, contrast) predict eye gaze during video viewing. The current study investigated the effect of narrative coherence on the extent to which low-level visual salience predicts eye gaze. Eye movements were recorded as 4-year-olds (n = 20) and adults (n = 20) watched a cohesive versus random sequence of video shots from a 4.5-min full vignette from Sesame Street. Overall, visual salience was a stronger predictor of gaze in adults than in children, especially when viewing a random shot sequence. The impact of narrative coherence on children's gaze was limited to the short period of time surrounding cuts to new video shots. The discussion considers potential direct effects of visual salience as well as incidental effects due to overlap between salient features and semantic content. The findings are also discussed in the context of developing video comprehension.
Affiliation(s)
- Mengguo Jing
- Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
- Kellan Kadooka
- Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
- John Franchak
- Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
- Heather L Kirkorian
- Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
6. DeBolt MC, Mitsven SG, Pomaranski KI, Cantrell LM, Luck SJ, Oakes LM. A new perspective on the role of physical salience in visual search: Graded effect of salience on infants' attention. Dev Psychol 2023; 59:326-343. PMID: 36355689; PMCID: PMC9905344; DOI: 10.1037/dev0001460
Abstract
We tested 6- and 8-month-old White and non-White infants (N = 53 total, 28 girls) from Northern California in a visual search task to determine whether a unique item in an otherwise homogeneous display (a singleton) attracts attention because it is a unique singleton and "pops out" in a categorical manner, or whether attention instead varies in a graded manner on the basis of quantitative differences in physical salience. Infants viewed arrays of four or six items; one item was a singleton and the other items were identical distractors (e.g., a single cookie and three identical toy cars). At both ages, infants looked to the singletons first more often, were faster to look at singletons, and looked longer at singletons. However, when a computational model was used to quantify the relative salience of the singleton in each display, which varied widely among the different singleton-distractor combinations, we found a strong, graded effect of physical salience on attention and no evidence that singleton status per se influenced attention. In addition, consistent with other research on attention in infancy, the effect of salience was stronger for 6-month-old infants than for 8-month-old infants. Taken together, these results show that attention-getting and attention-holding in infancy vary continuously with quantitative variations in physical salience rather than depending in a categorical manner on whether an item is unique.
Affiliation(s)
- Michaela C. DeBolt
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
- Katherine I. Pomaranski
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
- Lisa M. Cantrell
- Department of Child Development, California State University, Sacramento
- Steven J. Luck
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
- Lisa M. Oakes
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
7. Bowers JS, Malhotra G, Dujmović M, Llera Montero M, Tsvetkov C, Biscione V, Puebla G, Adolfi F, Hummel JE, Heaton RF, Evans BD, Mitchell J, Blything R. Deep problems with neural network models of human vision. Behav Brain Sci 2022; 46:e385. PMID: 36453586; DOI: 10.1017/s0140525x22002813
Abstract
Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job in predicting brain signals in response to images taken from various brain datasets (e.g., single cell responses or fMRI data). However, these behavioral and brain datasets do not test hypotheses regarding what features are contributing to good predictions and we show that the predictions may be mediated by DNNs that share little overlap with biological vision. More problematically, we show that DNNs account for almost no results from psychological research. This contradicts the common claim that DNNs are good, let alone the best, models of human object recognition. We argue that theorists interested in developing biologically plausible models of human vision need to direct their attention to explaining psychological findings. More generally, theorists need to build models that explain the results of experiments that manipulate independent variables designed to test hypotheses rather than compete on making the best predictions. We conclude by briefly summarizing various promising modeling approaches that focus on psychological data.
Affiliation(s)
- Jeffrey S Bowers
- School of Psychological Science, University of Bristol, Bristol, UK; https://jeffbowers.blogs.bristol.ac.uk/
- Gaurav Malhotra
- School of Psychological Science, University of Bristol, Bristol, UK
- Marin Dujmović
- School of Psychological Science, University of Bristol, Bristol, UK
- Milton Llera Montero
- School of Psychological Science, University of Bristol, Bristol, UK
- Christian Tsvetkov
- School of Psychological Science, University of Bristol, Bristol, UK
- Valerio Biscione
- School of Psychological Science, University of Bristol, Bristol, UK
- Guillermo Puebla
- School of Psychological Science, University of Bristol, Bristol, UK
- Federico Adolfi
- School of Psychological Science, University of Bristol, Bristol, UK
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany
- John E Hummel
- Department of Psychology, University of Illinois Urbana-Champaign, Champaign, IL, USA
- Rachel F Heaton
- Department of Psychology, University of Illinois Urbana-Champaign, Champaign, IL, USA
- Benjamin D Evans
- Department of Informatics, School of Engineering and Informatics, University of Sussex, Brighton, UK
- Jeffrey Mitchell
- Department of Informatics, School of Engineering and Informatics, University of Sussex, Brighton, UK
- Ryan Blything
- School of Psychology, Aston University, Birmingham, UK
8. Ayzenberg V, Lourenco S. Perception of an object's global shape is best described by a model of skeletal structure in human infants. eLife 2022; 11:e74943. PMID: 35612898; PMCID: PMC9132572; DOI: 10.7554/elife.74943
Abstract
Categorization of everyday objects requires that humans form representations of shape that are tolerant to variations among exemplars. Yet, how such invariant shape representations develop remains poorly understood. By comparing human infants (6-12 months; N=82) to computational models of vision using comparable procedures, we shed light on the origins and mechanisms underlying object perception. Following habituation to a never-before-seen object, infants classified other novel objects across variations in their component parts. Comparisons to several computational models of vision, including models of high-level and low-level vision, revealed that infants' performance was best described by a model of shape based on the skeletal structure. Interestingly, infants outperformed a range of artificial neural network models, selected for their massive object experience and biological plausibility, under the same conditions. Altogether, these findings suggest that robust representations of shape can be formed with little language or object experience by relying on the perceptually invariant skeletal structure.
Affiliation(s)
- Stella Lourenco
- Department of Psychology, Emory University, Atlanta, United States
9
Abstract
Categorization is the basis of thinking and reasoning. Through the analysis of infants’ gaze, we describe the trajectory through which visual object representations in infancy incrementally come to match categorical object representations as mapped onto adults’ visual cortex. Using a methodological approach that allows for a comparison of findings obtained with behavioral and brain measures in infants and adults, we identify the transition from visual exploration guided by perceptual salience to an organization of objects by categories, which begins with the animate–inanimate distinction in the first months of life and continues with a spurt of biologically relevant categories (human bodies, nonhuman bodies, nonhuman faces, small natural objects) through the second year of life.

Humans make sense of the world by organizing things into categories. When and how does this process begin? We investigated whether real-world object categories that spontaneously emerge in the first months of life match categorical representations of objects in the human visual cortex. Using eye tracking, we measured the differential looking time of 4-, 10-, and 19-mo-olds as they looked at pairs of pictures belonging to eight animate or inanimate categories (human/nonhuman, faces/bodies, real-world size big/small, natural/artificial). Taking infants’ looking times as a measure of similarity, for each age group, we defined a representational space where each object was defined in relation to others of the same or of a different category. This space was compared with hypothesis-based and functional MRI-based models of visual object categorization in the adults’ visual cortex. Analyses across different age groups showed that, as infants grow older, their looking behavior matches neural representations in ever-larger portions of the adult visual cortex, suggesting progressive recruitment and integration of more and more feature spaces distributed over the visual cortex. Moreover, the results characterize infants’ visual categorization as an incremental process with two milestones. Between 4 and 10 mo, visual exploration guided by saliency gives way to an organization according to the animate–inanimate distinction. Between 10 and 19 mo, a category spurt leads toward a mature organization. We propose that these changes underlie the coupling between seeing and thinking in the developing mind.