1. Szpiro SFA, Burlingham CS, Simoncelli EP, Carrasco M. Perceptual learning improves discrimination but does not reduce distortions in appearance. PLoS Comput Biol 2025;21:e1012980. PMID: 40233123. DOI: 10.1371/journal.pcbi.1012980.
Abstract
Human perceptual sensitivity often improves with training, a phenomenon known as "perceptual learning." Another important perceptual dimension is appearance, the subjective sense of stimulus magnitude. Are training-induced improvements in sensitivity accompanied by more accurate appearance? Here, we examined this question by measuring both discrimination (sensitivity) and estimation (appearance) responses to near-horizontal motion directions, which are known to be repulsed away from horizontal. Participants performed discrimination and estimation tasks before and after training in either the discrimination or the estimation task or none (control group). Human observers who trained in either discrimination or estimation exhibited improvements in discrimination accuracy, but estimation repulsion did not decrease; instead, it either persisted or increased. Hence, distortions in perception can be exacerbated after perceptual learning. We developed a computational observer model in which perceptual learning arises from increases in the precision of underlying neural representations, which explains this counterintuitive finding. For each observer, the fitted model accounted for discrimination performance, the distribution of estimates, and their changes with training. Our empirical findings and modeling suggest that learning enhances distinctions between categories, a potentially important aspect of real-world perception and perceptual learning.
Affiliation(s)
- Sarit F A Szpiro: Department of Special Education, Faculty of Education, University of Haifa, The Edmond J. Safra Brain Research Center, University of Haifa, Haifa, Israel
- Charlie S Burlingham: Department of Psychology, New York University, New York, New York, United States of America
- Eero P Simoncelli: Department of Psychology, New York University, New York, New York, United States of America; Center for Neural Science, New York University, New York, New York, United States of America; Courant Institute of Mathematical Sciences, New York University, New York, New York, United States of America; Flatiron Institute, Simons Foundation, New York, New York, United States of America
- Marisa Carrasco: Department of Psychology, New York University, New York, New York, United States of America; Center for Neural Science, New York University, New York, New York, United States of America
2. Kvam PD. The Tweedledum and Tweedledee of dynamic decisions: Discriminating between diffusion decision and accumulator models. Psychon Bull Rev 2025;32:588-613. PMID: 39354295. PMCID: PMC12000211. DOI: 10.3758/s13423-024-02587-0.
Abstract
Theories of dynamic decision-making are typically built on evidence accumulation, which is modeled using racing accumulators or diffusion models that track a shifting balance of support over time. However, these two types of models are only two special cases of a more general evidence accumulation process where options correspond to directions in an accumulation space. Using this generalized evidence accumulation approach as a starting point, I identify four ways to discriminate between absolute-evidence and relative-evidence models. First, an experimenter can look at the information that decision-makers considered to identify whether there is a filtering of near-zero evidence samples, which is characteristic of a relative-evidence decision rule (e.g., diffusion decision model). Second, an experimenter can disentangle different components of drift rates by manipulating the discriminability of the two response options relative to the stimulus to delineate the balance of evidence from the total amount of evidence. Third, a modeler can use machine learning to classify a set of data according to its generative model. Finally, machine learning can also be used to directly estimate the geometric relationships between choice options. I illustrate these different approaches by applying them to data from an orientation-discrimination task, showing converging conclusions across all four methods in favor of accumulator-based representations of evidence during choice. These tools can clearly delineate absolute-evidence and relative-evidence models, and should be useful for comparing many other types of decision theories.
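To make the absolute- versus relative-evidence distinction concrete, the sketch below simulates both stopping rules on the same kind of noisy evidence stream. It is a minimal illustration, not Kvam's model code; all parameter values and function names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift_a=0.12, drift_b=0.08, noise=1.0, bound=15.0,
                   dt=1.0, max_steps=5000, rule="relative"):
    """Accumulate noisy evidence for two options and stop by one of two rules.

    rule="relative": stop when the *difference* A-B reaches +/-bound
                     (diffusion-decision-model behavior).
    rule="absolute": stop when either accumulator alone reaches the bound
                     (racing-accumulator behavior).
    """
    a = b = 0.0
    for t in range(1, max_steps + 1):
        a += drift_a * dt + noise * np.sqrt(dt) * rng.standard_normal()
        b += drift_b * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if rule == "relative":
            if abs(a - b) >= bound:
                return ("A" if a > b else "B"), t
        else:  # absolute evidence
            if a >= bound or b >= bound:
                return ("A" if a >= bound else "B"), t
    return None, max_steps  # no decision within the time limit

for rule in ("relative", "absolute"):
    results = [simulate_trial(rule=rule) for _ in range(2000)]
    p_a = np.mean([choice == "A" for choice, _ in results])
    mean_rt = np.mean([t for _, t in results])
    print(f"{rule:8s}  P(choose A) = {p_a:.2f}   mean steps = {mean_rt:.0f}")
```

Running both rules on matched inputs makes the behavioral signatures discussed in the paper (e.g., how choice probability and response time diverge across rules) easy to explore.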
3. Cheng YA, Sanayei M, Chen X, Jia K, Li S, Fang F, Watanabe T, Thiele A, Zhang RY. A neural geometry approach comprehensively explains apparently conflicting models of visual perceptual learning. Nat Hum Behav 2025. PMID: 40164913. DOI: 10.1038/s41562-025-02149-x.
Abstract
Visual perceptual learning (VPL), defined as long-term improvement in a visual task, is considered a crucial tool for elucidating underlying visual and brain plasticity. Previous studies have proposed several neural models of VPL, including changes in neural tuning or in noise correlations. Here, to adjudicate different models, we propose that all neural changes at single units can be conceptualized as geometric transformations of population response manifolds in a high-dimensional neural space. Following this neural geometry approach, we identified neural manifold shrinkage due to reduced trial-by-trial population response variability, rather than tuning or correlation changes, as the primary mechanism of VPL. Furthermore, manifold shrinkage successfully explains VPL effects across artificial neural responses in deep neural networks, multivariate blood-oxygenation-level-dependent signals in humans and multiunit activities in monkeys. These converging results suggest that our neural geometry approach comprehensively explains a wide range of empirical results and reconciles previously conflicting models of VPL.
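The manifold-shrinkage account can be illustrated with a simple Gaussian toy model (a sketch under simplifying assumptions, not the authors' analysis pipeline): if training reduces trial-by-trial population response variability while leaving mean tuning unchanged, the two stimulus response distributions shrink and a fixed linear readout discriminates them better.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 500

# Fixed mean population responses (tuning) for two nearby stimuli;
# in this toy model, training leaves the means unchanged.
mu_a = rng.normal(0.0, 1.0, n_neurons)
mu_b = mu_a + rng.normal(0.0, 0.3, n_neurons)

def discriminability(noise_sd):
    """d' of a difference-of-means linear readout with isotropic Gaussian noise."""
    resp_a = mu_a + noise_sd * rng.standard_normal((n_trials, n_neurons))
    resp_b = mu_b + noise_sd * rng.standard_normal((n_trials, n_neurons))
    w = mu_a - mu_b                          # fixed readout direction
    proj_a, proj_b = resp_a @ w, resp_b @ w
    pooled_sd = np.sqrt(0.5 * (proj_a.var() + proj_b.var()))
    return abs(proj_a.mean() - proj_b.mean()) / pooled_sd

# "Manifold shrinkage" here is simply a reduction of the response SD around each mean.
for label, sd in [("pre-training ", 1.0), ("post-training", 0.6)]:
    print(f"{label}: noise SD = {sd:.1f}  ->  d' = {discriminability(sd):.2f}")
```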
Affiliation(s)
- Yu-Ang Cheng: Brain Health Institute, National Center for Mental Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine and School of Psychology, Shanghai, People's Republic of China; Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Mehdi Sanayei: Biosciences Institute, Newcastle University, Framlington Place, Newcastle upon Tyne, UK; School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran, Iran
- Xing Chen: Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Ke Jia: Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China; Liangzhu Laboratory, MOE Frontier Science Center for Brain Science and Brain-machine Integration, State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou, People's Republic of China; NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University, Hangzhou, People's Republic of China
- Sheng Li: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China; IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
- Fang Fang: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China; IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, People's Republic of China
- Takeo Watanabe: Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Alexander Thiele: Biosciences Institute, Newcastle University, Framlington Place, Newcastle upon Tyne, UK
- Ru-Yuan Zhang: Brain Health Institute, National Center for Mental Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine and School of Psychology, Shanghai, People's Republic of China
4. Hong B, Chen J, Huang W, Li L. Serial Dependence in Smooth Pursuit Eye Movements of Preadolescent Children and Adults. Invest Ophthalmol Vis Sci 2024;65:37. PMID: 39728694. DOI: 10.1167/iovs.65.14.37.
Abstract
Purpose: Serial dependence refers to the attraction of current perceptual responses toward previously seen stimuli. Despite extensive research on serial dependence, fundamental questions, such as how serial dependence changes with development, whether it affects the perception of sensory input, and what qualifies as serial dependence, remain unresolved. The current study aims to address these questions.
Methods: We tested 81 children (8-9 years) and 77 adults (18-30 years) with an ocular tracking task in which participants used their eyes to track a target moving in a specific direction on each trial. This task examined both open-loop (pursuit initiation) and closed-loop (steady-state tracking) smooth pursuit eye movements.
Results: We found an attractive bias in pursuit direction toward the previously seen target motion direction during pursuit initiation, but not during sustained pursuit, in both children and adults. This bias displayed both the feature- and temporal-tuning characteristics of serial dependence, showed oblique-cardinal directional anisotropy, and was more pronounced in children than in adults. The greater effect of serial dependence around oblique than cardinal directions, and its increased magnitude in children compared to adults, can be explained by the larger variability in pursuit direction around oblique directions and in children, as predicted by the Bayesian framework.
Conclusions: Serial dependence in smooth pursuit occurs early, during pursuit initiation, when the response is driven by the perception of sensory input. Age-related changes in serial dependence reflect the fine-tuning of general brain functions, which enhances precision in tracking a moving target and thus reduces serial dependence effects.
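The Bayesian prediction invoked above, namely that larger variability in pursuit direction produces stronger attraction toward the previous trial, can be sketched with a reliability-weighted combination of the current measurement and the preceding direction. This is a toy model with illustrative numbers, not the authors' fitted model.

```python
import numpy as np

def serial_bias(sigma_meas, sigma_prior=20.0, delta=30.0, n_trials=20000, seed=2):
    """Average attraction (deg) toward the previous trial's direction.

    The observer combines the current noisy measurement (SD = sigma_meas) with the
    previous trial's direction, treated as a prior with SD = sigma_prior.
    `delta` is the direction difference between consecutive trials.
    """
    rng = np.random.default_rng(seed)
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)   # weight on the current measurement
    m = delta + sigma_meas * rng.standard_normal(n_trials)  # measurements; previous trial at 0 deg
    estimates = w * m + (1 - w) * 0.0                       # reliability-weighted combination
    return delta - estimates.mean()                         # positive = pulled toward previous trial

# Larger directional variability (e.g., oblique directions, children) -> larger attractive bias.
for label, sd in [("cardinal / adults ", 5.0), ("oblique / children", 12.0)]:
    print(f"{label}: measurement SD = {sd:4.1f} deg  ->  bias toward previous = {serial_bias(sd):.1f} deg")
```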
Affiliation(s)
- Bao Hong: School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China
- Jing Chen: NYU-ECNU Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China; Faculty of Arts and Science, New York University Shanghai, Shanghai, China; Institute of Brain and Education Innovation, East China Normal University, Shanghai, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai, China
- Wenjun Huang: School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China
- Li Li: School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China; Faculty of Arts and Science, New York University Shanghai, Shanghai, China; Institute of Brain and Education Innovation, East China Normal University, Shanghai, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai, China
5. Song Y, Wang Q, Fang F. Time courses of brain plasticity underpinning visual motion perceptual learning. Neuroimage 2024;302:120897. PMID: 39442899. DOI: 10.1016/j.neuroimage.2024.120897.
Abstract
Visual perceptual learning (VPL) refers to a long-term improvement of visual task performance through training or experience, reflecting brain plasticity even in adults. In human subjects, VPL has been mostly studied using functional magnetic resonance imaging (fMRI). However, due to the low temporal resolution of fMRI, how VPL affects the time course of visual information processing is largely unknown. To address this issue, we trained human subjects to perform a visual motion direction discrimination task. Their behavioral performance and magnetoencephalography (MEG) signals responding to the motion stimuli were measured before, immediately after, and two weeks after training. Training induced a long-lasting behavioral improvement for the trained direction. Based on the MEG signals from occipital sensors, we found that, for the trained motion direction, VPL increased the motion direction decoding accuracy, reduced the motion direction decoding latency, enhanced the direction-selective channel response, and narrowed the tuning profile. Following the MEG source reconstruction, we showed that VPL enhanced the cortical response in early visual cortex (EVC) and strengthened the feedforward connection from EVC to V3A. These VPL-induced neural changes co-occurred 160-230 ms after stimulus onset. Complementary to previous fMRI findings on VPL, this study provides a comprehensive description of the neural mechanisms of visual motion perceptual learning from a temporal perspective and reveals how VPL shapes the time course of visual motion processing in the adult human brain.
Affiliation(s)
- Yongqian Song: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
- Qian Wang: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; National Key Laboratory of General Artificial Intelligence, Peking University, Beijing 100871, China
- Fang Fang: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing 100871, China
6. Ransom M. The perceptual learning of socially constructed kinds: how culture biases and shapes perception. Philos Stud 2024;181:3113-3133. DOI: 10.1007/s11098-024-02211-w.
7. Willis HE, Farenthold B, Millington-Truby RS, Willis R, Starling L, Cavanaugh M, Tamietto M, Huxlin K, Bridge H. Persistence of training-induced visual improvements after occipital stroke. medRxiv 2024:2024.10.24.24316036 [Preprint]. PMID: 39502666. PMCID: PMC11537328. DOI: 10.1101/2024.10.24.24316036.
Abstract
Damage to the primary visual cortex causes homonymous visual impairments that appear to benefit from visual discrimination training. However, whether improvements persist without continued training remains to be determined and was the focus of the present study. After a baseline assessment visit, 20 participants trained twice daily in their blind-field for a minimum of six months (median=155 sessions), using a motion discrimination and integration task. At the end of training, a return study visit was used to assess recovery. Three months later, 14 of the participants returned for a third study visit to assess persistence of recovery. At each study visit, motion discrimination and integration thresholds, Humphrey visual fields, and structural MRI scans were collected. Immediately after training, all but four participants showed improvements in the trained discrimination task, and shrinkage of the perimetrically-defined visual defect. While these gains were sustained in seven out of eleven participants who improved with training, four participants lost their improvement in motion discrimination thresholds at the follow-up visit. Persistence of recovery was not related to age, time since lesion, number of training sessions performed, proportion of V1 damaged, deficit size, or optic tract degeneration measured from structural MRI scans. The present findings underscore the potential of extended visual training to induce long-term improvements in stroke-induced vision loss. However, they also highlight the need for further investigations to better understand the mechanisms driving recovery, its persistence post-training, and especially heterogeneity among participants.
Affiliation(s)
- Hanna E Willis: Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford OX3 9DU, United Kingdom
- Berkeley Farenthold: Flaum Eye Institute and Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Rebecca S Millington-Truby: Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford OX3 9DU, United Kingdom
- Rebecca Willis: Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford OX3 9DU, United Kingdom
- Lucy Starling: Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford OX3 9DU, United Kingdom
- Matthew Cavanaugh: Flaum Eye Institute and Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Marco Tamietto: Department of Psychology, University of Torino, 10123 Torino, Italy; Department of Medical and Clinical Psychology, Tilburg University, The Netherlands
- Krystel Huxlin: Flaum Eye Institute and Center for Visual Science, University of Rochester, Rochester, NY 14642, USA
- Holly Bridge: Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford OX3 9DU, United Kingdom
8. Consorti A, Sansevero G, Di Marco I, Floridia S, Novelli E, Berardi N, Sale A. An essential role for the latero-medial secondary visual cortex in the acquisition and retention of visual perceptual learning in mice. Nat Commun 2024;15:7322. PMID: 39183324. PMCID: PMC11345418. DOI: 10.1038/s41467-024-51817-5.
Abstract
Perceptual learning refers to any change in discrimination abilities as a result of practice, a fundamental process that improves the organism's response to the external environment. Visual perceptual learning (vPL) is thought to rely on functional rearrangements in brain circuitry occurring at early stages of sensory processing, with a pivotal role for the primary visual cortex (V1). However, top-down inputs from higher-order visual areas (HVAs) have been suggested to play a key part in vPL, conveying information on attention, expectation, and the precise nature of the perceptual task. A direct assessment of the possibility of modulating vPL by manipulating top-down activity in awake subjects is still missing. Here, we used a combination of chemogenetics, behavioral analysis, and multichannel electrophysiological assessments to show a critical role in vPL acquisition and retention for neuronal activity in the latero-medial secondary visual cortex (LM), the prime source of top-down feedback projections reentering V1.
Affiliation(s)
- Alan Consorti: Neuroscience Institute, National Research Council (CNR), Pisa, Italy; NEUROFARBA, University of Florence, 50139, Florence, Italy
- Irene Di Marco: Neuroscience Institute, National Research Council (CNR), Pisa, Italy; NEUROFARBA, University of Florence, 50139, Florence, Italy
- Silvia Floridia: Neuroscience Institute, National Research Council (CNR), Pisa, Italy
- Elena Novelli: Neuroscience Institute, National Research Council (CNR), Pisa, Italy
- Nicoletta Berardi: Neuroscience Institute, National Research Council (CNR), Pisa, Italy; NEUROFARBA, University of Florence, 50139, Florence, Italy
- Alessandro Sale: Neuroscience Institute, National Research Council (CNR), Pisa, Italy
9. Liu J, Lu ZL, Dosher B. Transfer of visual perceptual learning over a task-irrelevant feature through feature-invariant representations: Behavioral experiments and model simulations. J Vis 2024;24:17. PMID: 38916886. PMCID: PMC11205231. DOI: 10.1167/jov.24.6.17.
Abstract
A large body of literature has examined specificity and transfer of perceptual learning, suggesting a complex picture. Here, we distinguish between transfer over variations in a "task-relevant" feature (e.g., transfer of a learned orientation task to a different reference orientation) and transfer over a "task-irrelevant" feature (e.g., transfer of a learned orientation task to a different retinal location or different spatial frequency), and we focus on the mechanism for the latter. Experimentally, we assessed whether learning a judgment of one feature (such as orientation) using one value of an irrelevant feature (e.g., spatial frequency) transfers to another value of the irrelevant feature. Experiment 1 examined whether learning in eight-alternative orientation identification with one or multiple spatial frequencies transfers to stimuli at five different spatial frequencies. Experiment 2 paralleled Experiment 1, examining whether learning in eight-alternative spatial-frequency identification at one or multiple orientations transfers to stimuli with five different orientations. Training the orientation task with a single spatial frequency transferred widely to all other spatial frequencies, with a tendency to specificity when training with the highest spatial frequency. Training the spatial frequency task fully transferred across all orientations. Computationally, we extended the identification integrated reweighting theory (I-IRT) to account for the transfer data (Dosher, Liu, & Lu, 2023; Liu, Dosher, & Lu, 2023). Just as location-invariant representations in the original IRT explain transfer over retinal locations, incorporating feature-invariant representations effectively accounted for the observed transfer. Taken together, we suggest that feature-invariant representations can account for transfer of learning over a "task-irrelevant" feature.
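The reweighting mechanism described above can be caricatured with a toy readout model: a decision unit weights both feature-specific channels and a feature-invariant channel, and a delta rule trained at one value of the irrelevant feature loads weight onto the invariant channel, which then supports performance at untrained values. This is a deliberately simplified stand-in for the published I-IRT, with made-up representations and parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sf = 5        # values of the task-irrelevant feature (e.g., spatial frequency)
noise = 1.0

def responses(label, sf):
    """Toy population: one orientation-signaling unit per SF (active only at that SF)
    plus one SF-invariant orientation-signaling unit. label is +1/-1."""
    specific = np.zeros(n_sf)
    specific[sf] = label + noise * rng.standard_normal()
    invariant = label + noise * rng.standard_normal()
    return np.concatenate([specific, [invariant]])

def accuracy(w, sf, n=2000):
    labels = rng.choice([-1, 1], n)
    correct = sum(np.sign(w @ responses(y, sf)) == y for y in labels)
    return correct / n

w = 0.01 * rng.standard_normal(n_sf + 1)          # readout weights, learned by a delta rule
print(f"before training: trained-SF acc = {accuracy(w, 0):.2f}, untrained-SF acc = {accuracy(w, 3):.2f}")

lr = 0.02
for _ in range(4000):                             # train orientation judgments at SF index 0 only
    y = rng.choice([-1, 1])
    x = responses(y, 0)
    w += lr * (y - np.tanh(w @ x)) * x            # delta rule on a soft decision variable

print(f"after training:  trained-SF acc = {accuracy(w, 0):.2f}, untrained-SF acc = {accuracy(w, 3):.2f}")
```

Because the invariant channel is informative during training, its readout weight grows, so accuracy also rises at the untrained spatial frequency, which is the transfer pattern the abstract describes.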
Affiliation(s)
- Jiajuan Liu: Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Zhong-Lin Lu: Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Sciences and Department of Psychology, New York University, New York, NY, USA; NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- Barbara Dosher: Department of Cognitive Sciences, University of California, Irvine, CA, USA
10. Bennett PJ, Hashemi A, Lass JW, Sekuler AB, Hussain Z. The time course of stimulus-specific perceptual learning. J Vis 2024;24:9. PMID: 38602837. PMCID: PMC11019584. DOI: 10.1167/jov.24.4.9.
Abstract
Practice on perceptual tasks can lead to long-lasting, stimulus-specific improvements. Rapid stimulus-specific learning, assessed 24 hours after practice, has been found with just 105 practice trials in a face identification task. However, a much longer time course for stimulus-specific learning has been found in other tasks. Here, we examined 1) whether rapid stimulus-specific learning occurs for unfamiliar, non-face stimuli in a texture identification task; 2) the effects of varying practice across a range from just 21 trials up to 840 trials; and 3) if rapid, stimulus-specific learning persists over a 1-week, as well as a 1-day, interval. Observers performed a texture identification task in two sessions separated by one day (Experiment 1) or 1 week (Experiment 2). Observers received varying amounts of practice (21, 63, 105, or 840 training trials) in session 1 and completed 840 trials in session 2. In session 2, one-half of the observers in each group performed the task with the same textures as in session 1, and one-half switched to novel textures (same vs. novel conditions). In both experiments we found that stimulus-specific learning - defined as the difference in response accuracy in the same and novel conditions - increased as a linear function of the log number of session 1 training trials and was statistically significant after approximately 100 training trials. The effects of stimulus novelty did not differ across experiments. These results support the idea that stimulus-specific learning in our task arises gradually and continuously through practice, perhaps concurrently with general learning.
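The reported linear dependence of stimulus specificity on the log number of session 1 training trials is straightforward to express as a fit; the sketch below uses invented accuracy differences purely to show the form of the model, not the authors' data.

```python
import numpy as np

# Number of session-1 training trials and hypothetical stimulus-specific learning
# (accuracy in the "same" condition minus the "novel" condition); values are illustrative.
trials = np.array([21, 63, 105, 840])
specific_learning = np.array([0.01, 0.035, 0.05, 0.11])

# Fit: specificity = a + b * ln(trials)
b, a = np.polyfit(np.log(trials), specific_learning, deg=1)
print(f"specificity = {a:.3f} + {b:.3f} * ln(trials)")

for n in trials:
    print(f"  {n:4d} trials -> predicted specificity {a + b * np.log(n):.3f}")
```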
Affiliation(s)
- Patrick J Bennett: Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, Canada
- Ali Hashemi: Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, Canada
- Jordan W Lass: Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Canada
- Allison B Sekuler: Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, Canada; Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Canada; Department of Psychology, University of Toronto, Toronto, Canada
- Zahra Hussain: School of Psychology, University of Plymouth, Plymouth, UK
11. Shen S, Sun Y, Lu J, Li C, Chen Q, Mo C, Fang F, Zhang X. Profiles of visual perceptual learning in feature space. iScience 2024;27:109128. PMID: 38384835. PMCID: PMC10879700. DOI: 10.1016/j.isci.2024.109128.
Abstract
Visual perceptual learning (VPL), experience-induced gains in discriminating visual features, has been studied extensively and intensively for many years; its profile in feature space, however, remains unclear. Here, human subjects were trained to perform either a simple low-level feature (grating orientation) or a complex high-level object (face view) discrimination task over a long time course. During, immediately after, and one month after training, all results showed that in feature space VPL in grating orientation discrimination had a center-surround profile, whereas VPL in face view discrimination had a monotonic gradient profile. Importantly, these two profiles emerged in deep convolutional neural networks (modified AlexNets) consisting of 7 and 12 layers, respectively. Altogether, our study reveals for the first time a feature hierarchy-dependent profile of VPL in feature space, placing a necessary constraint on our understanding of the neural computation of VPL.
Affiliation(s)
- Shiqi Shen: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Yueling Sun: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Jiachen Lu: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Chu Li: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Qinglin Chen: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
- Ce Mo: Department of Psychology, Sun-YatSen University, Guangzhou, Guangdong 510275, China
- Fang Fang: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
- Xilin Zhang: Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, Guangdong 510631, China; School of Psychology, Center for Studies of Psychological Application, and Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, Guangdong 510631, China
12. Karaduman A, Karoglu-Eravsar ET, Adams MM, Kafaligonul H. Passive exposure to visual motion leads to short-term changes in the optomotor response of aging zebrafish. Behav Brain Res 2024;460:114812. PMID: 38104637. DOI: 10.1016/j.bbr.2023.114812.
Abstract
Numerous studies have shown that prior visual experiences play an important role in sensory processing and adapting behavior in a dynamic environment. A repeated and passive presentation of visual stimulus is one of the simplest procedures to manipulate acquired experiences. Using this approach, we aimed to investigate exposure-based visual learning of aging zebrafish and how cholinergic intervention is involved in exposure-induced changes. Our measurements included younger and older wild-type zebrafish and achesb55/+ mutants with decreased acetylcholinesterase activity. We examined both within-session and across-day changes in the zebrafish optomotor responses to repeated and passive exposure to visual motion. Our findings revealed short-term (within-session) changes in the magnitude of optomotor response (i.e., the amount of position shift by fish as a response to visual motion) rather than long-term and persistent effects across days. Moreover, the observed short-term changes were age- and genotype-dependent. Compared to the initial presentations of motion within a session, the magnitude of optomotor response to terminal presentations decreased in the older zebrafish. There was a similar robust decrease specific to achesb55/+ mutants. Taken together, these results point to short-term (within-session) alterations in the motion detection of adult zebrafish and suggest differential effects of neural aging and cholinergic system on the observed changes. These findings further provide important insights into adult zebrafish optomotor response to visual motion and contribute to understanding this reflexive behavior in the short- and long-term stimulation profiles.
Affiliation(s)
- Aysenur Karaduman: Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Türkiye; Department of Molecular Biology and Genetics Zebrafish Facility, Bilkent University, Ankara, Türkiye; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Türkiye
- Elif Tugce Karoglu-Eravsar: Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Türkiye; Department of Molecular Biology and Genetics Zebrafish Facility, Bilkent University, Ankara, Türkiye; National Nanotechnology Research Center (UNAM), Bilkent University, Ankara, Türkiye; Department of Psychology, Selcuk University, Konya, Türkiye
- Michelle M Adams: Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Türkiye; Department of Molecular Biology and Genetics Zebrafish Facility, Bilkent University, Ankara, Türkiye; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Türkiye; National Nanotechnology Research Center (UNAM), Bilkent University, Ankara, Türkiye; Department of Psychology, Bilkent University, Ankara, Türkiye
- Hulusi Kafaligonul: Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Türkiye; Department of Molecular Biology and Genetics Zebrafish Facility, Bilkent University, Ankara, Türkiye; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Türkiye; National Nanotechnology Research Center (UNAM), Bilkent University, Ankara, Türkiye
13. Bi T, Luo W, Wu J, Shao B, Tan Q, Kou H. Effect of facial emotion recognition learning transfers across emotions. Front Psychol 2024;15:1310101. PMID: 38312392. PMCID: PMC10834736. DOI: 10.3389/fpsyg.2024.1310101.
Abstract
Introduction: Perceptual learning of facial expressions has been shown to be specific to the trained expression, indicating separate encoding of the emotional content of different expressions. However, little is known about the specificity of emotion recognition training with the visual search paradigm and the sensitivity of learning to near-threshold stimuli.
Methods: In the present study, we adopted a visual search paradigm to measure the recognition of facial expressions. In Experiment 1 (Exp1), Experiment 2 (Exp2), and Experiment 3 (Exp3), subjects were trained for 8 days to search for a target expression in an array of faces presented for 950 ms, 350 ms, and 50 ms, respectively. In Experiment 4 (Exp4), we trained subjects to search for a triangle target and then tested them on the facial expression search task. Before and after training, subjects were tested on the trained and untrained facial expressions, presented for 950 ms, 650 ms, 350 ms, or 50 ms.
Results: Training led to large improvements in the recognition of facial emotions only when the faces were presented long enough (Exp1: 85.89%; Exp2: 46.05%). Furthermore, the training effect transferred to the untrained expression. However, when the faces were presented briefly (Exp3), the training effect was small (6.38%). In Exp4, the training effect did not transfer across categories.
Discussion: Our findings reveal cross-emotion transfer of facial expression recognition training in a visual search task. In addition, learning hardly affected the recognition of near-threshold expressions.
Affiliation(s)
- Taiyong Bi: Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Wei Luo: The Institute of Ethnology and Anthropology, Chinese Academy of Social Sciences, Beijing, China
- Jia Wu: Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Boyao Shao: Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Qingli Tan: Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
- Hui Kou: Research Center of Humanities and Medicine, Zunyi Medical University, Zunyi, China
14. Gong L, Zhao J, Dai Y, Wang Z, Hou F, Zhang Y, Lu ZL, Zhou J. Improving iconic memory through contrast detection training with HOA-corrected vision. Fundam Res 2024;4:95-102. PMID: 38933850. PMCID: PMC11197569. DOI: 10.1016/j.fmre.2022.06.006.
Abstract
Iconic memory and short-term memory are not only crucial for perception and cognition, but also of great importance to mental health. Here, we first showed that both types of memory could be improved by improving limiting processes in visual processing through perceptual learning. Normal adults were trained in a contrast detection task for ten days, with their higher-order aberrations (HOA) corrected in real-time. We found that the training improved not only their contrast sensitivity function (CSF), but also their iconic memory and baseline information maintenance for short-term memory, and the relationship between memory and CSF improvements could be well-predicted by an observer model. These results suggest that training the limiting component of a cognitive task with visual perceptual learning could improve visual cognition. They may also provide an empirical foundation for new therapies to treat people with poor sensory memory.
Affiliation(s)
- Ling Gong: State Key Laboratory of Ophthalmology, Optometry and Vision Science, Eye Hospital, School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou 325027, China
- Junlei Zhao: Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China; The Key Laboratory of Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Yun Dai: Chengdu University of Traditional Chinese Medicine, Chengdu 610075, China
- Zili Wang: State Key Laboratory of Ophthalmology, Optometry and Vision Science, Eye Hospital, School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou 325027, China
- Fang Hou: State Key Laboratory of Ophthalmology, Optometry and Vision Science, Eye Hospital, School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou 325027, China
- Yudong Zhang: The Key Laboratory of Adaptive Optics, Chinese Academy of Sciences, Chengdu 610209, China
- Zhong-Lin Lu: Division of Arts and Sciences, New York University Shanghai, Shanghai 200126, China; Center for Neural Science, Department of Psychology, New York University, New York 10003, United States; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Jiawei Zhou: State Key Laboratory of Ophthalmology, Optometry and Vision Science, Eye Hospital, School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou 325027, China
15. Tan Q, Sasaki Y, Watanabe T. Geometric-relationship specific transfer in visual perceptual learning. bioRxiv 2023:2023.12.07.570648 [Preprint]. PMID: 38106111. PMCID: PMC10723461. DOI: 10.1101/2023.12.07.570648.
Abstract
Visual perceptual learning (VPL) is defined as long-term improvement on a visual task as a result of visual experience. In many cases, the improvement is highly specific to the location where the target is presented, a property referred to as location specificity. In the current study, we investigated how the geometric relationship between the trained location and an untrained location affects transfer of VPL. We found that significant transfer occurs either diagonally or along a line passing through the fixation point. This indicates that whether location specificity or location transfer occurs depends, at least in part, on the geometric relationship between the trained and untrained locations.
16. Chetverikov A, Jehee JFM. Motion direction is represented as a bimodal probability distribution in the human visual cortex. Nat Commun 2023;14:7634. PMID: 37993430. PMCID: PMC10665457. DOI: 10.1038/s41467-023-43251-w.
Abstract
Humans infer motion direction from noisy sensory signals. We hypothesize that to make these inferences more precise, the visual system computes motion direction not only from velocity but also spatial orientation signals - a 'streak' created by moving objects. We implement this hypothesis in a Bayesian model, which quantifies knowledge with probability distributions, and test its predictions using psychophysics and fMRI. Using a probabilistic pattern-based analysis, we decode probability distributions of motion direction from trial-by-trial activity in the human visual cortex. Corroborating the predictions, the decoded distributions have a bimodal shape, with peaks that predict the direction and magnitude of behavioral errors. Interestingly, we observe similar bimodality in the distribution of the observers' behavioral responses across trials. Together, these results suggest that observers use spatial orientation signals when estimating motion direction. More broadly, our findings indicate that the cortical representation of low-level visual features, such as motion direction, can reflect a combination of several qualitatively distinct signals.
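The central idea, combining a velocity-based likelihood over motion direction with an orientation ("streak") likelihood that cannot distinguish a direction from its opposite, can be sketched with von Mises components; the concentration parameters and decoding details below are illustrative, not the authors' generative model.

```python
import numpy as np

def vonmises(theta, mu, kappa):
    """Unnormalized von Mises density over direction theta (radians)."""
    return np.exp(kappa * np.cos(theta - mu))

theta = np.linspace(-np.pi, np.pi, 720, endpoint=False)
dtheta = theta[1] - theta[0]
true_dir = np.deg2rad(20.0)

# Velocity signal: a broadly tuned likelihood centered on the true direction.
like_velocity = vonmises(theta, true_dir, kappa=1.5)

# Motion streak: an orientation signal that cannot tell a direction from its opposite,
# so its likelihood has two peaks 180 degrees apart.
like_streak = 0.5 * (vonmises(theta, true_dir, kappa=4.0) +
                     vonmises(theta, true_dir + np.pi, kappa=4.0))

# Combine assuming independent noise and normalize to a probability distribution.
posterior = like_velocity * like_streak
posterior /= posterior.sum() * dtheta

# The combined posterior is bimodal: a dominant peak near the true direction and a
# smaller one near the opposite direction, mirroring the decoded distributions.
main = np.rad2deg(theta[np.argmax(posterior)])
away_from_main = np.abs(np.angle(np.exp(1j * (theta - np.deg2rad(main))))) > np.pi / 2
secondary = np.rad2deg(theta[away_from_main][np.argmax(posterior[away_from_main])])
print(f"main peak near {main:.0f} deg, secondary peak near {secondary:.0f} deg")
```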
Affiliation(s)
- Andrey Chetverikov: Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, The Netherlands; Department of Psychosocial Science, Faculty of Psychology, University of Bergen, Bergen, Norway
- Janneke F M Jehee: Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, The Netherlands
17. Shi Y, Zhang J, Lin W, Chung-Fat-Yim A, Yang Q, Li H. The effect of training on sensitivity and stability of double fusion in Panum's limiting case. Atten Percept Psychophys 2023;85:2894-2906. PMID: 37831363. DOI: 10.3758/s13414-023-02795-1.
Abstract
Panum's limiting case is a phenomenon of monocular occlusion in binocular vision. This occurs when one object is occluded by the other object for one eye, but the two objects are both visible for the other eye. Although previous studies have found that vertical gradient of horizontal disparity and cue conflict are two important factors for double fusion, the effect of training on the sensitivity and stability of Panum's limiting case remains unknown. The current study trained 26 participants for 5 days with several of Panum's configurations (Gilliam, Frisby, and Wang series). The latency and duration of double fusion were recorded to examine the effects of training on sensitivity and stability of double fusion in Panum's limiting case. For each level of vertical gradient of horizontal disparity and cue conflict, the latency of double fusion decreased and the duration of double fusion increased with each additional training session. The results showed that vertical gradient of horizontal disparity and cue conflict interacted, and the duration of high cue conflict was significantly shorter than that of medium and low cue conflict for each level of vertical gradient of horizontal disparity. The findings suggest that there is an effect of training for vertical gradient of horizontal disparity and cue conflict in Panum's limiting case, and that the three factors jointly affect the sensitivity and stability of double fusion.
Affiliation(s)
- Yuyu Shi: School of Psychology, Zhejiang Normal University, Jinhua, 321004, China; Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University, Jinhua, China
- Jiaxi Zhang: School of Psychology, Zhejiang Normal University, Jinhua, 321004, China; Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University, Jinhua, China
- Wenmin Lin: School of English Studies, Shanghai International Studies University, Shanghai, China
- Ashley Chung-Fat-Yim: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Qihang Yang: College of Foreign Language, Zhejiang Normal University, Jinhua, China
- Huayun Li: School of Psychology, Zhejiang Normal University, Jinhua, 321004, China; Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University, Jinhua, China
18. Grzeczkowski L, Shi Z, Rolfs M, Deubel H. Perceptual learning across saccades: Feature but not location specific. Proc Natl Acad Sci U S A 2023;120:e2303763120. PMID: 37844238. PMCID: PMC10614914. DOI: 10.1073/pnas.2303763120.
Abstract
Perceptual learning is the ability to enhance perception through practice. The hallmark of perceptual learning is its specificity for the trained location and stimulus features, such as orientation. For example, training in discriminating a grating's orientation improves performance only at the trained location but not at other, untrained locations. Perceptual learning has mostly been studied using stimuli presented briefly while observers maintained gaze at one location. However, in everyday life, stimuli are actively explored through eye movements, which results in successive projections of the same stimulus at different retinal locations. Here, we studied perceptual learning of orientation discrimination across saccades. Observers were trained to saccade to a peripheral grating and to discriminate an orientation change that occurred during the saccade. The results showed that training led to transsaccadic perceptual learning (TPL) and performance improvements that did not generalize to an untrained orientation. Remarkably, however, for the trained orientation, we found a complete transfer of TPL to the untrained location in the opposite hemifield, suggesting high flexibility of reference frame encoding in TPL. Three control experiments in which participants were trained without saccades did not show such transfer, confirming that the location transfer was contingent upon eye movements. Moreover, performance at the trained location, but not at the untrained location, was also improved in an untrained fixation task. Our results suggest that TPL has both a location-specific component that occurs before the eye movement and a saccade-related component that involves location generalization.
Affiliation(s)
- Lukasz Grzeczkowski: Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität, 80802 Munich, Germany; Department Psychologie, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Zhuanghua Shi: Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität, 80802 Munich, Germany
- Martin Rolfs: Department Psychologie, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Heiner Deubel: Allgemeine und Experimentelle Psychologie, Department Psychologie, Ludwig-Maximilians-Universität, 80802 Munich, Germany
19. Melnik N, Pollmann S. Efficient versus inefficient visual search as training for saccadic re-referencing to an extrafoveal location. J Vis 2023;23:13. PMID: 37733339. PMCID: PMC10517419. DOI: 10.1167/jov.23.10.13.
Abstract
Central vision loss is one of the leading causes of visual impairment in the elderly and its frequency is increasing. Without formal training, patients adopt an unaffected region of the retina as a new fixation location, a preferred retinal locus (PRL). However, learning to use the PRL as a reference location for saccades, that is, saccadic re-referencing, is protracted and time-consuming. Recent studies showed that training with visual search tasks can expedite this process. However, visual search can be driven by salient external features - leading to efficient search, or by internal goals, usually leading to inefficient, attention-demanding search. We compared saccadic re-referencing training in the presence of a simulated central scotoma with either an efficient or an inefficient visual search task. Participants had to respond by fixating the target with an experimenter-defined retinal location in the lower visual field. We observed that comparable relative training gains were obtained in both tasks for a number of behavioral parameters, with higher training gains for the trained task, compared to the untrained task. The transfer to the untrained task was only observed for some parameters. Our findings thus confirm and extend previous research showing comparable efficiency for exogenously and endogenously driven visual search tasks for saccadic re-referencing training. Our results also show that transfer of training gains to related tasks may be limited and needs to be tested for saccadic re-referencing-training paradigms to assess its suitability as a training tool for patients.
Affiliation(s)
- Natalia Melnik: Department of Psychology, Otto-von-Guericke University, Magdeburg, Germany
- Stefan Pollmann: Department of Psychology, Otto-von-Guericke University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke University, Magdeburg, Germany
20. Huang Z, Niu Z, Li S. Reactivation-induced memory integration prevents proactive interference in perceptual learning. J Vis 2023;23:1. PMID: 37129883. PMCID: PMC10158987. DOI: 10.1167/jov.23.5.1.
Abstract
We acquire perceptual skills through experience to adapt ourselves to a changing environment. Achieving effective skill acquisition is a main goal of perceptual learning research. Given the often-observed specificity of the learning effect, multiple perceptual learning episodes with shared parameters could serve to improve the generalization of the learning effect. However, interference between the overlapping memory traces of different learnings may impede this effort. Here, we trained human participants on an orientation discrimination task. We observed a proactive interference effect in which the first training blocked the second training at its untrained location. This effect was more pronounced than the well-known location specificity in perceptual learning. We introduced a short reactivation of the first training before the second training and successfully eliminated the proactive interference when the second training fell inside the reconsolidation time window of the reactivated first training. Interestingly, we found that practicing an irrelevant task at the location of the second training immediately after the reactivation of the first training could also restore the effect of the second training, albeit to a smaller degree, even if the second training was conducted outside of the reconsolidation window. We proposed a two-level mechanism of reactivation-induced memory integration to account for these results. The reactivation-based procedure could integrate either the previously trained and untrained locations or the two trainings at these locations, depending on the representations activated during the reconsolidation process. These findings provide new insight into the roles of long-term memory mechanisms in perceptual learning.
Affiliation(s)
- Zhibang Huang: School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
- Zhimei Niu: Department of Psychology, University of Texas at Austin, Austin, TX, USA
- Sheng Li: School of Psychological and Cognitive Sciences, Peking University, Beijing, China; Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
21. Marris JE, Perfors A, Mitchell D, Wang W, McCusker MW, Lovell TJH, Gibson RN, Gaillard F, Howe PDL. Evaluating the effectiveness of different perceptual training methods in a difficult visual discrimination task with ultrasound images. Cogn Res Princ Implic 2023;8:19. PMID: 36940041. PMCID: PMC10027970. DOI: 10.1186/s41235-023-00467-0.
Abstract
Recent work has shown that perceptual training can be used to improve the performance of novices in real-world visual classification tasks with medical images, but it is unclear which perceptual training methods are the most effective, especially for difficult medical image discrimination tasks. We investigated several different perceptual training methods with medically naïve participants in a difficult radiology task: identifying the degree of hepatic steatosis (fatty infiltration of the liver) in liver ultrasound images. In Experiment 1a (N = 90), participants completed four sessions of standard perceptual training, and participants in Experiment 1b (N = 71) completed four sessions of comparison training. There was a significant post-training improvement for both types of training, although performance was better when the trained task aligned with the task participants were tested on. In both experiments, performance initially improved rapidly, with learning becoming more gradual after the first training session. In Experiment 2 (N = 200), we explored the hypothesis that performance could be improved by combining perceptual training with explicit annotated feedback presented in a stepwise fashion. Although participants improved in all training conditions, performance was similar regardless of whether participants were given annotations, underwent training in a stepwise fashion, received both, or neither. Overall, we found that perceptual training can rapidly improve performance on a difficult radiology task, albeit not to a level comparable with expert performance, and that similar levels of performance were achieved across the perceptual training paradigms we compared.
Collapse
Affiliation(s)
- Jessica E Marris
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville, Australia.
| | - Andrew Perfors
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville, Australia
| | - David Mitchell
- Radiology, Sligo University Hospital, Sligo, Ireland
- Department of Radiology, The Royal Melbourne Hospital, Parkville, Australia
| | - Wayland Wang
- Department of Radiology, The Royal Melbourne Hospital, Parkville, Australia
| | - Mark W McCusker
- Department of Radiology, The Royal Melbourne Hospital, Parkville, Australia
- Department of Radiology, University of Melbourne, Parkville, Australia
| | | | - Robert N Gibson
- Department of Radiology, The Royal Melbourne Hospital, Parkville, Australia
- Department of Radiology, University of Melbourne, Parkville, Australia
| | - Frank Gaillard
- Department of Radiology, The Royal Melbourne Hospital, Parkville, Australia
- Department of Radiology, University of Melbourne, Parkville, Australia
| | - Piers D L Howe
- Melbourne School of Psychological Sciences, University of Melbourne, Parkville, Australia
| |
Collapse
|
22
|
Bang JW, Hamilton-Fletcher G, Chan KC. Visual Plasticity in Adulthood: Perspectives from Hebbian and Homeostatic Plasticity. Neuroscientist 2023; 29:117-138. [PMID: 34382456 PMCID: PMC9356772 DOI: 10.1177/10738584211037619] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
The visual system retains profound plastic potential in adulthood. In the current review, we summarize the evidence of preserved plasticity in the adult visual system during visual perceptual learning as well as both monocular and binocular visual deprivation. In each condition, we discuss how such evidence reflects two major cellular mechanisms of plasticity: Hebbian and homeostatic processes. We focus on how these two mechanisms work together to shape plasticity in the visual system. In addition, we discuss how these two mechanisms could be further revealed in future studies investigating cross-modal plasticity in the visual system.
Collapse
Affiliation(s)
- Ji Won Bang
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
| | - Giles Hamilton-Fletcher
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
| | - Kevin C. Chan
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Neuroscience Institute, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY, USA
- Center for Neural Science, College of Arts and Science, New York University, New York, NY, USA
| |
Collapse
|
23
|
Clayton R, Siderov J. Differences in stereoacuity between crossed and uncrossed disparities reduce with practice. Ophthalmic Physiol Opt 2022; 42:1353-1362. [PMID: 35997266 PMCID: PMC9804356 DOI: 10.1111/opo.13040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 07/22/2022] [Accepted: 07/22/2022] [Indexed: 01/05/2023]
Abstract
INTRODUCTION Stereoacuity, like many forms of hyperacuity, improves with practice. We investigated the effects of repeated measurements over multiple visits on stereoacuity using two commonly utilised clinical stereotests, for both crossed and uncrossed disparity stimuli. METHODS Participants were adults with normal binocular vision (n = 17) aged between 18 and 50 years. Stereoacuity was measured using the Randot and TNO stereotests on five separate occasions over a six-week period. We utilised both crossed and uncrossed stimuli to separately evaluate stereoacuity in both disparity directions. A subset of the subject group also completed a further five visits over an additional six-week period. Threshold stereoacuity was determined as the lowest disparity level at which the subjects could correctly identify both the position and disparity direction (crossed or uncrossed) of the stimulus. Data were analysed by repeated measures analysis of variance. RESULTS Stereoacuity for crossed and uncrossed stimuli improved significantly across the first five visits (F(1,21) = 4.24, p = 0.05). The main effect of disparity direction on stereoacuity was not significant (F(1) = 0.02, p = 0.91). However, a significant interaction between disparity direction and stereotest was identified (F(1) = 7.92, p = 0.01). CONCLUSIONS Stereoacuity measured with both the TNO and Randot stereotests improved significantly over the course of five repetitions. Although differences between crossed and uncrossed stereoacuity were evident, they depended on the stereotest used and diminished or disappeared after repeated measurements. A single measure of stereoacuity is inadequate for properly evaluating adult stereopsis clinically.
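To make the analysis concrete, the following is a minimal Python sketch of a repeated-measures ANOVA of the kind described here, using statsmodels. It is illustrative only: the column names, the two-visit simplification, and the threshold values are assumptions, not the authors' data or code.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: 3 subjects x 2 visits x 2 disparity directions,
# simplified from the 5-visit design described in the abstract (values made up)
thresholds = {  # (subject, direction): [log threshold at visit 1, at visit 5]
    (1, "crossed"): [1.9, 1.6], (1, "uncrossed"): [2.0, 1.8],
    (2, "crossed"): [1.8, 1.4], (2, "uncrossed"): [1.9, 1.6],
    (3, "crossed"): [2.0, 1.7], (3, "uncrossed"): [2.1, 1.9],
}
rows = []
for (subj, direction), (v1, v5) in thresholds.items():
    rows.append({"subject": subj, "direction": direction, "visit": 1, "log_thresh": v1})
    rows.append({"subject": subj, "direction": direction, "visit": 5, "log_thresh": v5})
df = pd.DataFrame(rows)

# Within-subject factors: visit and disparity direction
res = AnovaRM(df, depvar="log_thresh", subject="subject",
              within=["visit", "direction"]).fit()
print(res)
```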
Collapse
Affiliation(s)
- Robin Clayton
- Centre for Vision across the Life Span, Department of Optometry and Vision SciencesUniversity of HuddersfieldHuddersfieldUK
| | - John Siderov
- Centre for Vision across the Life Span, Department of Optometry and Vision SciencesUniversity of HuddersfieldHuddersfieldUK
| |
Collapse
|
24
|
Lu ZL, Dosher BA. Current directions in visual perceptual learning. NATURE REVIEWS PSYCHOLOGY 2022; 1:654-668. [PMID: 37274562 PMCID: PMC10237053 DOI: 10.1038/s44159-022-00107-2] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/16/2022] [Indexed: 06/06/2023]
Abstract
The visual expertise of adult humans is jointly determined by evolution, visual development, and visual perceptual learning. Perceptual learning refers to performance improvements in perceptual tasks after practice or training in the task. It occurs in almost all visual tasks, ranging from simple feature detection to complex scene analysis. In this Review, we focus on key behavioral aspects of visual perceptual learning. We begin by describing visual perceptual learning tasks and manipulations that influence the magnitude of learning, and then discuss specificity of learning. Next, we present theories and computational models of learning and specificity. We then review applications of visual perceptual learning in visual rehabilitation. Finally, we summarize the general principles of visual perceptual learning, discuss the tension between plasticity and stability, and conclude with new research directions.
Collapse
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
- Institute of Brain and Cognitive Science, New York University - East China Normal University, Shanghai, China
| | | |
Collapse
|
25
|
Visual neuroscience: A shrewd look at perceptual learning. Curr Biol 2022; 32:R839-R841. [PMID: 35944484 DOI: 10.1016/j.cub.2022.07.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
A new study provides insight into the neuronal mechanisms that underlie visual learning in the tree shrew, revealing how improved coding for trained stimuli in visual cortex can negatively affect the perception of other stimuli.
Collapse
|
26
|
Abstract
Vision and learning have long been considered to be two areas of research linked only distantly. However, recent developments in vision research have changed the conceptual definition of vision from a signal-evaluating process to a goal-oriented interpreting process, and this shift binds learning, together with the resulting internal representations, intimately to vision. In this review, we consider various types of learning (perceptual, statistical, and rule/abstract) associated with vision in the past decades and argue that they represent differently specialized versions of the fundamental learning process, which must be captured in its entirety when applied to complex visual processes. We show why the generalized version of statistical learning can provide the appropriate setup for such a unified treatment of learning in vision, what computational framework best accommodates this kind of statistical learning, and what plausible neural scheme could feasibly implement this framework. Finally, we list the challenges that the field of statistical learning faces in fulfilling the promise of being the right vehicle for advancing our understanding of vision in its entirety.
Collapse
Affiliation(s)
- József Fiser
- Department of Cognitive Science, Center for Cognitive Computation, Central European University, Vienna 1100, Austria;
| | - Gábor Lengyel
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
| |
Collapse
|
27
|
Perspectives on the Combined Use of Electric Brain Stimulation and Perceptual Learning in Vision. Vision (Basel) 2022; 6:vision6020033. [PMID: 35737420 PMCID: PMC9227313 DOI: 10.3390/vision6020033] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Revised: 06/07/2022] [Accepted: 06/08/2022] [Indexed: 11/29/2022] Open
Abstract
A growing body of literature offers exciting perspectives on the use of brain stimulation to boost training-related perceptual improvements in humans. Recent studies suggest that combining visual perceptual learning (VPL) training with concomitant transcranial electric stimulation (tES) leads to larger learning-rate and generalization effects than either technique used individually. Both VPL and tES have been used to induce neural plasticity in brain regions involved in visual perception, leading to long-lasting visual function improvements. Although both techniques are more than a century old, only recently have they been combined in the same paradigm to further improve visual performance in humans. Nonetheless, promising evidence in healthy participants and in clinical populations suggests that the best may still be to come for the combined use of VPL and tES. In the first part of this perspective piece, we briefly discuss the history, the characteristics, the results and the possible mechanisms behind each technique and their combined effect. In the second part, we discuss relevant aspects concerning the use of these techniques and propose a perspective concerning the combined use of electric brain stimulation and perceptual learning in the visual system, closing with some open questions on the topic.
Collapse
|
28
|
Severe distortion in the representation of foveal visual image locations in short-term memory. Proc Natl Acad Sci U S A 2022; 119:e2121860119. [PMID: 35675430 PMCID: PMC9214507 DOI: 10.1073/pnas.2121860119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The foveal visual image region provides the human visual system with the highest acuity. However, it is unclear whether such a high-fidelity representational advantage is maintained when foveal image locations are committed to short-term memory. Here, we describe a paradoxically large distortion in foveal target location recall by humans. We briefly presented small but high-contrast points of light at eccentricities ranging from 0.1 to 12°, while subjects maintained their line of sight on a stable target. After a brief memory period, the subjects indicated the remembered target locations via computer-controlled cursors. The largest localization errors, in terms of both directional deviations and amplitude percentage overshoots or undershoots, occurred for the most foveal targets, and such distortions were still present, albeit with qualitatively different patterns, when subjects shifted their gaze to indicate the remembered target locations. Foveal visual images are severely distorted in short-term memory.
Collapse
|
29
|
Klorfeld-Auslender S, Paz Y, Shinder I, Rosenblatt J, Dinstein I, Censor N. A distinct route for efficient learning and generalization in autism. Curr Biol 2022; 32:3203-3209.e3. [PMID: 35700734 DOI: 10.1016/j.cub.2022.05.059] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 04/06/2022] [Accepted: 05/26/2022] [Indexed: 10/18/2022]
Abstract
Visual skill learning is the process of improving responses to surrounding visual stimuli.1 For individuals with autism spectrum disorders (ASDs), efficient skill learning may be especially valuable due to potential difficulties with sensory processing2 and challenges in adjusting flexibly to changing environments.3,4 Standard skill learning protocols require extensive practice with multiple stimulus repetitions,5-7 which may be difficult for individuals with ASD and create abnormally specific learning with poor ability to generalize.4 Motivated by findings indicating that brief memory reactivations can facilitate skill learning,8,9 we hypothesized that reactivation learning with few stimulus repetitions will enable efficient learning in individuals with ASD, similar to their learning with standard extensive practice protocols used in previous studies.4,10,11 We further hypothesized that in contrast to experience-dependent plasticity often resulting in specificity, reactivation-induced learning would enable generalization patterns in ASD. To test our hypotheses, high-functioning adults with ASD underwent brief reactivations of an encoded visual learning task, consisting of only 5 trials each instead of hundreds. Remarkably, individuals with ASD improved their visual discrimination ability in the task substantially, demonstrating successful learning. Furthermore, individuals with ASD generalized learning to an untrained visual location, indicating a unique benefit of reactivation learning mechanisms for ASD individuals. Finally, an additional experiment showed that without memory reactivations ASD subjects did not demonstrate efficient learning and generalization patterns. Taken together, the results provide proof-of-concept evidence supporting a distinct route for efficient visual learning and generalization in ASD, which may be beneficial for skill learning in other sensory and motor domains.
Collapse
Affiliation(s)
- Shira Klorfeld-Auslender
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69978, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69978, Israel
| | - Yaniv Paz
- Cognitive and Brain Science Department, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel; Zlotowsky Center for Neuroscience, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
| | - Ilana Shinder
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69978, Israel
| | - Jonathan Rosenblatt
- Zlotowsky Center for Neuroscience, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel; Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
| | - Ilan Dinstein
- Cognitive and Brain Science Department, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel; Department of Psychology, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel; Azrieli National Center for Autism and Neurodevelopment Research, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
| | - Nitzan Censor
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69978, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69978, Israel.
| |
Collapse
|
30
|
Donato R, Pavan A, Cavallin G, Ballan L, Betteto L, Nucci M, Campana G. Mechanisms Underlying Directional Motion Processing and Form-Motion Integration Assessed with Visual Perceptual Learning. Vision (Basel) 2022; 6:vision6020029. [PMID: 35737415 PMCID: PMC9229663 DOI: 10.3390/vision6020029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 05/17/2022] [Accepted: 05/27/2022] [Indexed: 11/18/2022] Open
Abstract
Dynamic Glass patterns (GPs) are visual stimuli commonly employed to study form–motion interactions. There is brain imaging evidence that non-directional motion induced by dynamic GPs and directional motion induced by random dot kinematograms (RDKs) depend on the activity of the human motion complex (hMT+). However, whether dynamic GPs and RDKs rely on the same processing mechanisms is still a matter of dispute. The current study uses a visual perceptual learning (VPL) paradigm to try to answer this question. Identical pre- and post-tests were given to two groups of participants, who had to discriminate random/noisy patterns from coherent form (dynamic GPs) and motion (RDKs). Subsequently, one group was trained on dynamic translational GPs, whereas the other group was trained on RDKs. On the one hand, generalization of learning to the non-trained stimulus would indicate that the same mechanisms are involved in the processing of both dynamic GPs and RDKs. On the other hand, learning specificity would indicate that the two stimuli are likely to be processed by separate mechanisms, possibly within the same cortical network. The results showed that VPL is specific to the trained stimulus, suggesting that directional and non-directional motion may depend on different neural mechanisms.
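A common way to express such specificity results numerically is to compute the percent improvement for the trained and untrained stimuli and derive a specificity (or transfer) index. The sketch below is illustrative only and is not the authors' analysis; the threshold values are hypothetical.

```python
# Illustrative sketch: quantifying learning and its transfer from pre/post thresholds.
def percent_improvement(pre, post):
    """Percent reduction in threshold from pre- to post-test."""
    return 100.0 * (pre - post) / pre

def specificity_index(impr_trained, impr_untrained):
    """1 = learning fully specific to the trained stimulus, 0 = complete transfer."""
    return 1.0 - impr_untrained / impr_trained

# Hypothetical coherence thresholds for one observer trained on dynamic GPs
impr_gp  = percent_improvement(pre=0.60, post=0.40)   # trained stimulus (GPs)
impr_rdk = percent_improvement(pre=0.55, post=0.52)   # untrained stimulus (RDKs)

print(f"GP improvement:    {impr_gp:.1f}%")
print(f"RDK improvement:   {impr_rdk:.1f}%")
print(f"Specificity index: {specificity_index(impr_gp, impr_rdk):.2f}")
```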
Collapse
Affiliation(s)
- Rita Donato
- Dipartimento di Psicologia Generale, University of Padova, Via Venezia 8, 35131 Padova, Italy; (L.B.); (M.N.); (G.C.)
- Human Inspired Technology Research Centre, University of Padova, Via Luzzati 4, 35121 Padova, Italy;
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Colégio de Jesus, Rua Inácio Duarte 65, 3000-481 Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Rua Colégio Novo, 3000-115 Coimbra, Portugal
- Correspondence: (R.D.); (A.P.)
| | - Andrea Pavan
- Dipartimento di Psicologia, University of Bologna, Viale Berti Pichat, 5, 40127 Bologna, Italy
- Correspondence: (R.D.); (A.P.)
| | - Giovanni Cavallin
- Dipartimento di Matematica, University of Padova, Via Trieste 63, 35121 Padova, Italy;
| | - Lamberto Ballan
- Human Inspired Technology Research Centre, University of Padova, Via Luzzati 4, 35121 Padova, Italy;
- Dipartimento di Matematica, University of Padova, Via Trieste 63, 35121 Padova, Italy;
| | - Luca Betteto
- Dipartimento di Psicologia Generale, University of Padova, Via Venezia 8, 35131 Padova, Italy; (L.B.); (M.N.); (G.C.)
| | - Massimo Nucci
- Dipartimento di Psicologia Generale, University of Padova, Via Venezia 8, 35131 Padova, Italy; (L.B.); (M.N.); (G.C.)
- Human Inspired Technology Research Centre, University of Padova, Via Luzzati 4, 35121 Padova, Italy;
| | - Gianluca Campana
- Dipartimento di Psicologia Generale, University of Padova, Via Venezia 8, 35131 Padova, Italy; (L.B.); (M.N.); (G.C.)
- Human Inspired Technology Research Centre, University of Padova, Via Luzzati 4, 35121 Padova, Italy;
| |
Collapse
|
31
|
Cochrane A, Ruba AL, Lovely A, Kane-Grade FE, Duerst A, Pollak SD. Perceptual learning is robust to manipulations of valence and arousal in childhood and adulthood. PLoS One 2022; 17:e0266258. [PMID: 35439260 PMCID: PMC9017894 DOI: 10.1371/journal.pone.0266258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Accepted: 03/18/2022] [Indexed: 11/18/2022] Open
Abstract
Despite clear links between affective processes in many areas of cognition and perception, the influence of affective valence and arousal on low-level perceptual learning has remained largely unexplored. Such influences could potentially disrupt or enhance learning, with long-term consequences for young learners. The current study manipulated 8- to 11-year-old children's and young adults' mood using video clips (to induce a positive mood) or a psychosocial stressor (to induce a negative mood). Each participant then completed one session of a low-level visual learning task (visual texture paradigm). Using novel computational methods, we did not observe evidence for the modulation of visual perceptual learning by manipulations of emotional arousal or valence in either children or adults. The majority of results supported a model of perceptual learning that is overwhelmingly constrained to the task itself and independent of external factors such as variations in learners' affect.
Collapse
Affiliation(s)
- Aaron Cochrane
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Geneva, Switzerland
| | - Ashley L. Ruba
- Department of Psychology, University of Wisconsin–Madison, Madison, Wisconsin, United States of America
| | - Alyssa Lovely
- Department of Psychology, University of Wisconsin–Madison, Madison, Wisconsin, United States of America
| | - Finola E. Kane-Grade
- Institute of Child Development, University of Minnesota, Minneapolis, Minnesota, United States of America
| | - Abigail Duerst
- Homer Stryker M.D. School of Medicine, Western Michigan University, Kalamazoo, Michigan, United States of America
| | - Seth D. Pollak
- Department of Psychology, University of Wisconsin–Madison, Madison, Wisconsin, United States of America
| |
Collapse
|
32
|
Yang P, Saunders JA, Chen Z. The experience of stereoblindness does not improve use of texture for slant perception. J Vis 2022; 22:3. [PMID: 35412556 PMCID: PMC9012895 DOI: 10.1167/jov.22.5.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Stereopsis is an important depth cue for people with normal vision, but a subset of people suffer from stereoblindness and cannot use binocular disparity as a cue to depth. Does this experience of stereoblindness modulate the use of other depth cues? We investigated this question by comparing perception of 3D slant from texture for stereoblind people and stereo-normal people. Subjects performed slant discrimination and slant estimation tasks using both monocular and binocular stimuli. We found that the two groups had comparable ability to discriminate slant from texture information and showed similar mappings between texture information and slant perception (perception biased toward the frontal plane when texture information indicated low slants). The results suggest that the experience of stereoblindness did not change the use of texture information for slant perception. In addition, we found that stereoblind people benefitted from binocular viewing in the slant estimation task, despite their inability to use binocular disparity information. These findings are generally consistent with the optimal cue combination model of slant perception.
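For readers unfamiliar with the optimal cue combination model referred to here, the standard (maximum-likelihood) formulation averages cues with weights proportional to their reliabilities (inverse variances). The Python sketch below is a generic illustration of that rule, not the authors' model or parameters; all values are hypothetical.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue combination.
def combine_cues(s_texture, var_texture, s_disparity, var_disparity):
    w_t = (1 / var_texture) / (1 / var_texture + 1 / var_disparity)  # texture weight
    w_d = 1.0 - w_t                                                  # disparity weight
    s_hat = w_t * s_texture + w_d * s_disparity                      # combined slant estimate
    var_hat = 1.0 / (1 / var_texture + 1 / var_disparity)            # lower than either cue alone
    return s_hat, var_hat

# Example: texture suggests 30 deg of slant, disparity suggests 40 deg;
# disparity is the more reliable cue, so the estimate is pulled toward it
print(combine_cues(s_texture=30.0, var_texture=16.0,
                   s_disparity=40.0, var_disparity=4.0))
```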
Collapse
Affiliation(s)
- Pin Yang
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China.
| | | | - Zhongting Chen
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Shanghai Changning Mental Health Center, Shanghai, China.
| |
Collapse
|
33
|
Haddara N, Rahnev D. The Impact of Feedback on Perceptual Decision-Making and Metacognition: Reduction in Bias but No Change in Sensitivity. Psychol Sci 2022; 33:259-275. [PMID: 35100069 PMCID: PMC9096460 DOI: 10.1177/09567976211032887] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
Abstract
It is widely believed that feedback improves behavior, but the mechanisms behind this improvement remain unclear. Different theories postulate that feedback has either a direct effect on performance through automatic reinforcement mechanisms or only an indirect effect mediated by a deliberate change in strategy. To adjudicate between these competing accounts, we performed two large experiments on human adults (total N = 518); approximately half the participants received trial-by-trial feedback on a perceptual task, whereas the other half did not receive any feedback. We found that feedback had no effect on either perceptual or metacognitive sensitivity even after 7 days of training. On the other hand, feedback significantly affected participants' response strategies by reducing response bias and improving confidence calibration. These results suggest that the beneficial effects of feedback stem from allowing people to adjust their strategies for performing the task and not from direct reinforcement mechanisms, at least in the domain of perception.
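The sensitivity/bias distinction this abstract turns on is typically computed with standard signal detection theory. The sketch below shows one conventional way to separate d' from the criterion; it is an assumed illustration (including the log-linear correction and the counts), not the authors' analysis code.

```python
# Illustrative sketch: perceptual sensitivity (d') vs. response bias (criterion c).
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)               # sensitivity
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))    # bias
    return d_prime, criterion

# Hypothetical counts before and after feedback training: bias shrinks,
# sensitivity stays roughly constant
print(sdt_measures(hits=80, misses=20, false_alarms=40, correct_rejections=60))
print(sdt_measures(hits=80, misses=20, false_alarms=25, correct_rejections=75))
```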
Collapse
Affiliation(s)
- Nadia Haddara
- Nadia Haddara, Georgia Institute of Technology, School of Psychology
| | | |
Collapse
|
34
|
Yu D. Training peripheral vision to read: Using stimulus exposure and identity priming. Front Neurosci 2022; 16:916447. [PMID: 36090292 PMCID: PMC9451508 DOI: 10.3389/fnins.2022.916447] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Accepted: 07/26/2022] [Indexed: 11/26/2022] Open
Abstract
Reading in the periphery can be improved with perceptual learning. A conventional training paradigm involves repeated practice on a character-based task (e.g., recognizing random letters/words). While the training is effective, the hours of strenuous effort required from the trainees make it difficult to implement the training in low-vision patients. Here, we developed a training paradigm utilizing stimulus exposure and identity priming to minimize training effort and improve training accessibility while maintaining the active engagement of observers through a stimulus visibility task. Twenty-one normally sighted young adults were randomly assigned to three groups: a control group, a with-repetition training group, and a without-repetition training group. All observers received a pre-test and a post-test scheduled 1 week apart. Each test consisted of measurements of reading speed, visual-span profile, the spatial extent of crowding, and isolated-letter profiles at 10° eccentricity in the lower visual field. Training consisted of five daily sessions (a total of 7,150 trials) of viewing trigram stimuli (strings of three letters) with identity priming (prior knowledge of target letter identity). The with-repetition group was given the option to replay each stimulus (an average of 0.4 replays per stimulus). In comparison to the control group, both training groups showed significant improvements in all four performance measures. Stimulus replay did not yield a measurable benefit for learning. Learning transferred to various untrained tasks and conditions, such as the reading task and untrained letter size. Reduction in crowding was the main basis of the training-related improvement in reading. We also found that learning can be partially retained for a minimum of 3 months and that complete retention is attainable with additional monthly training. Our findings suggest that a conventional training task that requires recognizing random letters or words is dispensable for improving peripheral reading. Utilizing stimulus exposure and identity priming accompanied by a stimulus visibility task, our novel training procedure offers effective intervention, simple implementation, capability for remote and self-administration, and an easy translation into low-vision reading rehabilitation.
Collapse
Affiliation(s)
- Deyue Yu
- College of Optometry, The Ohio State University, Columbus, OH, United States
| |
Collapse
|
35
|
Raffin E, Witon A, Salamanca-Giron RF, Huxlin KR, Hummel FC. Functional Segregation within the Dorsal Frontoparietal Network: A Multimodal Dynamic Causal Modeling Study. Cereb Cortex 2021; 32:3187-3205. [PMID: 34864941 DOI: 10.1093/cercor/bhab409] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 10/12/2021] [Accepted: 10/15/2021] [Indexed: 12/27/2022] Open
Abstract
Discrimination and integration of motion direction requires the interplay of multiple brain areas. Theoretical accounts of perception suggest that stimulus-related (i.e., exogenous) and decision-related (i.e., endogenous) factors affect distributed neuronal processing at different levels of the visual hierarchy. To test these predictions, we measured brain activity of healthy participants during a motion discrimination task, using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). We independently modeled the impact of exogenous factors (task demand) and endogenous factors (perceptual decision-making) on the activity of the motion discrimination network and applied Dynamic Causal Modeling (DCM) to both modalities. DCM for event-related potentials (DCM-ERP) revealed that task demand impacted the reciprocal connections between the primary visual cortex (V1) and the middle temporal area (V5). With practice, higher visual areas were increasingly involved, as revealed by DCM-fMRI. Perceptual decision-making modulated higher levels (e.g., V5-to-Frontal Eye Fields, FEF), in a manner predictive of performance. Our data suggest that lower levels of the visual network support early, feature-based selection of responses, especially when learning strategies have not been implemented. In contrast, perceptual decision-making operates at higher levels of the visual hierarchy by integrating sensory information with the internal state of the subject.
Collapse
Affiliation(s)
- Estelle Raffin
- Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, EPFL, Geneva CH-1201, Switzerland; Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, Clinique Romande de Readaptation (CRR), EPFL Valais, Sion CH-1950, Switzerland
| | - Adrien Witon
- Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, EPFL, Geneva CH-1201, Switzerland; Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, Clinique Romande de Readaptation (CRR), EPFL Valais, Sion CH-1950, Switzerland; Health IT, IT Department, Hôpital du Valais, Sion, Switzerland
| | - Roberto F Salamanca-Giron
- Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, EPFL, Geneva CH-1201, Switzerland; Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, Clinique Romande de Readaptation (CRR), EPFL Valais, Sion CH-1950, Switzerland
| | - Krystel R Huxlin
- The Flaum Eye Institute and Center for Visual Science, University of Rochester, Rochester, NY-14642, USA
| | - Friedhelm C Hummel
- Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, EPFL, Geneva CH-1201, Switzerland; Defitech Chair in Clinical Neuroengineering, Center for Neuroprosthetics and Brain Mind Institute, Clinique Romande de Readaptation (CRR), EPFL Valais, Sion CH-1950, Switzerland; Clinical Neuroscience, University of Geneva Medical School, Geneva CH-1205, Switzerland
| |
Collapse
|
36
|
Cochrane A, Green CS. Assessing the functions underlying learning using by-trial and by-participant models: Evidence from two visual perceptual learning paradigms. J Vis 2021; 21:5. [PMID: 34905053 PMCID: PMC8684311 DOI: 10.1167/jov.21.13.5] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Inferred mechanisms of learning, such as those involved in improvements resulting from perceptual training, are reliant on (and reflect) the functional forms that models of learning take. However, previous investigations of the functional forms of perceptual learning have been limited in ways that are incompatible with the known mechanisms of learning. For instance, previous work has overwhelmingly aggregated learning data across learning participants, learning trials, or both. Here we approach the study of the functional form of perceptual learning on the by-person and by-trial levels at which the mechanisms of learning are expected to act. Each participant completed one of two visual perceptual learning tasks over the course of two days, with the first 75% of task performance using a single reference stimulus (i.e., "training") and the last 25% using an orthogonal reference stimulus (to test generalization). Five learning functions, coming from either the exponential or the power family, were fit to each participant's data. The exponential family was uniformly supported by Bayesian Information Criterion (BIC) model comparisons. The simplest exponential function was the best fit to learning on a texture oddball detection task, while a Weibull (augmented exponential) function tended to be the best fit to learning on a dot-motion discrimination task. The support for the exponential family corroborated previous by-person investigations of the functional form of learning, while the novel evidence supporting the Weibull learning model has implications for both the analysis and the mechanistic bases of learning.
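The following sketch illustrates the general approach of fitting competing learning functions to one participant's trial-by-trial data and comparing them by BIC. It is an assumed, simplified example (two of the five candidate functions, synthetic data, Gaussian-noise BIC), not the authors' code.

```python
# Illustrative sketch: by-participant fits of exponential vs. power learning curves.
import numpy as np
from scipy.optimize import curve_fit

def exponential(t, asymptote, amplitude, rate):
    return asymptote - amplitude * np.exp(-rate * t)

def power_law(t, asymptote, amplitude, exponent):
    return asymptote - amplitude * (t + 1.0) ** (-exponent)

def bic(y, y_hat, n_params):
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)  # Gaussian-noise BIC (up to a constant)

rng = np.random.default_rng(0)
trials = np.arange(500)
# Synthetic accuracy data generated from an exponential learning curve plus noise
accuracy = exponential(trials, 0.9, 0.4, 0.01) + rng.normal(0, 0.05, trials.size)

for name, f in [("exponential", exponential), ("power", power_law)]:
    params, _ = curve_fit(f, trials, accuracy, p0=[0.9, 0.4, 0.01], maxfev=10000)
    print(name, "BIC =", round(bic(accuracy, f(trials, *params), 3), 1))
```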
Collapse
Affiliation(s)
- Aaron Cochrane
- Faculty of Psychology and Education Sciences, University of Geneva, Geneva, Switzerland.
| | - C Shawn Green
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA.
| |
Collapse
|
37
|
Perquin MN, Taylor M, Lorusso J, Kolasinski J. Directional biases in whole hand motion perception revealed by mid-air tactile stimulation. Cortex 2021; 142:221-236. [PMID: 34280867 PMCID: PMC8422163 DOI: 10.1016/j.cortex.2021.03.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Revised: 12/31/2020] [Accepted: 03/30/2021] [Indexed: 11/22/2022]
Abstract
Many emerging technologies are attempting to leverage the tactile domain to convey complex spatiotemporal information translated directly from the visual domain, such as shape and motion. Despite the intuitive appeal of touch for communication, we do not know to what extent the hand can substitute for the retina in this way. Here we ask whether the tactile system can be used to perceive complex whole hand motion stimuli, and whether it exhibits the same kind of established perceptual biases as reported in the visual domain. Using ultrasound stimulation, we were able to project complex moving dot percepts onto the palm in mid-air, over 30 cm above an emitter device. We generated dot kinetogram stimuli involving motion in three different directional axes ('Horizontal', 'Vertical', and 'Oblique') on the ventral surface of the hand. Using Bayesian statistics, we found clear evidence that participants were able to discriminate tactile motion direction. Furthermore, there was a marked directional bias in motion perception: participants were both better and more confident at discriminating motion in the vertical and horizontal axes of the hand, compared to those stimuli moving obliquely. This pattern directly mirrors the perceptual biases that have been robustly reported in the visual domain, termed the 'Oblique Effect'. These data demonstrate the existence of biases in motion perception that transcend sensory modality. Furthermore, we extend the Oblique Effect to a whole hand scale, using motion stimuli presented on the broad and relatively low acuity surface of the palm, away from the densely innervated and much studied fingertips. These findings highlight targeted ultrasound stimulation as a versatile method to convey potentially complex spatial and temporal information without the need for a user to wear or touch a device.
Collapse
Affiliation(s)
- Marlou N Perquin
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK; Biopsychology & Cognitive Neuroscience, Faculty of Psychology and Sports Science, Bielefeld University, Germany; Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Germany.
| | - Mason Taylor
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
| | - Jarred Lorusso
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK; School of Biological Sciences, University of Manchester, Manchester, UK
| | - James Kolasinski
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
| |
Collapse
|
38
|
Cretenoud AF, Barakat A, Milliet A, Choung OH, Bertamini M, Constantin C, Herzog MH. How do visual skills relate to action video game performance? J Vis 2021; 21:10. [PMID: 34269794 PMCID: PMC8297421 DOI: 10.1167/jov.21.7.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
It has been claimed that video gamers possess increased perceptual and cognitive skills compared to non-video gamers. Here, we examined to what extent gaming performance in CS:GO (Counter-Strike: Global Offensive) correlates with visual performance. We tested 94 players ranging from beginners to experts with a battery of visual paradigms, such as visual acuity and contrast detection. In addition, we assessed performance in specific gaming skills, such as shooting and tracking, and assessed personality traits. All measures together explained about 70% of the variance of the players' rank. In particular, regression models showed that a few visual abilities, such as visual acuity in the periphery and the susceptibility to the Honeycomb illusion, were strongly associated with the players' rank. Although the causality of the effect remains unknown, our results show that high-rank players perform better in certain visual skills compared to low-rank players.
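The kind of analysis described here, regressing rank on a set of visual measures and reporting the variance explained, can be sketched as follows. This is a generic illustration with synthetic data and assumed predictor names, not the authors' model or results.

```python
# Illustrative sketch: explained variance of player rank from visual measures.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_players = 94
# Hypothetical standardized predictors, e.g. peripheral acuity, contrast
# sensitivity, illusion susceptibility (names are assumptions)
X = rng.normal(size=(n_players, 3))
rank = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=1.0, size=n_players)

model = LinearRegression().fit(X, rank)
print("R^2 =", round(model.score(X, rank), 2))      # proportion of rank variance explained
print("coefficients:", np.round(model.coef_, 2))    # contribution of each visual measure
```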
Collapse
Affiliation(s)
- Aline F Cretenoud
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| | - Arthur Barakat
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Laboratory of Behavioral Genetics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Logitech Europe S.A., Innovation Park EPFL, Lausanne, Switzerland.
| | - Alain Milliet
- Logitech Europe S.A., Innovation Park EPFL, Lausanne, Switzerland.
| | - Oh-Hyeon Choung
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| | - Marco Bertamini
- Department of Psychological Sciences, University of Liverpool, Liverpool, UK; Department of General Psychology, University of Padova, Padova, Italy.
| | | | - Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| |
Collapse
|
39
|
Alexander RG, Mintz RJ, Custodio PJ, Macknik SL, Vaziri A, Venkatakrishnan A, Gindina S, Martinez-Conde S. Gaze mechanisms enabling the detection of faint stars in the night sky. Eur J Neurosci 2021; 54:5357-5367. [PMID: 34160864 DOI: 10.1111/ejn.15335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 05/24/2021] [Accepted: 05/25/2021] [Indexed: 11/26/2022]
Abstract
For millennia, people have used "averted vision" to improve their detection of faint celestial objects, a technique first documented around 325 BCE. Yet, no studies have assessed gaze location during averted vision to determine what pattern best facilitates perception. Here, we characterized averted vision for the first time while recording the eye positions of dark-adapted human participants. We simulated stars of apparent magnitudes 3.3 and 3.5, matching their brightness to Megrez (the dimmest star in the Big Dipper) and Tau Ceti. Participants indicated whether each star was visible from a series of fixation locations, providing a comprehensive map of detection performance in all directions. Contrary to prior predictions, maximum detection was first achieved at ~8° from the star, much closer to the fovea than expected from rod-cone distributions alone. These findings challenge the assumption of optimal detection at the rod density peak and provide the first systematic assessment of an age-old facet of human vision.
Collapse
Affiliation(s)
| | - Ronald J Mintz
- SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | - Paul J Custodio
- SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | | | - Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA; Kavli Neural Systems Institute, The Rockefeller University, New York, NY, USA; Research Institute of Molecular Pathology, Vienna, Austria
| | | | - Sofya Gindina
- SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | | |
Collapse
|
40
|
Hirano M, Kimoto Y, Furuya S. Specialized Somatosensory-Motor Integration Functions in Musicians. Cereb Cortex 2021; 30:1148-1158. [PMID: 31342056 DOI: 10.1093/cercor/bhz154] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2018] [Revised: 06/18/2019] [Accepted: 06/19/2019] [Indexed: 12/15/2022] Open
Abstract
Somatosensory signals play roles in the fine control of dexterous movements through a somatosensory-motor integration mechanism. While skilled individuals are typically characterized by fine-tuned somatosensory functions and dexterous motor skills, it remains unknown whether and in what manner their bridging mechanism, the tactile-motor and proprioceptive-motor integration functions, plastically changes through extensive sensorimotor experiences. Here, we addressed this issue by comparing physiological indices of these functions between pianists and nonmusicians. Both tactile and proprioceptive stimuli to the right index finger inhibited corticospinal excitability measured by a transcranial magnetic stimulation method. However, the tactile and proprioceptive stimuli exerted weaker and stronger inhibitory effects, respectively, on corticospinal excitability in pianists than in nonmusicians. The results of the electroencephalogram measurements revealed no significant group difference in the amplitude of cortical responses to the somatosensory stimuli around the motor and somatosensory cortices, suggesting that the group difference in the inhibitory effects reflects neuroplastic adaptation of the somatosensory-motor integration functions in pianists. Penalized regression analyses further revealed an association between these integration functions and motor performance in the pianists, suggesting that extensive piano practice reorganizes somatosensory-motor integration functions so as to enable fine control of dexterous finger movements during piano performances.
Collapse
Affiliation(s)
- Masato Hirano
- Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan; Sophia University, Tokyo 102-8554, Japan
| | - Yudai Kimoto
- Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan; Sophia University, Tokyo 102-8554, Japan
| | - Shinichi Furuya
- Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan; Sophia University, Tokyo 102-8554, Japan
| |
Collapse
|
41
|
He Q, Gan S. Repetitive measurements prolong the after-effects of transcranial direct current stimulation (tDCS) on crowding. Brain Stimul 2021:S1935-861X(21)00084-X. [PMID: 33901704 DOI: 10.1016/j.brs.2021.04.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 04/19/2021] [Indexed: 10/21/2022] Open
Affiliation(s)
- Qing He
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, 100871, Beijing, China.
| | - Shuoqiu Gan
- Department of Medical Imaging, The First Affiliated Hospital of Xi'an Jiaotong University, 710061, Xi'an, China; The Key Laboratory of Biomedical Information Engineering, Ministry of Education, Department of Biomedical Engineering, School of Life Science and Technology, Xi'an Jiaotong University, 710049, Xi'an, China
| |
Collapse
|
42
|
Accuracy of hand localization is subject-specific and improved without performance feedback. Sci Rep 2020; 10:19188. [PMID: 33154521 PMCID: PMC7645785 DOI: 10.1038/s41598-020-76220-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2020] [Accepted: 10/12/2020] [Indexed: 11/09/2022] Open
Abstract
Accumulating evidence indicates that the spatial error of human hand localization appears to be subject-specific. However, whether the idiosyncratic pattern persists across time with good within-subject consistency has not been adequately examined. Here we measured the hand localization map with a Visual-matching task in multiple sessions over 2 days. Interestingly, we found that participants improved their hand localization accuracy when tested repetitively without performance feedback. Importantly, despite the reduction of average error, the spatial pattern of hand localization errors remained idiosyncratic. Based on individuals' hand localization performance, a standard convolutional neural network classifier could identify participants with good accuracy. Moreover, we did not find supporting evidence that participants' baseline hand localization performance could predict their motor performance in a visual Trajectory-matching task, even though both tasks require accurate mapping of hand position to visual targets in the same workspace. Using a separate experiment, we not only replicated these findings but also ruled out the possibility that performance feedback during a few familiarization trials caused the observed improvement in hand localization. We conclude that the conventional hand localization test itself, even without feedback, can improve hand localization but leaves the idiosyncrasy of the hand localization map unchanged.
Collapse
|
43
|
Alexander RG, Waite S, Macknik SL, Martinez-Conde S. What do radiologists look for? Advances and limitations of perceptual learning in radiologic search. J Vis 2020; 20:17. [PMID: 33057623 PMCID: PMC7571277 DOI: 10.1167/jov.20.10.17] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2019] [Accepted: 09/14/2020] [Indexed: 12/31/2022] Open
Abstract
Supported by guidance from training during residency programs, radiologists learn clinically relevant visual features by viewing thousands of medical images. Yet the precise visual features that expert radiologists use in their clinical practice remain unknown. Identifying such features would allow the development of perceptual learning training methods targeted to the optimization of radiology training and the reduction of medical error. Here we review attempts to bridge current gaps in understanding with a focus on computational saliency models that characterize and predict gaze behavior in radiologists. There have been great strides toward the accurate prediction of relevant medical information within images, thereby facilitating the development of novel computer-aided detection and diagnostic tools. In some cases, computational models have achieved equivalent sensitivity to that of radiologists, suggesting that we may be close to identifying the underlying visual representations that radiologists use. However, because the relevant bottom-up features vary across task context and imaging modalities, it will also be necessary to identify relevant top-down factors before perceptual expertise in radiology can be fully understood. Progress along these dimensions will improve the tools available for educating new generations of radiologists, and aid in the detection of medically relevant information, ultimately improving patient health.
Collapse
Affiliation(s)
- Robert G Alexander
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | - Stephen Waite
- Department of Radiology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | - Stephen L Macknik
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| | - Susana Martinez-Conde
- Department of Ophthalmology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
| |
Collapse
|
44
|
Treviño M. Non-stationary Salience Processing During Perceptual Training in Humans. Neuroscience 2020; 443:59-70. [PMID: 32659341 DOI: 10.1016/j.neuroscience.2020.07.011] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Revised: 06/29/2020] [Accepted: 07/06/2020] [Indexed: 11/30/2022]
Abstract
Performance in sensory tasks improves with practice. Some theories suggest that the generalization of learning depends on task difficulty. Consequently, most studies have focused on measuring learning specificity and perceptual impact after training completes. However, how exactly sustained changes in task difficulty influence the learning curves, and how this affects the efficiency of perceptual discrimination, is not well understood. Here, we adapted a visual task for humans by creating monocular training programs with increasing (SIMinc) and decreasing (SIMdec) stimulus similarities. We found a marked improvement in all participants after 10 days of training, with an almost complete transfer of learning to the untrained eyes. Interestingly, the training paradigms led to drastically different learning curves for the SIMinc and SIMdec groups. The learning curves were best predicted by an associative learning model that allowed stimuli to gain or lose salience depending on how the subjects learned about them. In addition, a non-stationary sequential sampling model that jointly accounts for choice and RT distributions revealed a faster evidence accumulation rate in the SIMinc group relative to the SIMdec group. Altogether, our results illustrate how different learning trajectories influenced attentional salience processing, leading to distinctive stimulus processing efficiencies. This crucial interdependence determines how observers learn to guide their attention towards visual stimuli in search of a decision.
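As a rough illustration of the model class described here (a delta-rule associative update whose learning rate, i.e., salience, changes with how well the stimulus is being learned about), consider the minimal sketch below. All parameters and the specific salience rule are assumptions for illustration only, not the author's fitted model.

```python
# Illustrative sketch: salience-modulated associative learning (delta rule).
def update(V, alpha, outcome, beta=0.3, kappa=0.05):
    delta = outcome - V                               # prediction error
    V_new = V + beta * alpha * delta                  # associative update gated by salience
    # Salience grows when the stimulus predicts the outcome well, shrinks otherwise
    alpha_new = min(1.0, max(0.0, alpha + kappa * (1.0 - 2.0 * abs(delta))))
    return V_new, alpha_new

V, alpha = 0.0, 0.5   # initial associative strength and salience (made up)
for trial in range(30):
    V, alpha = update(V, alpha, outcome=1.0)
print(round(V, 3), round(alpha, 3))   # strength approaches 1, salience recovers as errors shrink
```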
Collapse
Affiliation(s)
- Mario Treviño
- Laboratorio de Plasticidad Cortical y Aprendizaje Perceptual, Instituto de Neurociencias, Universidad de Guadalajara, Guadalajara, Jalisco, Mexico.
| |
Collapse
|
45
|
Nguyen KN, Watanabe T, Andersen GJ. Role of endogenous and exogenous attention in task-relevant visual perceptual learning. PLoS One 2020; 15:e0237912. [PMID: 32857813 PMCID: PMC7454975 DOI: 10.1371/journal.pone.0237912] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2020] [Accepted: 08/05/2020] [Indexed: 11/19/2022] Open
Abstract
The present study examined the role of exogenous and endogenous attention in task-relevant visual perceptual learning (TR-VPL). VPL performance was assessed by examining learning of a trained stimulus feature and transfer of learning to an untrained stimulus feature. To assess the differential role of attention in VPL, two types of attentional cues were manipulated: exogenous and endogenous. To assess the effectiveness of the attentional cue, the two types of cues were further divided into three cue-validity conditions. Participants were trained, on a novel task, to detect the presence of a complex Gabor patch embedded in fixed Gaussian contrast noise while contrast thresholds were varied. The results showed that initial differences were present prior to training, so the magnitude of learning was assessed. Exogenous and endogenous attention were both found to facilitate learning and feature transfer when comparing pre-test and post-test thresholds. However, examination of the training data indicated attentional differences, with endogenous attention yielding consistently lower contrast thresholds than exogenous attention, suggesting a greater impact of training with endogenous attention. We conclude that several factors, including the use of stimuli that resulted in rapid learning, may have contributed to the generalization of learning found in the present study.
Collapse
Affiliation(s)
- Kieu Ngoc Nguyen
- Department of Psychology, University of California, Riverside, Riverside, California, United States of America
| | - Takeo Watanabe
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island, United States of America
| | - George John Andersen
- Department of Psychology, University of California, Riverside, Riverside, California, United States of America
| |
Collapse
|
46
|
Abstract
Previous studies have demonstrated a complex relationship between ensemble perception and outlier detection. We presented two arrays of heterogeneously oriented stimulus bars with different mean orientations and/or a bar with an outlier orientation, asking participants to discriminate the mean orientations or detect the outlier. Perceptual learning was found in every case, with improved accuracy and speeded responses. Testing for improved accuracy through cross-task transfer, we found considerable transfer from training on outlier detection to mean-discrimination performance, and none in the opposite direction. Implicit learning, in terms of increased accuracy, was not found in either direction when participants performed one task while the second task's stimulus features were present. Reaction-time improvement was found to transfer in all cases. This study adds to the already broad knowledge concerning perceptual learning and cross-task transfer of training effects.
Collapse
Affiliation(s)
- Shaul Hochstein
- ELSC Safra Brain Research Center and Life Sciences Institute, Hebrew University, Jerusalem, Israel
| | - Marina Pavlovskaya
- Lowenstein Rehabilitation Hospital and Tel Aviv University, Tel Aviv, Israel
| |
Collapse
|
47
|
Frank SM, Qi A, Ravasio D, Sasaki Y, Rosen EL, Watanabe T. Supervised Learning Occurs in Visual Perceptual Learning of Complex Natural Images. Curr Biol 2020; 30:2995-3000.e3. [PMID: 32502415 DOI: 10.1016/j.cub.2020.05.050] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2020] [Revised: 04/14/2020] [Accepted: 05/14/2020] [Indexed: 01/13/2023]
Abstract
There have been long-standing debates regarding whether supervised or unsupervised learning mechanisms are involved in visual perceptual learning (VPL) [1-14]. However, these debates have been based on the effects of simple feedback about response accuracy in detection or discrimination tasks involving low-level visual features such as orientation [15-22]. Here, we examined whether the content of response feedback plays a critical role in the acquisition and long-term retention of VPL of complex natural images. We trained three groups of human subjects (n = 72 in total) to better detect "grouped microcalcifications" or "architectural distortion" lesions (referred to as calcification and distortion in the following) in mammograms either with no trial-by-trial feedback, partial trial-by-trial feedback (response correctness only), or detailed trial-by-trial feedback (response correctness and target location). Distortion lesions consist of more complex visual structures than calcification lesions [23-26]. We found that partial feedback is necessary for VPL of calcifications, whereas detailed feedback is required for VPL of distortions. Furthermore, detailed feedback during training is necessary for VPL of distortion and calcification lesions to be retained for 6 months. These results show that although supervised learning is heavily involved in VPL of complex natural images, the extent of supervision required for VPL varies across different types of complex natural images. Such differential requirements for VPL to improve the detectability of lesions in mammograms are potentially informative for the professional training of radiologists.
Collapse
Affiliation(s)
- Sebastian M Frank
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA.
| | - Andrea Qi
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA
| | - Daniela Ravasio
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA
| | - Yuka Sasaki
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA
| | - Eric L Rosen
- Stanford University, Department of Radiology, 300 Pasteur Drive, Stanford, CA 94305, USA; University of Colorado Denver, Department of Radiology, 12401 East 17th Avenue, Aurora, CO 80045, USA
| | - Takeo Watanabe
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA.
| |
Collapse
|
48
|
Johnston IA, Ji M, Cochrane A, Demko Z, Robbins JB, Stephenson JW, Green CS. Perceptual Learning of Appendicitis Diagnosis in Radiological Images. J Vis 2020; 20:16. [PMID: 32790849 PMCID: PMC7438669 DOI: 10.1167/jov.20.8.16] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
A sizeable body of work has demonstrated that participants can show substantial increases in performance on perceptual tasks given appropriate practice. This has generated significant interest in using such perceptual learning techniques to improve performance in real-world domains where extracting perceptual information to guide decisions is at a premium. Radiological training is one clear example of such a domain. Here we examine a number of basic science questions related to the use of perceptual learning techniques in the context of a radiology-inspired task. On each trial of this task, participants were presented with a single axial slice from a CT image of the abdomen. They were then asked to indicate whether or not the image was consistent with appendicitis. We first demonstrate that, although the task differs in many ways from standard radiological practice, it nonetheless draws on expert knowledge, as trained radiologists who performed the task showed high (near-ceiling) levels of performance. Then, in a series of four studies, we show that (1) performance on this task improves significantly over a reasonably short period of training (on the scale of a few hours); (2) the learning transfers to previously unseen images and to untrained image orientations; (3) purely correct/incorrect feedback produces weak learning compared to more informative feedback in which the spatial position of the appendix is indicated in each image; and (4) little benefit was seen from purposefully structuring the learning experience by starting with easier images and then moving on to more difficult images (compared to simply presenting all images in random order). The implications of these various findings for the use of perceptual learning techniques as part of radiological training are then discussed.
Collapse
Affiliation(s)
| | - Mohan Ji
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
| | - Aaron Cochrane
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
| | - Zachary Demko
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
| | - Jessica B Robbins
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
| | - Jason W Stephenson
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
| | - C Shawn Green
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
| |
Collapse
|
49
|
Xie XY, Zhao XN, Yu C. Perceptual learning of motion direction discrimination: Location specificity and the uncertain roles of dorsal and ventral areas. Vision Res 2020; 175:51-57. [PMID: 32707416 DOI: 10.1016/j.visres.2020.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 06/08/2020] [Accepted: 06/09/2020] [Indexed: 10/23/2022]
Abstract
One interesting observation in perceptual learning is the asymmetric transfer between stimuli at different external noise levels: learning at zero/low noise can transfer significantly to the same stimulus at high noise, but not vice versa. The mechanisms underlying this asymmetric transfer have been investigated in psychophysical, neurophysiological, brain imaging, and computational modeling studies. One study (PNAS 113 (2016) 5724-5729) reported that rTMS stimulation of dorsal and ventral areas impairs motion direction discrimination of moving-dot stimuli at 40% ("noisy") and 100% (zero-noise) coherence levels, respectively. However, after direction training at 100% coherence, only rTMS stimulation of the ventral cortex is effective, disturbing direction discrimination at both coherence levels. These results were interpreted as learning-induced changes in the functional specializations of visual areas. We have concerns about the behavioral data of this study. First, contrary to the report of highly location-specific motion direction learning, our replication experiment showed substantial learning transfer (e.g., transfer/learning ratio = 81.9% vs. 14.8% at 100% coherence). Second, and more importantly, we found complete transfer of direction learning from 40% to 100% coherence, a critical baseline that is missing in this study. The transfer effect suggests that similar brain mechanisms underlie motion direction processing at the two coherence levels. Therefore, this study's conclusions regarding the roles of dorsal and ventral areas in motion direction processing at the two coherence levels, as well as the effects of perceptual learning, are not supported by proper experimental evidence. It also remains unexplained why distinct impacts of dorsal and ventral rTMS stimulation on motion direction discrimination were observed.
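For readers unfamiliar with the transfer/learning ratio quoted above (81.9% vs. 14.8%), a minimal sketch of one common way to compute it from threshold improvements follows; the threshold values are hypothetical, and the exact definition used by the authors may differ.

```python
# Hypothetical direction-discrimination thresholds (deg); not data from the study.
pre_trained, post_trained = 10.0, 4.0        # trained location
pre_untrained, post_untrained = 10.0, 5.0    # untrained (transfer) location

learning = (pre_trained - post_trained) / pre_trained          # 0.60
transfer = (pre_untrained - post_untrained) / pre_untrained    # 0.50
ratio = transfer / learning                                    # ~0.83

print(f"transfer/learning ratio = {ratio:.1%}")  # -> 83.3%
```

A ratio near 100% indicates that learning transferred almost completely to the untrained condition, whereas a ratio near 0% indicates location-specific learning.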
Collapse
Affiliation(s)
- Xin-Yu Xie
- School of Psychology, IDG/McGovern Institute for Brain Research, and Peking-Tsinghua Center for Life Sciences, Peking University, China
| | - Xing-Nan Zhao
- School of Psychology, IDG/McGovern Institute for Brain Research, and Peking-Tsinghua Center for Life Sciences, Peking University, China
| | - Cong Yu
- School of Psychology, IDG/McGovern Institute for Brain Research, and Peking-Tsinghua Center for Life Sciences, Peking University, China.
| |
Collapse
|
50
|
Haris EM, McGraw PV, Webb BS, Chung STL, Astle AT. The Effect of Perceptual Learning on Face Recognition in Individuals with Central Vision Loss. Invest Ophthalmol Vis Sci 2020; 61:2. [PMID: 32609296 PMCID: PMC7425703 DOI: 10.1167/iovs.61.8.2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
Purpose To examine whether perceptual learning can improve face discrimination and recognition in older adults with central vision loss. Methods Ten participants with age-related macular degeneration (ARMD; mean age, 78 ± 10 years) received 5 days of training on a face discrimination task. We measured the magnitude of improvements (i.e., a reduction in the threshold size at which faces could be discriminated) and whether they generalized to an untrained face recognition task. Measurements of visual acuity, fixation stability, and preferred retinal locus were taken before and after training to contextualize learning-related effects. The performance of the ARMD training group was compared to that of nine untrained age-matched controls (8 with ARMD, 1 with juvenile macular degeneration; mean age, 77 ± 10 years). Results Perceptual learning on the face discrimination task reduced the threshold size for face discrimination in the trained group, with a mean change (SD) of –32.7% (15.9%). The threshold for performance on the face recognition task was also reduced, with a mean change (SD) of –22.4% (2.31%). These changes were independent of changes in visual acuity, fixation stability, or preferred retinal locus. Untrained participants showed no statistically significant reduction in threshold size for face discrimination, with a mean change (SD) of –8.3% (10.1%), or face recognition, with a mean change (SD) of +2.36% (5.12%). Conclusions This study shows that face discrimination and recognition can be reliably improved in ARMD using perceptual learning. The benefits point to considerable perceptual plasticity in the higher-level cortical areas involved in face processing. This novel finding highlights that a key visual difficulty for those with ARMD is readily amenable to rehabilitation.
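The reported improvements are expressed as percent changes in threshold size. A minimal sketch of that computation, using hypothetical pre- and post-training thresholds rather than the study's data, is given below; negative values indicate smaller (i.e., better) thresholds.

```python
def percent_change(pre: float, post: float) -> float:
    """Percent change in threshold size; negative = improvement (smaller threshold)."""
    return (post - pre) / pre * 100.0

# Hypothetical face-discrimination thresholds in arbitrary size units.
pre_threshold, post_threshold = 3.0, 2.0
print(f"{percent_change(pre_threshold, post_threshold):.1f}%")  # -> -33.3%
```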
Collapse
|