1
Lu ZL, Yang S, Dosher BA. Hierarchical Bayesian augmented Hebbian reweighting model of perceptual learning. J Vis 2025; 25:9. PMID: 40238135; PMCID: PMC12011130; DOI: 10.1167/jov.25.4.9.
Abstract
The augmented Hebbian reweighting model (AHRM) has proven effective in modeling the collective performance of observers in perceptual learning studies. In this work, we introduce a novel hierarchical Bayesian version of the AHRM (HB-AHRM), which allows us to model the learning curves of individual participants and the entire population within a unified framework. We compare the performance of HB-AHRM with that of a Bayesian inference procedure, which independently estimates posterior distributions of model parameters for each participant without using a hierarchical structure. To address the substantial computational challenges, we propose a method for approximating the likelihood function in the AHRM through feature engineering and linear regression, increasing the speed of the estimation process by a factor of 20,000. This enhancement enables the HB-AHRM to compute the posterior distributions of hyperparameters and model parameters at the population, subject, and test levels, facilitating statistical inferences across these layers. Although developed in the context of a single experiment, the HB-AHRM and its associated methods are broadly applicable to data from various perceptual learning studies, offering predictions of human performance at both individual and population levels. Furthermore, the approximated likelihood approach may prove useful in fitting other stochastic models that lack analytic solutions.
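The likelihood-approximation idea described above, simulating the stochastic model offline and replacing repeated simulation with a fast regression surrogate, can be sketched generically. The toy learner, the polynomial features, and all names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_p_correct(learning_rate, n_trials=200, n_reps=50):
    # Toy stochastic learner: a weight drifts toward 1 via Hebbian-like
    # updates on correct trials; probability correct is a logistic of it.
    p = np.zeros(n_trials)
    for _ in range(n_reps):
        w = 0.0
        for t in range(n_trials):
            pc = 1.0 / (1.0 + np.exp(-4.0 * (w - 0.5)))
            correct = rng.random() < pc
            w += learning_rate * (1.0 - w) * correct
            p[t] += pc
    return p / n_reps  # Monte Carlo estimate of the learning curve

# 1) Simulate the model on a grid of parameter values (slow, done once).
grid = np.linspace(0.01, 0.2, 20)
sims = np.array([simulate_p_correct(lr) for lr in grid])

# 2) Feature engineering: polynomial features of the parameter.
def features(lr):
    return np.array([1.0, lr, lr**2, lr**3])

X = np.array([features(lr) for lr in grid])

# 3) Linear regression from features to the whole simulated learning curve.
coef, *_ = np.linalg.lstsq(X, sims, rcond=None)

def fast_p_correct(lr):
    # Fast surrogate: evaluate the regression instead of re-simulating.
    return np.clip(features(lr) @ coef, 1e-6, 1 - 1e-6)

def approx_log_likelihood(lr, responses):
    # Bernoulli log likelihood of a 0/1 response sequence under the surrogate.
    p = fast_p_correct(lr)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
```

In a hierarchical Bayesian fit, the surrogate likelihood would be evaluated many thousands of times inside the sampler, which is where a speedup of this kind pays off.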
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Center for Neural Science and Department of Psychology, New York University, New York, USA
- NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- https://orcid.org/0000-0002-7295-727X
- Shanglin Yang
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Cognitive Sciences Department, University of California, Irvine, CA, USA
2
Yeh MS, Li T, Huang J, Liu Z. Comparing conventional and action video game training in visual perceptual learning. Sci Rep 2024; 14:27864. PMID: 39537636; PMCID: PMC11561280; DOI: 10.1038/s41598-024-71987-y.
Abstract
Action video game (AVG) playing has been found to transfer to a variety of laboratory tasks in visual cognition. More recently, it has even been found to transfer to low-level visual psychophysics tasks. This is unexpected, since such low-level tasks have traditionally been found to be largely "immune" to transfer from another task, or even from the same task with a different stimulus attribute, e.g., motion direction. In this study, we set out to directly quantify transfer efficiency from AVG training to motion discrimination. Participants (n = 65) trained for 20 h on either a first-person active shooting video game or a motion direction discrimination task with random dots. They were tested before, midway, and after training with the same motion task and an orientation discrimination task that had been shown to receive transfer from AVG training, but not from motion training. A subsequent control group (n = 18) was recruited to rule out any test-retest effect by taking the same tests with the same time intervals, but without training. We found that improvement in motion discrimination performance was comparable between the AVG training and control groups, and smaller than in the motion discrimination training group. We could not replicate the AVG transfer to orientation discrimination, but this was likely because our participants were practically at chance on this task at all test points. Our study found no evidence, in either accuracy or reaction time, that AVG training transferred to motion discrimination. Overall, our results suggest that AVG training transferred little to lower-level visual skills, refining understanding of the mechanisms by which AVGs may affect vision. Protocol registration: The accepted stage 1 protocol for this study can be found on the Open Science Framework at https://osf.io/zdv9c/?view_only=5b3b0c161dad448d9d1d8b14ce91ab11. The stage 1 protocol for this Registered Report was accepted in principle on 01/12/22.
The protocol, as accepted by the journal, can be found at: https://doi.org/10.17605/OSF.IO/ZDV9C.
Affiliation(s)
- Maggie S Yeh
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, USA
- Tan Li
- Department of Psychology, Hebei Normal University, Shijiazhuang, China
- Jinfeng Huang
- Department of Psychology, Hebei Normal University, Shijiazhuang, China.
- Zili Liu
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, USA
3
Bröker F, Holt LL, Roads BD, Dayan P, Love BC. Demystifying unsupervised learning: how it helps and hurts. Trends Cogn Sci 2024; 28:974-986. PMID: 39353836; DOI: 10.1016/j.tics.2024.09.005.
Abstract
Humans and machines rarely have access to explicit external feedback or supervision, yet manage to learn. Most modern machine learning systems succeed because they benefit from unsupervised data. Humans are also expected to benefit and yet, mysteriously, empirical results are mixed. Does unsupervised learning help humans or not? Here, we argue that the mixed results are not conflicting answers to this question, but reflect that humans self-reinforce their predictions in the absence of supervision, which can help or hurt depending on whether predictions and task align. We use this framework to synthesize empirical results across various domains to clarify when unsupervised learning will help or hurt. This provides new insights into the fundamentals of learning with implications for instruction and lifelong learning.
Affiliation(s)
- Franziska Bröker
- Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Gatsby Computational Neuroscience Unit, University College London, London, UK; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
- Lori L Holt
- Department of Psychology, University of Texas at Austin, Austin, TX, USA
- Brett D Roads
- Department of Experimental Psychology, University College London, London, UK
- Peter Dayan
- Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; University of Tübingen, Tübingen, Germany
- Bradley C Love
- Department of Experimental Psychology, University College London, London, UK
4
Lu ZL, Yang S, Dosher B. Hierarchical Bayesian Augmented Hebbian Reweighting Model of Perceptual Learning. bioRxiv 2024:2024.08.08.606902. PMID: 39149245; PMCID: PMC11326272; DOI: 10.1101/2024.08.08.606902.
Abstract
The Augmented Hebbian Reweighting Model (AHRM) has been effectively utilized to model the collective performance of observers in various perceptual learning studies. In this work, we have introduced a novel hierarchical Bayesian Augmented Hebbian Reweighting Model (HB-AHRM) to simultaneously model the learning curves of individual participants and the entire population within a single framework. We have compared its performance to that of a Bayesian Inference Procedure (BIP), which independently estimates the posterior distributions of model parameters for each individual subject without employing a hierarchical structure. To cope with the substantial computational demands, we developed an approach to approximate the likelihood function in the AHRM with feature engineering and linear regression, increasing the speed of the estimation procedure by 20,000 times. The HB-AHRM has enabled us to compute the joint posterior distribution of hyperparameters and parameters at the population, observer, and test levels, facilitating statistical inferences across these levels. While we have developed this methodology within the context of a single experiment, the HB-AHRM and the associated modeling techniques can be readily applied to analyze data from various perceptual learning experiments and provide predictions of human performance at both the population and individual levels. The likelihood approximation concept introduced in this study may have broader utility in fitting other stochastic models lacking analytic forms.
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, New York, USA; NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- Shanglin Yang
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Barbara Dosher
- Cognitive Sciences Department, University of California, Irvine, CA 92697-5100, USA
5
Tamaki M, Yamada T, Barnes-Diana T, Wang Z, Watanabe T, Sasaki Y. First-night effect reduces the beneficial effects of sleep on visual plasticity and modifies the underlying neurochemical processes. Sci Rep 2024; 14:14388. PMID: 38909129; PMCID: PMC11193735; DOI: 10.1038/s41598-024-64091-8.
Abstract
Individuals experience difficulty falling asleep in a new environment, termed the first night effect (FNE). However, the impact of the FNE on sleep-induced brain plasticity remains unclear. Here, using a within-subject design, we found that the FNE significantly reduces visual plasticity during sleep in young adults. Sleep-onset latency (SOL), an indicator of the FNE, was significantly longer during the first sleep session than the second session, confirming the FNE. We assessed performance gains in visual perceptual learning after sleep and increases in the excitatory-to-inhibitory neurotransmitter (E/I) ratio in early visual areas during sleep using magnetic resonance spectroscopy and polysomnography. These parameters were significantly smaller in sleep with the FNE than in sleep without the FNE; however, these parameters were not correlated with SOL. These results suggest that while the neural mechanisms of the FNE and brain plasticity are independent, sleep disturbances temporarily block the neurochemical process fundamental for brain plasticity.
Affiliation(s)
- Masako Tamaki
- Cognitive Somnology RIKEN Hakubi Research Team, RIKEN Cluster for Pioneering Research, Saitama, 351-0106, Japan
- RIKEN Center for Brain Science, Saitama, 351-0106, Japan
- Takashi Yamada
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer Street, 1821, Providence, RI, 02912, USA
- Tyler Barnes-Diana
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer Street, 1821, Providence, RI, 02912, USA
- Zhiyan Wang
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer Street, 1821, Providence, RI, 02912, USA
- Takeo Watanabe
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer Street, 1821, Providence, RI, 02912, USA
- Yuka Sasaki
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer Street, 1821, Providence, RI, 02912, USA.
6
Tamaki M, Yamada T, Barnes-Diana T, Wang Z, Watanabe T, Sasaki Y. First-night effect reduces the beneficial effects of sleep on visual plasticity and modifies the underlying neurochemical processes. bioRxiv 2024:2024.01.21.576529. PMID: 38328250; PMCID: PMC10849493; DOI: 10.1101/2024.01.21.576529.
Abstract
Individuals experience difficulty falling asleep in a new environment, termed the first night effect (FNE). However, the impact of the FNE on sleep-induced brain plasticity remains unclear. Here, using a within-subject design, we found that the FNE significantly reduces visual plasticity during sleep in young adults. Sleep-onset latency (SOL), an indicator of the FNE, was significantly longer during the first sleep session than the second session, confirming the FNE. We assessed performance gains in visual perceptual learning after sleep and increases in the excitatory-to-inhibitory neurotransmitter (E/I) ratio in early visual areas during sleep using magnetic resonance spectroscopy and polysomnography. These parameters were significantly smaller in sleep with the FNE than in sleep without the FNE; however, these parameters were not correlated with SOL. These results suggest that while the neural mechanisms of the FNE and brain plasticity are independent, sleep disturbances temporarily block the neurochemical process fundamental for brain plasticity.
7
Liu J, Lu ZL, Dosher B. Informational feedback accelerates learning in multi-alternative perceptual judgements of orientation. Vision Res 2023; 213:108318. PMID: 37742454; DOI: 10.1016/j.visres.2023.108318.
Abstract
Experience or training can substantially improve perceptual performance through perceptual learning, and the extent and rate of these improvements may be affected by feedback. In this paper, we first developed a neural network model based on the integrated reweighting theory (Dosher et al., 2013) to account for perceptual learning and performance in n-alternative identification tasks and the dependence of learning on different forms of feedback. We then report an experiment comparing the effectiveness of response feedback (RF) versus accuracy feedback (AF) or no feedback (NF) (full versus partial versus no supervision) in learning a challenging eight-alternative visual orientation identification (8AFC) task. Although learning sometimes occurred in the absence of feedback (NF), RF had a clear advantage over AF or NF in this task. Using hybrid supervision learning rules, a new n-alternative identification integrated reweighting theory (I-IRT) explained both the differences in learning curves given different feedback and the dynamic changes in identification confusion data. This study shows that training with more informational feedback (RF) is more effective, though not necessary, in these challenging n-alternative tasks, a result that has implications for developing training paradigms in realistic tasks.
Affiliation(s)
- Jiajuan Liu
- Cognitive Sciences Department, University of California, Irvine, CA 92697-5100, USA.
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, New York, USA; NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- Barbara Dosher
- Cognitive Sciences Department, University of California, Irvine, CA 92697-5100, USA.
8
Lu ZL, Dosher BA. Current directions in visual perceptual learning. Nat Rev Psychol 2022; 1:654-668. PMID: 37274562; PMCID: PMC10237053; DOI: 10.1038/s44159-022-00107-2.
Abstract
The visual expertise of adult humans is jointly determined by evolution, visual development, and visual perceptual learning. Perceptual learning refers to performance improvements in perceptual tasks after practice or training in the task. It occurs in almost all visual tasks, ranging from simple feature detection to complex scene analysis. In this Review, we focus on key behavioral aspects of visual perceptual learning. We begin by describing visual perceptual learning tasks and manipulations that influence the magnitude of learning, and then discuss specificity of learning. Next, we present theories and computational models of learning and specificity. We then review applications of visual perceptual learning in visual rehabilitation. Finally, we summarize the general principles of visual perceptual learning, discuss the tension between plasticity and stability, and conclude with new research directions.
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
- Institute of Brain and Cognitive Science, New York University - East China Normal University, Shanghai, China
9
Mechanisms of Surround Suppression Effect on the Contrast Sensitivity of V1 Neurons in Cats. Neural Plast 2022; 2022:5677655. PMID: 35299618; PMCID: PMC8923783; DOI: 10.1155/2022/5677655.
Abstract
Surround suppression (SS) is a phenomenon in which a neuron's response to visual stimuli within the classical receptive field (cRF) is suppressed by concurrent stimulation in the surrounding receptive field (sRF) beyond the cRF. Studies show that SS affects the contrast sensitivity of neuronal responses in the primary visual cortex (V1). However, the underlying mechanisms remain unclear. Here, we examined the SS effect on the contrast sensitivity of cats' V1 neurons with different preferred spatial frequencies (SFs) using external noise-masked visual stimuli and perceptual template model (PTM) analysis at the system level. Contrast sensitivity was evaluated as the inverted threshold contrast of neurons in response to circular gratings of different contrasts in the cRF, with or without an annular grating in the sRF. Our results showed that SS significantly reduced the contrast sensitivity of cats' V1 neurons. The SS-induced reduction of contrast sensitivity was not correlated with SS strength but depended on the neuron's preferred SF, with a larger reduction for neurons with low preferred SFs than for those with high preferred SFs. PTM analysis of threshold versus external noise contrast (TvC) functions indicated that SS decreased contrast sensitivity by increasing both the internal additive noise and the impact of external noise for neurons with low preferred SFs, but by increasing only the internal additive noise for neurons with high preferred SFs. Furthermore, the SS effect on the contrast-response function of low- and high-SF neurons also exhibited different mechanisms in contrast gain and response gain. Collectively, these results suggest that the mechanisms of the SS effect on neuronal contrast sensitivity may depend on neuronal populations with different preferred SFs.
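The TvC analysis mentioned above comes from the perceptual template model family. A standard formulation from the broader PTM literature (a sketch, not necessarily the exact variant fitted in this study) relates sensitivity d' to signal contrast c and external noise contrast N_ext, and yields the TvC function by solving for threshold contrast at a criterion d':

```latex
d' = \frac{(\beta c)^{\gamma}}
          {\sqrt{N_{\mathrm{ext}}^{2\gamma}
                 + N_{\mathrm{mul}}^{2}(\beta c)^{2\gamma}
                 + N_{\mathrm{add}}^{2}}},
\qquad
c_{\tau} = \frac{1}{\beta}
           \left[\frac{N_{\mathrm{ext}}^{2\gamma} + N_{\mathrm{add}}^{2}}
                      {1/d'^{2} - N_{\mathrm{mul}}^{2}}\right]^{1/(2\gamma)}
```

Here beta is the template gain, gamma a transducer nonlinearity, N_add the internal additive noise, and N_mul the multiplicative noise. An increase in N_add raises thresholds at all external noise levels, whereas an increased impact of external noise raises thresholds only at high N_ext, which is how the two mechanisms are distinguished from TvC data.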
10
Saionz EL, Busza A, Huxlin KR. Rehabilitation of visual perception in cortical blindness. Handb Clin Neurol 2022; 184:357-373. PMID: 35034749; PMCID: PMC9682408; DOI: 10.1016/b978-0-12-819410-2.00030-8.
Abstract
Blindness is a common sequela after stroke affecting the primary visual cortex, presenting as a contralesional, homonymous, visual field cut. This can occur unilaterally or, less commonly, bilaterally. While it has been widely assumed that after a brief period of spontaneous improvement, vision loss becomes stable and permanent, accumulating data show that visual training can recover some of the vision loss, even long after the stroke. Here, we review the different approaches to rehabilitation employed in adult-onset cortical blindness (CB), focusing on visual restoration methods. Most of this work was conducted in chronic stroke patients, partially restoring visual discrimination and luminance detection. However, to achieve this, patients had to train for extended periods (usually many months), and the vision restored was not entirely normal. Several adjuvants to training such as noninvasive, transcranial brain stimulation, and pharmacology are starting to be investigated for their potential to increase the efficacy of training in CB patients. However, these approaches are still exploratory and require considerably more research before being adopted. Nonetheless, having established that the adult visual system retains the capacity for restorative plasticity, attention recently turned toward the subacute poststroke period. Drawing inspiration from sensorimotor stroke rehabilitation, visual training was recently attempted for the first time in subacute poststroke patients. It improved vision faster, over larger portions of the blind field, and for a larger number of visual discrimination abilities than identical training initiated more than 6 months poststroke (i.e., in the chronic period). In conclusion, evidence now suggests that visual neuroplasticity after occipital stroke can be reliably recruited by a range of visual training approaches. 
In addition, it appears that poststroke visual plasticity is dynamic, with a critical window of opportunity in the early postdamage period to attain more rapid, more extensive recovery of a larger set of visual perceptual abilities.
Affiliation(s)
- Elizabeth L Saionz
- Medical Scientist Training Program, University of Rochester, Rochester, NY, United States
- Ania Busza
- Department of Neurology, University of Rochester, Rochester, NY, United States
- Krystel R Huxlin
- Flaum Eye Institute, University of Rochester, Rochester, NY, United States.
11
Hung SC, Carrasco M. Feature-based attention enables robust, long-lasting location transfer in human perceptual learning. Sci Rep 2021; 11:13914. PMID: 34230522; PMCID: PMC8260789; DOI: 10.1038/s41598-021-93016-y.
Abstract
Visual perceptual learning (VPL) is typically specific to the trained location and feature. However, the degree of specificity depends upon particular training protocols. Manipulating covert spatial attention during training facilitates learning transfer to other locations. Here we investigated whether feature-based attention (FBA), which enhances the representation of particular features throughout the visual field, facilitates VPL transfer, and how long such an effect would last. To do so, we implemented a novel task in which observers discriminated a stimulus orientation relative to two reference angles presented simultaneously before each block. We found that training with FBA enabled remarkable location transfer, reminiscent of its global effect across the visual field, but preserved orientation specificity in VPL. Critically, both the perceptual improvement and location transfer persisted after 1 year. Our results reveal robust, long-lasting benefits induced by FBA in VPL, and have translational implications for improving generalization of training protocols in visual rehabilitation.
Affiliation(s)
- Shao-Chin Hung
- Department of Psychology, New York University, New York, NY, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
12
Abstract
Sensory systems often suppress self-generated sensations in order to discriminate them from those arising in the environment. The suppression of visual sensitivity during rapid eye movements is well established, and although functionally beneficial most of the time, it can limit the performance of certain tasks. Here, we show that with repeated practice, mechanisms that suppress visual signals during eye movements can be modified. People trained to detect brief visual patterns learn to turn off suppression around the expected time of the target. These findings demonstrate an elegant form of plasticity, capable of improving the visibility of behaviorally relevant stimuli without compromising the wider functional benefits of suppression. Perceptual stability is facilitated by a decrease in visual sensitivity during rapid eye movements, called saccadic suppression. While a large body of evidence demonstrates that saccadic programming is plastic, little is known about whether the perceptual consequences of saccades can be modified. Here, we demonstrate that saccadic suppression is attenuated during learning on a standard visual detection-in-noise task, to the point that it is effectively silenced. Across a period of 7 days, 44 participants were trained to detect brief, low-contrast stimuli embedded within dynamic noise, while eye position was tracked. Although instructed to fixate, participants regularly made small fixational saccades. Data were accumulated over a large number of trials, allowing us to assess changes in performance as a function of the temporal proximity of stimuli and saccades. This analysis revealed that improvements in sensitivity over the training period were accompanied by a systematic change in the impact of saccades on performance—robust saccadic suppression on day 1 declined gradually over subsequent days until its magnitude became indistinguishable from zero. 
This silencing of suppression was not explained by learning-related changes in saccade characteristics and generalized to an untrained retinal location and stimulus orientation. Suppression was restored when learned stimulus timing was perturbed, consistent with the operation of a mechanism that temporarily reduces or eliminates saccadic suppression, but only when it is behaviorally advantageous to do so. Our results indicate that learning can circumvent saccadic suppression to improve performance, without compromising its functional benefits in other viewing contexts.
13
Rennie JP, Jones J, Astle DE. Training-dependent transfer within a set of nested tasks. Q J Exp Psychol (Hove) 2021; 74:1327-1343. PMID: 33535924; PMCID: PMC7614448; DOI: 10.1177/1747021821993772.
Abstract
Extended practice on a particular cognitive task can boost the performance of other tasks, even though they themselves have not been practised. This transfer of benefits appears to be specific, occurring most when tasks are very similar to those being trained. But what type of similarity is most important for predicting transfer? This question is addressed with a tightly controlled randomised design, with a relatively large sample (N = 175) and an adaptive control group. We created a hierarchical set of nested assessment tasks. Participants then trained on two of the tasks: one was relatively "low" in the hierarchy requiring just simultaneous judgements of shapes' spikiness, whereas the other was relatively "high" requiring delayed judgements of shapes' spikiness or number of spikes in a switching paradigm. Using the full complement of nested tasks before and after training, we could then test whether and how these "low" and "high" training effects cascade through the hierarchy. For both training groups, relative to the control, whether or not an assessment task shared a single specific feature was the best predictor of transfer patterns. For the low-level training group, the overall proportion of feature overlap also significantly predicted transfer, but the same was not true for the high-level training group. Finally, pre-training between-task correlations were not predictive of the pattern of transfer for either group. Together these findings provide an experimental exploration of the specificity of transfer and establish the nature of task overlap that is crucial for the transfer of performance improvements.
Affiliation(s)
- Joseph P Rennie
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Jonathan Jones
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Duncan E Astle
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
14
Asher JM, Hibbard PB. No effect of feedback, level of processing or stimulus presentation protocol on perceptual learning when easy and difficult trials are interleaved. Vision Res 2020; 176:100-117. DOI: 10.1016/j.visres.2020.07.011.
15
Frank SM, Qi A, Ravasio D, Sasaki Y, Rosen EL, Watanabe T. Supervised Learning Occurs in Visual Perceptual Learning of Complex Natural Images. Curr Biol 2020; 30:2995-3000.e3. PMID: 32502415; DOI: 10.1016/j.cub.2020.05.050.
Abstract
There have been long-standing debates regarding whether supervised or unsupervised learning mechanisms are involved in visual perceptual learning (VPL) [1-14]. However, these debates have been based on the effects of simple feedback only about response accuracy in detection or discrimination tasks of low-level visual features such as orientation [15-22]. Here, we examined whether the content of response feedback plays a critical role for the acquisition and long-term retention of VPL of complex natural images. We trained three groups of human subjects (n = 72 in total) to better detect "grouped microcalcifications" or "architectural distortion" lesions (referred to as calcification and distortion in the following) in mammograms either with no trial-by-trial feedback, partial trial-by-trial feedback (response correctness only), or detailed trial-by-trial feedback (response correctness and target location). Distortion lesions consist of more complex visual structures than calcification lesions [23-26]. We found that partial feedback is necessary for VPL of calcifications, whereas detailed feedback is required for VPL of distortions. Furthermore, detailed feedback during training is necessary for VPL of distortion and calcification lesions to be retained for 6 months. These results show that although supervised learning is heavily involved in VPL of complex natural images, the extent of supervision for VPL varies across different types of complex natural images. Such differential requirements for VPL to improve the detectability of lesions in mammograms are potentially informative for the professional training of radiologists.
Affiliation(s)
- Sebastian M Frank
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA
- Andrea Qi
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA
- Daniela Ravasio
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA
- Yuka Sasaki
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA
- Eric L Rosen
- Stanford University, Department of Radiology, 300 Pasteur Drive, Stanford, CA 94305, USA; University of Colorado Denver, Department of Radiology, 12401 East 17th Avenue, Aurora, CO 80045, USA
- Takeo Watanabe
- Brown University, Department of Cognitive, Linguistic, and Psychological Sciences, 190 Thayer Street, Providence, RI 02912, USA

16
Johnston IA, Ji M, Cochrane A, Demko Z, Robbins JB, Stephenson JW, Green CS. Perceptual Learning of Appendicitis Diagnosis in Radiological Images. J Vis 2020; 20:16. [PMID: 32790849 PMCID: PMC7438669 DOI: 10.1167/jov.20.8.16] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
A sizeable body of work has demonstrated that participants have the capacity to show substantial increases in performance on perceptual tasks given appropriate practice. This has resulted in significant interest in the use of such perceptual learning techniques to positively impact performance in real-world domains where the extraction of perceptual information in the service of guiding decisions is at a premium. Radiological training is one clear example of such a domain. Here we examine a number of basic science questions related to the use of perceptual learning techniques in the context of a radiology-inspired task. On each trial of this task, participants were presented with a single axial slice from a CT image of the abdomen. They were then asked to indicate whether or not the image was consistent with appendicitis. We first demonstrate that, although the task differs in many ways from standard radiological practice, it nonetheless makes use of expert knowledge, as trained radiologists who underwent the task showed high (near ceiling) levels of performance. Then, in a series of four studies we show that (1) performance on this task does improve significantly over a reasonably short period of training (on the scale of a few hours); (2) the learning transfers to previously unseen images and to untrained image orientations; (3) purely correct/incorrect feedback produces weak learning compared to more informative feedback where the spatial position of the appendix is indicated in each image; and (4) little benefit was seen from purposefully structuring the learning experience by starting with easier images and then moving on to more difficult images (as compared to simply presenting all images in a random order). The implications for these various findings with respect to the use of perceptual learning techniques as part of radiological training are then discussed.
Affiliation(s)
- Mohan Ji
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Aaron Cochrane
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Zachary Demko
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Jessica B Robbins
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- Jason W Stephenson
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA
- C Shawn Green
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA

17
Tamaki M, Wang Z, Barnes-Diana T, Guo D, Berard AV, Walsh E, Watanabe T, Sasaki Y. Complementary contributions of non-REM and REM sleep to visual learning. Nat Neurosci 2020; 23:1150-1156. [PMID: 32690968 PMCID: PMC7483793 DOI: 10.1038/s41593-020-0666-y] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2019] [Accepted: 06/11/2020] [Indexed: 02/07/2023]
Abstract
Sleep is beneficial for learning. However, it remains unclear whether learning is facilitated by non-REM (NREM) sleep or by REM sleep, whether it results from plasticity increases or stabilization, and whether facilitation results from learning-specific processing. Here, we trained volunteers on a visual task, and measured the excitatory and inhibitory (E/I) balance in early visual areas during subsequent sleep as an index of plasticity. E/I balance increased during NREM sleep irrespective of whether pre-sleep learning occurred, but it was associated with post-sleep performance gains relative to pre-sleep performance. By contrast, E/I balance decreased during REM sleep but only after pre-sleep training, and the decrease was associated with stabilization of pre-sleep learning. These findings indicate that NREM sleep promotes plasticity, leading to performance gains independent of learning, while REM sleep decreases plasticity to stabilize learning in a learning-specific manner.
Affiliation(s)
- Masako Tamaki
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA; National Institute of Occupational Safety and Health, Kawasaki, Japan
- Zhiyan Wang
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Tyler Barnes-Diana
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- DeeAnn Guo
- Department of Neuroscience, Brown University, Providence, RI, USA
- Aaron V Berard
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Edward Walsh
- Department of Neuroscience, Brown University, Providence, RI, USA
- Takeo Watanabe
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Yuka Sasaki
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA

18
Dosher BA, Liu J, Chu W, Lu ZL. Roving: The causes of interference and re-enabled learning in multi-task visual training. J Vis 2020; 20:9. [PMID: 32543649 PMCID: PMC7416889 DOI: 10.1167/jov.20.6.9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Accepted: 03/10/2020] [Indexed: 11/24/2022] Open
Abstract
People routinely perform multiple visual judgments in the real world, yet intermixing tasks or task variants during training can damage or even prevent learning. This paper explores why. We challenged theories of visual perceptual learning focused on plastic retuning of low-level retinotopic cortical representations by placing different task variants in different retinal locations, and tested theories of perceptual learning through reweighting (changes in readout) by varying task similarity. Discriminating different (but equivalent) and similar orientations in separate retinal locations interfered with learning, whereas training either with identical orientations or sufficiently different ones in different locations released rapid learning. This location crosstalk during learning renders it unlikely that the primary substrate of learning is retuning in early retinotopic visual areas; instead, learning likely involves reweighting from location-independent representations to a decision. We developed an Integrated Reweighting Theory (IRT), which has both V1-like location-specific representations and higher level (V4/IT or higher) location-invariant representations, and learns via reweighting the readout to decision, to predict the order of learning rates in different conditions. This model with suitable parameters successfully fit the behavioral data, as well as some microstructure of learning performance in a new trial-by-trial analysis.
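The reweighting account summarized in this abstract can be illustrated with a toy simulation. This is an illustrative sketch only, not the authors' implementation: the unit counts, tuning template, learning rate, and noise level are all arbitrary assumptions. Two banks of representation units, standing in for the location-specific and location-invariant layers, feed a single decision unit whose readout weights are learned by a feedback-driven update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two banks of representation units (a toy stand-in for the IRT's
# V1-like location-specific and higher-level location-invariant layers).
n_spec, n_inv = 8, 8
template = np.linspace(1.0, 0.2, n_spec)       # assumed tuning of each bank
signal = np.concatenate([template, template])  # combined signal direction

w = np.zeros(n_spec + n_inv)   # readout weights to the decision unit
lr, noise_sd = 0.02, 1.0       # assumed learning rate and internal noise

n_correct = 0
for t in range(500):
    s = rng.choice([-1.0, 1.0])                               # stimulus category
    a = s * signal + rng.normal(0.0, noise_sd, signal.size)   # unit activations
    o = np.tanh(w @ a)                                        # decision activation
    n_correct += (np.sign(o) == s)
    # Feedback-driven reweighting of the readout (delta-style update):
    w += lr * a * (s - o)

print(n_correct / 500)       # proportion correct over training improves from chance
print(float(w @ signal))     # learned weights align with the signal direction
```

Only the readout changes during learning; the representation banks themselves are fixed, which is the sense in which performance improves through reweighting rather than retuning.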
Affiliation(s)
- Barbara Anne Dosher
- Cognitive Science Department, University of California, Irvine, Irvine, CA, USA
- Jiajuan Liu
- Cognitive Science Department, University of California, Irvine, Irvine, CA, USA
- Wilson Chu
- Cognitive Science Department, University of California, Irvine, Irvine, CA, USA
- Department of Psychology, Los Angeles Valley College, Valley Glen, CA, USA
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China; Center for Neural Sciences and Department of Psychology, New York University, New York, NY, USA

19
Saionz EL, Tadin D, Melnick MD, Huxlin KR. Functional preservation and enhanced capacity for visual restoration in subacute occipital stroke. Brain 2020; 143:1857-1872. [PMID: 32428211 PMCID: PMC7296857 DOI: 10.1093/brain/awaa128] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2019] [Revised: 01/30/2020] [Accepted: 03/01/2020] [Indexed: 01/18/2023] Open
Abstract
Stroke damage to the primary visual cortex (V1) causes a loss of vision known as hemianopia or cortically-induced blindness. While perimetric visual field improvements can occur spontaneously in the first few months post-stroke, by 6 months post-stroke, the deficit is considered chronic and permanent. Despite evidence from sensorimotor stroke showing that early injury responses heighten neuroplastic potential, to date, visual rehabilitation research has focused on patients with chronic cortically-induced blindness. Consequently, little is known about the functional properties of the post-stroke visual system in the subacute period, nor do we know if these properties can be harnessed to enhance visual recovery. Here, for the first time, we show that 'conscious' visual discrimination abilities are often preserved inside subacute, perimetrically-defined blind fields, but they disappear by ∼6 months post-stroke. Complementing this discovery, we now show that training initiated subacutely can recover global motion discrimination and integration, as well as luminance detection perimetry, just as it does in chronic cortically-induced blindness. However, subacute recovery was attained six times faster; it also generalized to deeper, untrained regions of the blind field, and to other (untrained) aspects of motion perception, preventing their degradation upon reaching the chronic period. In contrast, untrained subacutes exhibited spontaneous improvements in luminance detection perimetry, but spontaneous recovery of motion discriminations was never observed. Thus, in cortically-induced blindness, the early post-stroke period appears characterized by gradual, rather than sudden, loss of visual processing. Subacute training stops this degradation, and is far more efficient at eliciting recovery than identical training in the chronic period. Finally, spontaneous visual improvements in subacutes were restricted to luminance detection; discrimination abilities only recovered following deliberate training. Our findings suggest that after V1 damage, rather than waiting for vision to stabilize, early training interventions may be key to maximize the system's potential for recovery.
Affiliation(s)
- Elizabeth L Saionz
- Flaum Eye Institute, University of Rochester, Rochester, NY, USA
- Medical Scientist Training Program, University of Rochester, Rochester, NY, USA
- Duje Tadin
- Flaum Eye Institute, University of Rochester, Rochester, NY, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Michael D Melnick
- Flaum Eye Institute, University of Rochester, Rochester, NY, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Krystel R Huxlin
- Flaum Eye Institute, University of Rochester, Rochester, NY, USA
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA

20
Donovan I, Shen A, Tortarolo C, Barbot A, Carrasco M. Exogenous attention facilitates perceptual learning in visual acuity to untrained stimulus locations and features. J Vis 2020; 20:18. [PMID: 32340029 PMCID: PMC7405812 DOI: 10.1167/jov.20.4.18] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Accepted: 01/08/2020] [Indexed: 12/11/2022] Open
Abstract
Visual perceptual learning (VPL) refers to the improvement in performance on a visual task due to practice. A hallmark of VPL is specificity, as improvements are often confined to the trained retinal locations or stimulus features. We have previously found that exogenous (involuntary, stimulus-driven) and endogenous (voluntary, goal-driven) spatial attention can facilitate the transfer of VPL across locations in orientation discrimination tasks mediated by contrast sensitivity. Here, we investigated whether exogenous spatial attention can facilitate such transfer in acuity tasks that have been associated with higher specificity. We trained observers for 3 days (days 2-4) in a Landolt acuity task (Experiment 1) or a Vernier hyperacuity task (Experiment 2), with either exogenous precues (attention group) or neutral precues (neutral group). Importantly, during pre-tests (day 1) and post-tests (day 5), all observers were tested with neutral precues; thus, groups differed only in their attentional allocation during training. For the Landolt acuity task, we found evidence of location transfer in both the neutral and attention groups, suggesting weak location specificity of VPL. For the Vernier hyperacuity task, we found evidence of location and feature specificity in the neutral group, and learning transfer in the attention group, with similar improvement at trained and untrained locations and features. Our results reveal that, when there is specificity in a perceptual acuity task, exogenous spatial attention can overcome that specificity and facilitate learning transfer to both untrained locations and features simultaneously with the same training. Thus, in addition to improving performance, exogenous attention generalizes perceptual learning across locations and features.
Affiliation(s)
- Ian Donovan
- Department of Psychology and Neural Science, New York University, New York, NY, USA
- Angela Shen
- Department of Psychology, New York University, New York, NY, USA
- Antoine Barbot
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA

21
Kang DW, Kim D, Chang LH, Kim YH, Takahashi E, Cain MS, Watanabe T, Sasaki Y. Structural and Functional Connectivity Changes Beyond Visual Cortex in a Later Phase of Visual Perceptual Learning. Sci Rep 2018; 8:5186. [PMID: 29581455 PMCID: PMC5979999 DOI: 10.1038/s41598-018-23487-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2017] [Accepted: 03/13/2018] [Indexed: 11/09/2022] Open
Abstract
The neural mechanisms of visual perceptual learning (VPL) remain unclear. Previously we found that activation in the primary visual cortex (V1) increased in the early encoding phase of training, but returned to baseline levels in the later retention phase. To examine neural changes during the retention phase, we measured structural and functional connectivity changes using MRI. After weeks of training on a texture discrimination task, the fractional anisotropy of the inferior longitudinal fasciculus (ILF), a major tract connecting visual and anterior areas, increased, as did the functional connectivity between V1 and anterior regions mediated by the ILF. These changes were strongly correlated with behavioral performance improvements. These results suggest a two-phase model of VPL in which localized functional changes in V1 in the encoding phase of training are followed by changes in both structural and functional connectivity in ventral visual processing, perhaps leading to the long-term stabilization of VPL.
Affiliation(s)
- Dong-Wha Kang
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
- Dongho Kim
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, 190 Thayer Street - BOX 1821, Providence, RI, 02912, USA
- Li-Hung Chang
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, 190 Thayer Street - BOX 1821, Providence, RI, 02912, USA
- Education Center for Humanities and Social Sciences and Institute of Neuroscience, National Yang-Ming University, No. 155, Sec. 2, Linong St, Taipei City, 112, Taiwan
- Yong-Hwan Kim
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
- Emi Takahashi
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, 1 Autumn st. AU 453, Boston, MA, 02215, USA
- Matthew S Cain
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, 190 Thayer Street - BOX 1821, Providence, RI, 02912, USA
- U.S. Army Natick Soldier Research, Development, and Engineering Center, Natick, MA, 01760, USA
- Takeo Watanabe
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, 190 Thayer Street - BOX 1821, Providence, RI, 02912, USA
- Yuka Sasaki
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, 190 Thayer Street - BOX 1821, Providence, RI, 02912, USA

22
Brefczynski-Lewis JA, Lewis JW. Auditory object perception: A neurobiological model and prospective review. Neuropsychologia 2017; 105:223-242. [PMID: 28467888 PMCID: PMC5662485 DOI: 10.1016/j.neuropsychologia.2017.04.034] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2016] [Revised: 04/27/2017] [Accepted: 04/27/2017] [Indexed: 12/15/2022]
Abstract
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be organized, at least in part, around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
Affiliation(s)
- Julie A Brefczynski-Lewis
- Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA
- James W Lewis
- Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA

23
Abstract
Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
Affiliation(s)
- Barbara Dosher
- Department of Cognitive Sciences, Institute for Mathematical Behavioral Sciences, and Center for the Neurobiology of Learning and Behavior, University of California, Irvine, California 92617
- Zhong-Lin Lu
- Department of Psychology, Center for Cognitive and Brain Sciences, and Center for Cognitive and Behavioral Brain Imaging, The Ohio State University, Columbus, Ohio 43210

24
Grandison A, Sowden PT, Drivonikou VG, Notman LA, Alexander I, Davies IRL. Chromatic Perceptual Learning but No Category Effects without Linguistic Input. Front Psychol 2016; 7:731. [PMID: 27252669 PMCID: PMC4879779 DOI: 10.3389/fpsyg.2016.00731] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2015] [Accepted: 05/02/2016] [Indexed: 11/13/2022] Open
Abstract
Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is stimulus specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.
Affiliation(s)
- Iona Alexander
- School of Psychology, University of Surrey, Guildford, UK
- Nuffield Laboratory of Ophthalmology, University of Oxford, Oxford, UK

25
Fan JE, Turk-Browne NB, Taylor JA. Error-driven learning in statistical summary perception. J Exp Psychol Hum Percept Perform 2016; 42:266-80. [PMID: 26389617 PMCID: PMC4732887 DOI: 10.1037/xhp0000132] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We often interact with multiple objects at once, such as when balancing food and beverages on a dining tray. The success of these interactions relies upon representing not only individual objects, but also statistical summary features of the group (e.g., center-of-mass). Although previous research has established that humans can readily and accurately extract such statistical summary features, how this ability is acquired and refined through experience currently remains unaddressed. Here we ask if training and task feedback can improve summary perception. During training, participants practiced estimating the centroid (i.e., average location) of an array of objects on a touchscreen display. Before and after training, they completed a transfer test requiring perceptual discrimination of the centroid. Across 4 experiments, we manipulated the information in task feedback and how participants interacted with the objects during training. We found that vector error feedback, which conveys error both in terms of distance and direction, was the only form of feedback that improved perceptual discrimination of the centroid on the transfer test. Moreover, this form of feedback was effective only when coupled with reaching movements toward the visual objects. Taken together, these findings suggest that sensory-prediction error-signaling the mismatch between expected and actual consequences of an action-may play a previously unrecognized role in tuning perceptual representations.
26
Liu J, Dosher BA, Lu ZL. Augmented Hebbian reweighting accounts for accuracy and induced bias in perceptual learning with reverse feedback. J Vis 2015; 15:10. [PMID: 26418382 DOI: 10.1167/15.10.10] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Using an asymmetrical set of vernier stimuli (-15″, -10″, -5″, +10″, +15″) together with reverse feedback on the small subthreshold offset stimulus (-5″) induces response bias in performance (Aberg & Herzog, 2012; Herzog, Eward, Hermens, & Fahle, 2006; Herzog & Fahle, 1999). These conditions are of interest for testing models of perceptual learning because the world does not always present balanced stimulus frequencies or accurate feedback. Here we provide a comprehensive model for the complex set of asymmetric training results using the augmented Hebbian reweighting model (Liu, Dosher, & Lu, 2014; Petrov, Dosher, & Lu, 2005, 2006) and the multilocation integrated reweighting theory (Dosher, Jeter, Liu, & Lu, 2013). The augmented Hebbian learning algorithm incorporates trial-by-trial feedback, when present, as another input to the decision unit and uses the observer's internal response to update the weights otherwise; block feedback alters the weights on bias correction (Liu et al., 2014). Asymmetric training with reversed feedback incorporates biases into the weights between representation and decision. The model correctly predicts the basic induction effect, its dependence on trial-by-trial feedback, and the specificity of bias to stimulus orientation and spatial location, extending the range of augmented Hebbian reweighting accounts of perceptual learning.
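The feedback rule this abstract describes can be sketched as a single weight update. This is a simplified illustration, not the published AHRM: the function name, learning rate, and activation values are hypothetical. Feedback, when present, supplies the post-synaptic teaching activation at the decision unit; the observer's internal response is used otherwise.

```python
import numpy as np

def ahrm_style_update(w, a, o, lr=0.01, feedback=None, a_max=1.0):
    """One simplified augmented-Hebbian-style weight update (illustration only).

    w        -- readout weights from representation units to the decision unit
    a        -- representation-unit activations on the current trial
    o        -- the observer's internal decision activation (in [-1, 1])
    feedback -- +1/-1 when trial-by-trial feedback is given, None when absent
    """
    if feedback is not None:
        post = a_max * feedback   # feedback acts as an extra decision-unit input
    else:
        post = o                  # no feedback: internal response drives learning
    return w + lr * a * post      # Hebbian change: pre- times post-synaptic activity

a = np.array([1.0, 0.5, -0.5, -1.0])   # hypothetical activations on one trial
w0 = np.zeros(4)

# The observer responded incorrectly (o = -0.2) on a trial whose correct
# answer was +1. With feedback, the update is pulled toward the correct
# category; without it, the erroneous internal response is reinforced.
w_fb = ahrm_style_update(w0, a, o=-0.2, feedback=+1.0)
w_no = ahrm_style_update(w0, a, o=-0.2, feedback=None)
print(w_fb)   # update follows the feedback sign
print(w_no)   # update follows the (wrong) internal response
```

This self-supervised branch is what lets biased or reversed feedback write biases into the representation-to-decision weights: whatever signal reaches the decision unit, correct or not, is what the Hebbian rule consolidates.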
27
Mackrous I, Simoneau M. Improving spatial updating accuracy in absence of external feedback. Neuroscience 2015; 300:155-62. [PMID: 25987200 DOI: 10.1016/j.neuroscience.2015.05.024] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2014] [Revised: 04/23/2015] [Accepted: 05/11/2015] [Indexed: 10/23/2022]
Abstract
Updating the position of an earth-fixed target during whole-body rotation seems to rely on cognitive processes such as the utilization of external feedback. According to perceptual learning models, improvement in performance can also occur without external feedback. The aim of this study was to assess spatial updating improvement in the absence and in the presence of external feedback. While being rotated counterclockwise (CCW), participants had to predict when their body midline had crossed the position of a memorized target. Four experimental conditions were tested: (1) Pre-test: the target was presented 30° in the CCW direction from participant's midline. (2) Practice: the target was located 45° in the CCW direction from participant's midline. One group received external feedback about their spatial accuracy (Mackrous and Simoneau, 2014) while the other group did not. (3) Transfer T(30)CCW: the target was presented 30° in the CCW direction to evaluate whether improvement in performance, during practice, generalized to other target eccentricity. (4) Transfer T(30)CW: the target was presented 30° in the clockwise (CW) direction and participants were rotated CW. This transfer condition evaluated whether improvement in performance generalized to the untrained rotation direction. With practice, performance improved in the absence of external feedback (p=0.004). Nonetheless, larger improvement occurred when external feedback was provided (ps=0.002). During T(30)CCW, performance remained better for the feedback than the no-feedback group (p=0.005). However, no group difference was observed for the untrained direction (p=0.22). We demonstrated that spatial updating improved without external feedback but less than when external feedback was given. These observations are explained by a mixture of calibration processes and supervised vestibular learning.
Affiliation(s)
- I Mackrous
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada; Centre de recherche du CHU de Québec, Québec, QC, Canada
- M Simoneau
- Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada; Centre de recherche du CHU de Québec, Québec, QC, Canada.
28
Jones PR, Moore DR, Shub DE, Amitay S. The role of response bias in perceptual learning. J Exp Psychol Learn Mem Cogn 2015; 41:1456-70. [PMID: 25867609 PMCID: PMC4562609 DOI: 10.1037/xlm0000111] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Indexed: 11/17/2022]
Abstract
Sensory judgments improve with practice. Such perceptual learning is often thought to reflect an increase in perceptual sensitivity. However, it may also represent a decrease in response bias, with unpracticed observers acting in part on a priori hunches rather than sensory evidence. To examine whether this is the case, 55 observers practiced making a basic auditory judgment (yes/no amplitude-modulation detection or forced-choice frequency/amplitude discrimination) over multiple days. With all tasks, bias was present initially but decreased with practice. Notably, this was the case even on supposedly "bias-free" 2-alternative forced-choice tasks. In those tasks, observers did not favor the same response throughout (stationary bias), but did favor whichever response had been correct on previous trials (nonstationary bias). Means of correcting for bias are described. When applied, these showed that at least 13% of perceptual learning on a forced-choice task was due to reduction in bias. In other situations, changes in bias were shown to obscure the true extent of learning, with changes in estimated sensitivity increasing once bias was corrected for. The possible causes of bias and the implications for our understanding of perceptual learning are discussed.
Affiliation(s)
- Pete R Jones
- Medical Research Council (MRC) Institute of Hearing Research
- David R Moore
- Medical Research Council (MRC) Institute of Hearing Research
- Sygal Amitay
- Medical Research Council (MRC) Institute of Hearing Research
29
Tomaszczyk JC, Green NL, Frasca D, Colella B, Turner GR, Christensen BK, Green REA. Negative neuroplasticity in chronic traumatic brain injury and implications for neurorehabilitation. Neuropsychol Rev 2014; 24:409-27. [PMID: 25421811 PMCID: PMC4250564 DOI: 10.1007/s11065-014-9273-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Received: 06/27/2014] [Accepted: 09/29/2014] [Indexed: 02/04/2023]
Abstract
Based on growing findings of brain volume loss and deleterious white matter alterations during the chronic stages of injury, researchers posit that moderate-severe traumatic brain injury (TBI) may act to “age” the brain by reducing reserve capacity and inducing neurodegeneration. Evidence that these changes correlate with poorer cognitive and functional outcomes corroborates this progressive characterization of chronic TBI. Borrowing from a framework developed to explain cognitive aging (Mahncke et al., Progress in Brain Research, 157, 81–109, 2006a; Mahncke et al., Proceedings of the National Academy of Sciences of the United States of America, 103(33), 12523–12528, 2006b), we suggest here that environmental factors (specifically environmental impoverishment and cognitive disuse) contribute to a downward spiral of negative neuroplastic change that may modulate the brain changes described above. In this context, we review new literature supporting the original aging framework, and its extrapolation to chronic TBI. We conclude that negative neuroplasticity may be one of the mechanisms underlying cognitive and neural decline in chronic TBI, but that there are a number of points of intervention that would permit mitigation of this decline and better long-term clinical outcomes.
Affiliation(s)
- Jennifer C Tomaszczyk
- Research Department, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, Canada
30
Byers A, Serences JT. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex. J Neurophysiol 2014; 112:1217-27. [PMID: 24920023 DOI: 10.1152/jn.00353.2014] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.8] [Indexed: 11/22/2022] Open
Abstract
Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars.
Affiliation(s)
- Anna Byers
- Department of Psychology, University of California, San Diego, California
- John T Serences
- Department of Psychology, University of California, San Diego, California; Neurosciences Graduate Program, University of California, San Diego, California
31
Deluca C, Golzar A, Santandrea E, Lo Gerfo E, Eštočinová J, Moretto G, Fiaschi A, Panzeri M, Mariotti C, Tinazzi M, Chelazzi L. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task. Cortex 2014; 58:52-71. [PMID: 24959702 DOI: 10.1016/j.cortex.2014.04.017] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Received: 08/27/2013] [Revised: 04/09/2014] [Accepted: 04/26/2014] [Indexed: 01/14/2023]
Abstract
Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation, psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, bearing the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus non-motor functions, with only the posterior lobe being responsible for learning in the perceptual domain.
Affiliation(s)
- Cristina Deluca
- Department of Neurological and Movement Sciences, University of Verona, Verona, Italy
- Ashkan Golzar
- Department of Neurological and Movement Sciences, University of Verona, Verona, Italy; Department of Physiology, McGill University, Montreal, Canada
- Elisa Santandrea
- Department of Neurological and Movement Sciences, University of Verona, Verona, Italy
- Emanuele Lo Gerfo
- Department of Neurological and Movement Sciences, University of Verona, Verona, Italy
- Jana Eštočinová
- Department of Neurological and Movement Sciences, University of Verona, Verona, Italy
- Antonio Fiaschi
- Department of Neurological and Movement Sciences, University of Verona, Verona, Italy; National Institute of Neuroscience, Verona, Italy
- Marta Panzeri
- Department of Genetics of Neurodegenerative and Metabolic Diseases, IRCCS Foundation Carlo Besta, Milan, Italy
- Caterina Mariotti
- Department of Genetics of Neurodegenerative and Metabolic Diseases, IRCCS Foundation Carlo Besta, Milan, Italy
- Michele Tinazzi
- Department of Neurological and Movement Sciences, University of Verona, Verona, Italy; National Institute of Neuroscience, Verona, Italy
- Leonardo Chelazzi
- Department of Neurological and Movement Sciences, University of Verona, Verona, Italy; National Institute of Neuroscience, Verona, Italy.
32
Modeling trial by trial and block feedback in perceptual learning. Vision Res 2014; 99:46-56. [PMID: 24423783 DOI: 10.1016/j.visres.2014.01.001] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Received: 09/11/2013] [Revised: 01/02/2014] [Accepted: 01/03/2014] [Indexed: 11/20/2022]
Abstract
Feedback has been shown to play a complex role in visual perceptual learning. It is necessary for performance improvement in some conditions but not in others. Different forms of feedback, such as trial-by-trial feedback or block feedback, may both facilitate learning, but through different mechanisms. False feedback can abolish learning. We account for all these results with the Augmented Hebbian Reweighting Model (AHRM). Specifically, three major factors in the model advance performance improvement: the external trial-by-trial feedback when available, the self-generated output serving as internal feedback when no external feedback is available, and the adaptive criterion control based on the block feedback. Through simulating a comprehensive feedback study (Herzog & Fahle, 1997), we show that the model predictions account for the pattern of learning in seven major feedback conditions. The AHRM can fully explain the complex empirical results on the role of feedback in visual perceptual learning.
33
Cohen Y, Daikhin L, Ahissar M. Perceptual learning is specific to the trained structure of information. J Cogn Neurosci 2013; 25:2047-60. [PMID: 23915051 DOI: 10.1162/jocn_a_00453] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Indexed: 11/04/2022]
Abstract
What do we learn when we practice a simple perceptual task? Many studies have suggested that we learn to refine or better select the sensory representations of the task-relevant dimension. Here we show that learning is specific to the trained structural regularities. Specifically, when this structure is modified after training with a fixed temporal structure, performance regresses to pretraining levels, even when the trained stimuli and task are retained. This specificity raises key questions as to the importance of low-level sensory modifications in the learning process. We trained two groups of participants on a two-tone frequency discrimination task for several days. In one group, a fixed reference tone was consistently presented in the first interval (the second tone was higher or lower), and in the other group the same reference tone was consistently presented in the second interval. When, following training, these temporal protocols were switched between groups, performance of both groups regressed to pretraining levels, and further training was needed to attain postlearning performance. ERP measures, taken before and after training, indicated that participants implicitly learned the temporal regularity of the protocol and formed an attentional template that matched the trained structure of information. These results are consistent with Reverse Hierarchy Theory, which posits that even the learning of simple perceptual tasks progresses in a top-down manner, and hence can benefit from temporal regularities at the trial level, albeit at the potential cost that learning may be specific to these regularities.
34
Abstract
Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher level location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new locations/positions differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new location or position are compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated via more broadly tuned location-invariant representations, whereas changing orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system.
35
Kumano H, Uka T. Neuronal mechanisms of visual perceptual learning. Behav Brain Res 2013; 249:75-80. [PMID: 23639245 DOI: 10.1016/j.bbr.2013.04.034] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Received: 04/03/2013] [Accepted: 04/19/2013] [Indexed: 10/26/2022]
Abstract
Numerous psychophysical studies have described perceptual learning as long-lasting improvements in perceptual discrimination and detection capabilities following practice. Where and how long-term plastic changes occur in the brain is central to understanding the neural basis of perceptual learning. Here, neurophysiological research using non-human primates is reviewed to address the neural mechanisms underlying visual perceptual learning. Previous studies have shown that training either has no effect on or only weakly alters the sensitivity of neurons in early visual areas, but more recent evidence indicates that training can cause long-term changes in how sensory signals are read out in the later stages of decision making. These results are discussed in the context of learning specificity, which has been crucial in interpreting the mechanisms underlying perceptual learning. The possible mechanisms that support learning-related plasticity are also discussed.
Affiliation(s)
- Hironori Kumano
- Department of Neurophysiology, Graduate School of Medicine, Juntendo University, 2-1-1 Hongo, Bunkyo, Tokyo 113-8421, Japan
36
McGovern DP, Webb BS, Peirce JW. Transfer of perceptual learning between different visual tasks. J Vis 2012; 12:4. [PMID: 23048211 DOI: 10.1167/12.11.4] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.8] [Indexed: 11/24/2022] Open
Abstract
Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this 'perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a 'global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks.
Affiliation(s)
- David P McGovern
- Nottingham Visual Neuroscience, School of Psychology, The University of Nottingham, Nottingham, UK.
37
Exploring the relationship between perceptual learning and top-down attentional control. Vision Res 2012; 74:30-9. [PMID: 22850344 DOI: 10.1016/j.visres.2012.07.008] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.1] [Received: 01/24/2012] [Revised: 06/07/2012] [Accepted: 07/14/2012] [Indexed: 11/22/2022]
Abstract
Here, we review the role of top-down attention in both the acquisition and the expression of perceptual learning, as well as the role of learning in more efficiently guiding attentional modulations. Although attention often mediates learning at the outset of training, many of the characteristic behavioral and neural changes associated with learning can be observed even when stimuli are task irrelevant and ignored. However, depending on task demands, attention can override the effects of perceptual learning, suggesting that even if top-down factors are not strictly necessary to observe learning, they play a critical role in determining how learning-related changes in behavior and neural activity are ultimately expressed. In turn, training may also act to optimize the effectiveness of top-down attentional control by improving the efficiency of sensory gain modulations, regulating intrinsic noise, and altering the read-out of sensory information.
38
Different properties of visual relearning after damage to early versus higher-level visual cortical areas. J Neurosci 2012; 32:5414-25. [PMID: 22514305 DOI: 10.1523/jneurosci.0316-12.2012] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Indexed: 11/21/2022] Open
Abstract
The manipulation of visual perceptual learning is emerging as an important rehabilitation tool following visual system damage. Specificity of visual learning for training stimulus and task attributes has been used in prior work to infer a differential contribution of higher-level versus lower-level visual cortical areas to this process. The present study used a controlled experimental paradigm in felines to examine whether relearning of motion discrimination and the specificity of such relearning are differently influenced by damage at lower versus higher levels of the visual cortical hierarchy. Cats with damage to either early visual areas 17, 18, and 19, or to higher-level, motion-processing lateral suprasylvian (LS) cortex were trained to perform visual tasks with controlled fixation. Animals with either type of lesion could relearn to discriminate the direction of motion of both drifting gratings and random dot stimuli in their impaired visual field. However, two factors emerged as critical for allowing transfer of learning to untrained motion stimuli: (1) an intact LS cortex and (2) more complex visual stimuli. Thus, while the hierarchical level of visual cortex damage did not seem to limit the ability to relearn motion discriminations, generalizability of relearning with a damaged visual system appeared to be influenced by both the areas damaged and the nature of the stimulus used during training.
39
Liu J, Lu ZL, Dosher BA. Mixed training at high and low accuracy levels leads to perceptual learning without feedback. Vision Res 2011; 61:15-24. [PMID: 22227159 DOI: 10.1016/j.visres.2011.12.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.0] [Received: 06/16/2011] [Revised: 12/09/2011] [Accepted: 12/13/2011] [Indexed: 11/26/2022]
Abstract
In this study, we investigated whether mixing easy and difficult trials can lead to learning in the difficult conditions. We hypothesized that while feedback is necessary for significant learning in training regimes consisting solely of low training accuracy trials, training mixtures with sufficient proportions of high accuracy training trials would lead to significant learning without feedback. Thirty-six subjects were divided into one experimental group, in which high training accuracy trials were mixed with low training accuracy trials without feedback, and five control groups, in which high and low accuracy training were mixed in the presence of feedback, or in which high-high or low-low training accuracy mixtures were run with and without feedback. Contrast threshold improved significantly in the low accuracy condition in the presence of high training accuracy trials (the high-low mixture group) in the absence of feedback, although no significant learning was found in the low accuracy condition in the group with the low-low mixture without feedback. Moreover, the magnitude of improvement in low accuracy trials without feedback in the high-low training mixture is comparable to that in the high accuracy training without feedback condition and those obtained in the presence of trial-by-trial external feedback. The results are both qualitatively and quantitatively consistent with the predictions of the Augmented Hebbian Reweighting Model. We conclude that mixed training at high and low accuracy levels can lead to perceptual learning at low training accuracy levels without feedback.
Affiliation(s)
- Jiajuan Liu
- Laboratory of Brain Processes (LOBES), Neuroscience Graduate Program, Department of Biological Science, University of Southern California, Los Angeles, CA 90089-1061, United States
40
Huang CB, Lu ZL, Dosher BA. Co-learning analysis of two perceptual learning tasks with identical input stimuli supports the reweighting hypothesis. Vision Res 2011; 61:25-32. [PMID: 22100814 DOI: 10.1016/j.visres.2011.11.003] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.6] [Received: 06/01/2011] [Revised: 11/02/2011] [Accepted: 11/02/2011] [Indexed: 11/19/2022]
Abstract
Perceptual learning, even when it exhibits significant specificity to basic stimulus features such as retinal location or spatial frequency, may cause discrimination performance to improve either through enhancement of early sensory representations or through selective re-weighting of connections from the sensory representations to specific responses, or both. For most experiments in the literature, the two forms of plasticity make similar predictions (Dosher & Lu, 2009; Petrov, Dosher, & Lu, 2005). The strongest test of the two hypotheses must use training and transfer tasks that rely on the same sensory representation with different task-dependent decision structures. If training changes sensory representations, transfer (or interference) must occur since the (changed) sensory representations are common. If instead training re-weights a separate set of task connections to decision, then performance in the two tasks may still be independent. Here, we performed a co-learning analysis of two perceptual learning tasks based on identical input stimuli, following a very interesting study of Fahle and Morgan (1996) who used nearly identical input stimuli (a three dot pattern) in training bisection and vernier tasks. Two important modifications were made: (1) identical input stimuli were used in the two tasks, and (2) subjects practiced both tasks in multiple alternating blocks (800 trials/block). Two groups of subjects with counter-balanced order of training participated in the experiments. We found significant and independent learning of the two tasks. The pattern of results is consistent with the reweighting hypothesis of perceptual learning.
Affiliation(s)
- Chang-Bing Huang
- Laboratory of Brain Processes (LOBES), Departments of Psychology, University of Southern California, Los Angeles, CA 90089, USA
41
Lu ZL, Hua T, Huang CB, Zhou Y, Dosher BA. Visual perceptual learning. Neurobiol Learn Mem 2011; 95:145-51. [PMID: 20870024 PMCID: PMC3021105 DOI: 10.1016/j.nlm.2010.09.010] [Citation(s) in RCA: 69] [Impact Index Per Article: 4.9] [Received: 08/12/2010] [Revised: 09/15/2010] [Accepted: 09/18/2010] [Indexed: 11/29/2022]
Abstract
Perceptual learning refers to the phenomenon that practice or training in perceptual tasks often substantially improves perceptual performance. Often exhibiting stimulus or task specificities, perceptual learning differs from learning in the cognitive or motor domains. Research on perceptual learning reveals important plasticity in adult perceptual systems, as well as the limitations in the information processing of the human observer. In this article, we review the behavioral results, mechanisms, physiological basis, computational models, and applications of visual perceptual learning.
Affiliation(s)
- Zhong-Lin Lu
- Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA.
42
Jeter PE, Dosher BA, Liu SH, Lu ZL. Specificity of perceptual learning increases with increased training. Vision Res 2010; 50:1928-40. [PMID: 20624413 PMCID: PMC3346951 DOI: 10.1016/j.visres.2010.06.016] [Citation(s) in RCA: 92] [Impact Index Per Article: 6.1] [Received: 08/27/2009] [Revised: 06/25/2010] [Accepted: 06/29/2010] [Indexed: 11/16/2022]
Abstract
Perceptual learning often shows substantial and long-lasting changes in the ability to classify relevant perceptual stimuli due to practice. Specificity to trained stimuli and tasks is a key characteristic of visual perceptual learning, but little is known about whether specificity depends upon the extent of initial training. Using an orientation discrimination task, we demonstrate that specificity follows after extensive training, while the earliest stages of perceptual learning exhibit substantial transfer to a new location and an opposite orientation. Brief training shows the best performance at the point of transfer. These results for orientation-location transfer have both theoretical and practical implications for understanding perceptual expertise.
Affiliation(s)
- Pamela E Jeter
- Memory, Attention and Perception Laboratory (MAP), Department of Cognitive Sciences, University of California, Irvine, CA 92697, USA.
43
Lewis JW, Talkington WJ, Puce A, Engel LR, Frum C. Cortical networks representing object categories and high-level attributes of familiar real-world action sounds. J Cogn Neurosci 2010; 23:2079-101. [PMID: 20812786 DOI: 10.1162/jocn.2010.21570] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.1] [Indexed: 12/20/2022]
Abstract
In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences between each participant's degree of familiarity with the studied sounds. Moreover, it was unclear what, if any, auditory perceptual attributes might further distinguish the four conceptual sound-source categories, potentially revealing what might drive the cortical network organization for representing acoustic knowledge. Here, we used functional magnetic resonance imaging to test participants before and after extensive listening experience with action sounds, and tested for cortices that might be sensitive to each of three different high-level perceptual attributes relating to how a listener associates or interacts with the sound source. These included the sound's perceived concreteness, effectuality (ability to be affected by the listener), and spatial scale. Despite some variation of networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.
Affiliation(s)
- James W Lewis
- Department of Physiology and Pharmacology, PO Box 9229, West Virginia University, Morgantown, WV 26506, USA.
44
Xu P, Lu ZL, Wang X, Dosher B, Zhou J, Zhang D, Zhou Y. Category and perceptual learning in subjects with treated Wilson's disease. PLoS One 2010; 5:e9635. [PMID: 20224790 PMCID: PMC2835763 DOI: 10.1371/journal.pone.0009635] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2009] [Accepted: 02/16/2010] [Indexed: 11/19/2022] Open
Abstract
To explore the relationship between category and perceptual learning, we examined both category and perceptual learning in patients with treated Wilson's disease (WD), whose basal ganglia, known to be important in category learning, were damaged by the disease. We measured their learning rate and accuracy in rule-based and information-integration category learning, and magnitudes of perceptual learning in a wide range of external noise conditions, and compared the results with those of normal controls. The WD subjects exhibited deficits in both forms of category learning and in perceptual learning in high external noise. However, their perceptual learning in low external noise was relatively spared. There was no significant correlation between the two forms of category learning, nor between perceptual learning in low external noise and either form of category learning. Perceptual learning in high external noise was, however, significantly correlated with information-integration but not with rule-based category learning. The results suggest that there may be a strong link between information-integration category learning and perceptual learning in high external noise. Damage to brain structures that are important for information-integration category learning may lead to poor perceptual learning in high external noise, yet spare perceptual learning in low external noise. Perceptual learning in high and low external noise conditions may involve separate neural substrates.
Affiliation(s)
- Pengjing Xu
- Department of Neurobiology and Biophysics, School of Life Sciences, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- Zhong-Lin Lu
- Laboratory of Brain Processes (LOBES), Departments of Psychology and Biomedical Engineering, and Neuroscience Graduate Program, University of Southern California, Los Angeles, California, United States of America
- Xiaoping Wang
- Department of Neurobiology and Biophysics, School of Life Sciences, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- Barbara Dosher
- Department of Cognitive Science, University of California Irvine, Irvine, California, United States of America
- Jiangning Zhou
- Department of Neurobiology and Biophysics, School of Life Sciences, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- Daren Zhang
- Department of Neurobiology and Biophysics, School of Life Sciences, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- Yifeng Zhou
- Department of Neurobiology and Biophysics, School of Life Sciences, University of Science and Technology of China, Hefei, Anhui, People's Republic of China
- Visual Information Processing Laboratory, Institute of Biophysics, Chinese Academy of Sciences, Beijing, People's Republic of China
45
Lu ZL, Liu J, Dosher BA. Modeling mechanisms of perceptual learning with augmented Hebbian re-weighting. Vision Res 2010; 50:375-90. [PMID: 19732786 PMCID: PMC2824067 DOI: 10.1016/j.visres.2009.08.027] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2009] [Revised: 08/27/2009] [Accepted: 08/28/2009] [Indexed: 10/20/2022]
Abstract
Using the external noise plus training paradigm, we have consistently found that two independent mechanisms, stimulus enhancement and external noise exclusion, support perceptual learning in a range of tasks. Here, we show that re-weighting of stable early sensory representations through Hebbian learning (Petrov et al., 2005, 2006) can generate performance patterns that parallel a large range of empirical data: (1) perceptual learning reduced contrast thresholds at all levels of external noise in peripheral orientation identification (Dosher & Lu, 1998, 1999), (2) training with low noise exemplars transferred to performance in high noise, while training with exemplars embedded in high external noise transferred little to performance in low noise (Dosher & Lu, 2005), and (3) pre-training in high external noise only reduced subsequent learning in high external noise, whereas pre-training in zero external noise left very little additional learning in all the external noise conditions (Lu et al., 2006). In the augmented Hebbian re-weighting model (AHRM), perceptual learning strengthens or maintains the connections between the most closely tuned visual channels and a learned categorization structure, while it prunes or reduces inputs from task-irrelevant channels. Reducing the weights on irrelevant channels reduces the contributions of external noise and additive internal noise. Manifestation of stimulus enhancement or external noise exclusion depends on the initial state of internal noise and connection weights at the beginning of a learning task. Both mechanisms reflect re-weighting of stable early sensory representations.
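The re-weighting mechanism described in this abstract can be illustrated with a minimal sketch. This is not the published AHRM (which uses multiple orientation- and frequency-tuned representations, a bias unit, and specific parameter values); here the channel count, learning rate, and noise levels are all illustrative assumptions. The sketch shows the core idea: a feedback-gated Hebbian update grows the read-out weight on the task-relevant channel while weights on noise-only channels stay near zero, effectively excluding external noise from the decision.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 8            # tuned sensory channels (illustrative)
relevant = 0              # index of the task-relevant channel
eta = 0.05                # learning rate (hypothetical value)
w = np.zeros(n_channels)  # initial read-out weights

for trial in range(2000):
    # Channel activations: the stimulus drives only the relevant channel;
    # independent noise (external + internal) drives every channel.
    signal = rng.choice([-1.0, 1.0])          # stimulus category
    a = rng.normal(0.0, 0.5, n_channels)      # per-channel noise
    a[relevant] += signal

    o = np.tanh(w @ a)    # decision-unit output
    f = signal            # trial feedback serves as the teaching signal
    # Hebbian update gated by the error between feedback and output:
    w += eta * a * (f - o)

# After learning, the relevant channel dominates the read-out, so noise
# carried by the irrelevant channels contributes little to decisions.
print(w.round(2))
```

Because only the relevant channel's activation correlates with the feedback, the Hebbian product `a * (f - o)` has nonzero mean for that channel alone; the irrelevant weights random-walk around zero, which is the sketch-level analogue of external noise exclusion.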
Affiliation(s)
- Zhong-Lin Lu
- Laboratory of Brain Processes (LOBES), Dana and David Dornsife Cognitive Neuroscience Imaging Center, Department of Psychology, University of Southern California, Los Angeles, CA 90089-1061, USA.
46
Dosher BA, Han S, Lu ZL. Perceptual learning and attention: Reduction of object attention limitations with practice. Vision Res 2010; 50:402-15. [PMID: 19796653 PMCID: PMC3345174 DOI: 10.1016/j.visres.2009.09.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2009] [Revised: 09/01/2009] [Accepted: 09/15/2009] [Indexed: 11/27/2022]
Abstract
Perceptual learning has widely been claimed to be attention driven; attention assists in choosing the relevant sensory information and attention may be necessary in many cases for learning. In this paper, we focus on the interaction of perceptual learning and attention - that perceptual learning can reduce or eliminate the limitations of attention, or, correspondingly, that perceptual learning depends on the attention condition. Object attention is a robust limit on performance. Two attributes of a single attended object may be reported without loss, while the same two attributes of different objects can exhibit a substantial dual-report deficit due to the sharing of attention between objects. The current experiments document that this fundamental dual-object report deficit can be reduced, or eliminated, through perceptual learning that is partially specific to retinal location. This suggests that alternative routes established by practice may reduce the competition between objects for processing resources.
Affiliation(s)
- Barbara Anne Dosher
- Memory, Attention, Perception Laboratory, Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA 92697, USA.
47
Silverstein SM, Keane BP. Perceptual organization in schizophrenia: Plasticity and state-related change. ACTA ACUST UNITED AC 2009. [DOI: 10.1556/lp.1.2009.2.111] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]