1. Stability of individual differences in implicitly guided attention. Q J Exp Psychol (Hove) 2024; 77:1332-1351. PMID: 37572022. DOI: 10.1177/17470218231196463.
Abstract
Daily activities often occur in familiar environments, affording us an opportunity to learn. Laboratory studies have shown that people readily acquire an implicit spatial preference for locations that frequently contained a search target in the past. These studies, however, have focused on group characteristics, downplaying the significance of individual differences. In a pre-registered study, we examined the stability of individual differences in two variants of an implicit location probability learning (LPL) task. We tested the possibility that individual differences were stable in variants that shared the same search process, but not in variants involving different search processes. In Experiment 1, participants performed alternating blocks of T-among-Ls and 5-among-2s search tasks. Unbeknownst to them, the search target appeared disproportionately often in one region of space; the high-probability regions differed between the two tasks. LPL transferred between the two tasks. In addition, individuals who showed greater LPL in the T-task also did so in the 5-task and vice versa. In Experiment 2, participants searched for either a camouflaged-T against background noise or a well-segmented T among well-segmented Ls. These two tasks produced task-specific learning that did not transfer between tasks. Moreover, individual differences in learning did not correlate between tasks. Thus, LPL is associated with stable individual differences across variants, but only when the variants share common search processes.
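The biased target placement that drives location probability learning can be sketched as a simple trial generator. The four-quadrant split and the 50% rate for the rich region below are illustrative values, not the study's exact design:

```python
import random

def make_target_locations(n_trials, rich_quadrant=0, rich_prob=0.5,
                          n_quadrants=4, seed=0):
    """Sample the target's quadrant on each trial, favoring one 'rich'
    quadrant (illustrative probabilities, not the published design)."""
    rng = random.Random(seed)
    sparse = [q for q in range(n_quadrants) if q != rich_quadrant]
    return [rich_quadrant if rng.random() < rich_prob else rng.choice(sparse)
            for _ in range(n_trials)]

locs = make_target_locations(1200)
rich_rate = locs.count(0) / len(locs)   # near 0.5; each sparse quadrant near 0.167
```

Learning is then measured as a search-time advantage for targets in the rich quadrant, which such a generator makes far more frequent than any single sparse quadrant.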
2. Semantic and Syntactic Predictions in Reading Aloud: Are Good Predictors Good Statistical Learners? J Cogn 2024; 7:40. PMID: 38737818. PMCID: PMC11086592. DOI: 10.5334/joc.363.
Abstract
Recent research suggests that becoming a fluent reader may partially rely on a domain-general statistical learning (SL) mechanism that allows a person to automatically extract predictable patterns from the sensory input. The goal of the present study was to investigate a potential link between SL and the ability to make linguistic predictions. Previous studies have investigated quite general levels of reading ability rather than the dynamic process of making linguistic predictions. We thus used a recently developed predictive reading task, in which participants read aloud words that were preceded by either semantically or syntactically predictive contexts. To measure the componential nature of SL, we used a visual and an auditory SL task (VSL, ASL) and the classic serial reaction time task (SRT). General reading ability was assessed with a reading speed/comprehension test. The study was conducted online with a sample of 120 participants, allowing exploration of interindividual differences. The results showed only weak and sometimes even negative correlations between the various SL measures. ASL correlated positively with, and predicted, general reading ability, but neither the semantic nor the syntactic prediction effect. Similarly, one of the SRT measures was significantly associated with reading level and reading speed but not with linguistic prediction effects. In sum, there is little evidence that domain-general SL is a good predictor of people's ability to make domain-specific linguistic predictions. In contrast, SL shows a weak but significant association with general reading ability.
3. Pupil diameter as an indicator of sound pair familiarity after statistically structured auditory sequence. Sci Rep 2024; 14:8739. PMID: 38627572. PMCID: PMC11021535. DOI: 10.1038/s41598-024-59302-1.
Abstract
Inspired by recent findings in the visual domain, we investigated whether the stimulus-evoked pupil dilation reflects temporal statistical regularities in sequences of auditory stimuli. We conducted two preregistered pupillometry experiments (experiment 1, n = 30, 21 females; experiment 2, n = 31, 22 females). In both experiments, human participants listened to sequences of spoken vowels in two conditions. In the first condition, the stimuli were presented in a random order and, in the second condition, the same stimuli were presented in a sequence structured in pairs. The second experiment replicated the first experiment with a modified timing and number of stimuli presented and without participants being informed about any sequence structure. The sound-evoked pupil dilation during a subsequent familiarity task indicated that participants learned the auditory vowel pairs of the structured condition. However, pupil diameter during the structured sequence did not differ according to the statistical regularity of the pair structure. This contrasts with similar visual studies, emphasizing the susceptibility of pupil effects during statistically structured sequences to experimental design settings in the auditory domain. In sum, our findings suggest that pupil diameter may serve as an indicator of sound pair familiarity but does not invariably respond to task-irrelevant transition probabilities of auditory sequences.
4. Response times are affected by mispredictions in a stochastic game. Sci Rep 2024; 14:8446. PMID: 38600186. PMCID: PMC11006944. DOI: 10.1038/s41598-024-58203-7.
Abstract
Acting as a goalkeeper in a video game, the participant is asked to predict the successive choices of the penalty taker. The sequence of choices of the penalty taker is generated by a stochastic chain with memory of variable length. It has been conjectured that the probability distribution of the response times is a function of the specific sequence of past choices governing the algorithm used by the penalty taker to make his choice at each step. We found empirical evidence that, besides this dependence, the distribution of the response times also depends on the success or failure of the participant's previous prediction. Moreover, we found statistical evidence that this dependence propagates up to two steps after a prediction failure.
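A stochastic chain with memory of variable length can be sketched with a small context tree: the longest matching suffix of past choices selects the distribution of the next choice. The contexts and probabilities below are hypothetical, not the study's actual tree; only the lookup mechanism follows the description above:

```python
import random

# Hypothetical context tree (0 = left, 1 = centre, 2 = right).
# Keys are suffixes of the past; values are next-choice distributions.
CONTEXT_TREE = {
    (0,): {1: 0.5, 2: 0.5},
    (2,): {0: 1.0},
    (0, 1): {2: 1.0},
    (1, 1): {0: 0.3, 1: 0.3, 2: 0.4},
    (2, 1): {1: 1.0},
}

def next_choice(history, rng):
    for k in (2, 1):                          # try the longest suffix first
        ctx = tuple(history[-k:])
        if ctx in CONTEXT_TREE:
            outcomes, probs = zip(*sorted(CONTEXT_TREE[ctx].items()))
            return rng.choices(outcomes, probs)[0]
    raise KeyError("no matching context")

def simulate(n_steps, seed=42):
    rng = random.Random(seed)
    seq = [0, 1]                              # arbitrary initial context
    for _ in range(n_steps):
        seq.append(next_choice(seq, rng))
    return seq

seq = simulate(200)
```

Because memory length varies by context, some pasts (e.g., a 2 as the last choice) fully determine the next choice, while others leave it stochastic; the study relates response-time distributions to exactly these context-dependent predictabilities.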
5. Individual differences in auditory perception predict learning of non-adjacent tone sequences in 3-year-olds. Front Hum Neurosci 2024; 18:1358380. PMID: 38638804. PMCID: PMC11024384. DOI: 10.3389/fnhum.2024.1358380.
Abstract
Auditory processing of speech and non-speech stimuli oftentimes involves the analysis and acquisition of non-adjacent sound patterns. Previous studies using speech material have demonstrated (i) children's early emerging ability to extract non-adjacent dependencies (NADs) and (ii) a relation between basic auditory perception and this ability. Yet, it is currently unclear whether children show similar sensitivities and similar perceptual influences for NADs in the non-linguistic domain. We conducted an event-related potential study with 3-year-old children using a sine-tone-based oddball task, which simultaneously tested for NAD learning and auditory perception by means of varying sound intensity. Standard stimuli were AXB sine-tone sequences, in which specific A elements predicted specific B elements after variable X elements. NAD deviants violated the dependency between A and B and intensity deviants were reduced in amplitude. Both elicited similar frontally distributed positivities, suggesting successful deviant detection. Crucially, there was a predictive relationship between the amplitude of the sound intensity discrimination effect and the amplitude of the NAD learning effect. These results are taken as evidence that NAD learning in the non-linguistic domain is functional in 3-year-olds and that basic auditory processes are related to the learning of higher-order auditory regularities also outside the linguistic domain.
6. Limited but specific engagement of the mature language network during linguistic statistical learning. Cereb Cortex 2024; 34:bhae123. PMID: 38566510. PMCID: PMC10987970. DOI: 10.1093/cercor/bhae123.
Abstract
Statistical learning (SL) is the ability to detect and learn regularities from input and is foundational to language acquisition. Despite the dominant role of SL as a theoretical construct for language development, there is a lack of direct evidence supporting the shared neural substrates underlying language processing and SL. It is also not clear whether the similarities, if any, are related to linguistic processing, or statistical regularities in general. The current study tests whether the brain regions involved in natural language processing are similarly recruited during auditory, linguistic SL. Twenty-two adults performed an auditory linguistic SL task, an auditory nonlinguistic SL task, and a passive story listening task as their neural activation was monitored. Within the language network, the left posterior temporal gyrus showed sensitivity to embedded speech regularities during auditory, linguistic SL, but not auditory, nonlinguistic SL. Using a multivoxel pattern similarity analysis, we uncovered similarities between the neural representation of auditory, linguistic SL, and language processing within the left posterior temporal gyrus. No other brain regions showed similarities between linguistic SL and language comprehension, suggesting that a shared neurocomputational process for auditory SL and natural language processing within the left posterior temporal gyrus is specific to linguistic stimuli.
7. Contribution of statistical learning in learning to read across languages. PLoS One 2024; 19:e0298670. PMID: 38527080. PMCID: PMC10962809. DOI: 10.1371/journal.pone.0298670.
Abstract
Statistical Learning (SL) refers to humans' ability to detect regularities in the environment (Kirkham, 2002; Saffran, 1996). There has been a growing interest in understanding how sensitivity to statistical regularities influences learning to read. The current study systematically examined whether and how non-linguistic SL, Chinese SL, and English SL contribute to Chinese and English word reading among native Chinese-speaking 4th, 6th and 8th graders who learn English as a second language (L2). Children showed above-chance learning across all SL tasks and across all grades. In addition, developmental improvements were shown across at least two of the three grade ranges on all SL tasks. In terms of the contribution of SL to reading, non-linguistic auditory SL (ASL), English visual SL (VSL), and Chinese ASL accounted for a significant amount of variance in English L2 word reading. Non-linguistic ASL, Chinese VSL, English VSL, and English ASL accounted for a significant amount of variance in Chinese word reading. Our results provide clear and novel evidence for cross-linguistic contributions from Chinese SL to English reading, and from English SL to Chinese reading, highlighting a bi-directional relationship between SL in one language and reading in another language.
8. Reliability of individual differences in distractor suppression driven by statistical learning. Behav Res Methods 2024; 56:2437-2451. PMID: 37491558. PMCID: PMC10991004. DOI: 10.3758/s13428-023-02157-7.
Abstract
A series of recent studies has demonstrated that attentional selection is modulated by statistical regularities, even when they concern task-irrelevant stimuli. Irrelevant distractors presented more frequently at one location interfere less with search than distractors presented elsewhere. To account for this finding, it has been proposed that through statistical learning, the frequent distractor location becomes suppressed relative to the other locations. Learned distractor suppression has mainly been studied at the group level, where individual differences are treated as unexplained error variance. Yet these individual differences may provide important mechanistic insights and could be predictive of cognitive and real-life outcomes. In the current study, we ask whether in an additional singleton task, the standard measures of attentional capture and learned suppression are reliable and stable at the level of the individual. In an online study, we assessed both the within- and between-session reliability of individual-level measures of attentional capture and learned suppression. We show that the measures of attentional capture, but not of distractor suppression, are moderately stable within the same session (i.e., split-half reliability). Test-retest reliability over a 2-month period was found to be moderate for attentional capture but weak or absent for suppression. RT-based measures proved to be superior to accuracy measures. While producing very robust findings at the group level, the predictive validity of these RT-based measures is still limited when it comes to individual-level performance. We discuss the implications for future research drawing on inter-individual variation in the attentional biases that result from statistical learning.
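The split-half reliability assessed here can be sketched as follows. The odd/even split, the Spearman-Brown correction, and the synthetic data are illustrative assumptions, not the study's actual pipeline:

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def split_half_reliability(scores):
    """Odd/even split-half correlation with Spearman-Brown correction.
    scores: one list of trial-level measures (e.g. RTs) per participant."""
    odd = [statistics.mean(s[0::2]) for s in scores]
    even = [statistics.mean(s[1::2]) for s in scores]
    r = pearson(odd, even)
    return 2 * r / (1 + r)   # Spearman-Brown: estimate for the full-length test

# Synthetic check: stable individual differences yield high reliability.
rng = random.Random(3)
scores = [[500 + 15 * i + rng.gauss(0, 10) for _ in range(40)] for i in range(12)]
rel = split_half_reliability(scores)
```

When true between-participant differences are small relative to trial-level noise, as the abstract reports for learned suppression, the corrected correlation drops toward zero even though group-level effects remain robust.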
9. Hebbian learning can explain rhythmic neural entrainment to statistical regularities. Dev Sci 2024:e13487. PMID: 38372153. DOI: 10.1111/desc.13487.
Abstract
In many domains, learners extract recurring units from continuous sequences. For example, in unknown languages, fluent speech is perceived as a continuous signal. Learners need to extract the underlying words from this continuous signal and then memorize them. One prominent candidate mechanism is statistical learning, whereby learners track how predictive syllables (or other items) are of one another. Syllables within the same word predict each other better than syllables straddling word boundaries. But does statistical learning lead to memories of the underlying words, or just to pairwise associations among syllables? Electrophysiological results provide the strongest evidence for the memory view. Electrophysiological responses can be time-locked to statistical word boundaries (e.g., N400s) and show rhythmic activity with a periodicity of word durations. Here, I reproduce such results with a simple Hebbian network. When exposed to statistically structured syllable sequences (and when the underlying words are not excessively long), the network activation is rhythmic with the periodicity of a word duration, with activation maxima on word-final syllables. This is because word-final syllables receive more excitation from the earlier syllables with which they are associated than do less predictable syllables occurring earlier in words. The network is also sensitive to information whose electrophysiological correlates were used to support the encoding of ordinal positions within words. Hebbian learning can thus explain rhythmic neural activity in statistical learning tasks without any memory representations of words. Learners might thus need to rely on cues beyond statistical associations to learn the words of their native language.
RESEARCH HIGHLIGHTS:
- Statistical learning may be utilized to identify recurring units in continuous sequences (e.g., words in fluent speech) but may not generate explicit memory for words.
- Exposure to statistically structured sequences leads to rhythmic activity with a period of the duration of the underlying units (e.g., words).
- I show that a memory-less Hebbian network model can reproduce this rhythmic neural activity as well as putative encodings of ordinal positions observed in earlier research.
- Direct tests are needed to establish whether statistical learning leads to declarative memories for words.
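The core mechanism can be sketched with a minimal one-step Hebbian associator over a triplet-word stream. The syllables, words, and learning rate below are invented, and this is far simpler than the network in the article; it only illustrates why word-final syllables accumulate more excitation than word-initial ones:

```python
import random

SYLLS = list("abcdefghi")                                    # 9 invented "syllables"
WORDS = [("a", "b", "c"), ("d", "e", "f"), ("g", "h", "i")]  # invented triplet words

def make_stream(n_words, rng):
    """Concatenate randomly ordered words (no immediate repeats)."""
    stream, prev = [], None
    for _ in range(n_words):
        word = rng.choice([w for w in WORDS if w != prev])
        stream.extend(word)
        prev = word
    return stream

def hebbian_activation(stream, lr=0.01):
    """Each transition strengthens the weight from the previous syllable to
    the current one; activation = external input + learned excitation."""
    idx = {s: i for i, s in enumerate(SYLLS)}
    W = [[0.0] * len(SYLLS) for _ in SYLLS]
    acts = []
    for t, s in enumerate(stream):
        j = idx[s]
        a = 1.0                                   # external (bottom-up) input
        if t > 0:
            a += W[idx[stream[t - 1]]][j]         # excitation via learned weight
        acts.append(a)
        if t > 0:
            W[idx[stream[t - 1]]][j] += lr        # Hebbian strengthening
    return acts

stream = make_stream(300, random.Random(1))
acts = hebbian_activation(stream)
# mean activation by within-word position (0 = word-initial)
by_pos = [sum(acts[p::3]) / len(acts[p::3]) for p in range(3)]
```

Within-word transitions recur on every presentation of a word, so their weights grow faster than boundary transitions, which are split across several possible successors; activation therefore peaks periodically on late word positions, with no word representations stored anywhere.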
10. Specificity of Motor Contributions to Auditory Statistical Learning. J Cogn 2024; 7:25. PMID: 38370867. PMCID: PMC10870951. DOI: 10.5334/joc.351.
Abstract
Statistical learning is the ability to extract patterned information from continuous sensory signals. Recent evidence suggests that auditory-motor mechanisms play an important role in auditory statistical learning from speech signals. The question remains whether auditory-motor mechanisms support such learning generally or in a domain-specific manner. In Experiment 1, we tested the specificity of motor processes contributing to learning patterns from speech sequences. Participants either whispered or clapped their hands while listening to structured speech. In Experiment 2, we focused on auditory specificity, testing whether whispering equally affects learning patterns from speech and non-speech sequences. Finally, in Experiment 3, we examined whether learning patterns from speech and non-speech sequences are correlated. Whispering had a stronger effect than clapping on learning patterns from speech sequences in Experiment 1. Moreover, whispering impaired statistical learning more strongly from speech than non-speech sequences in Experiment 2. Interestingly, while participants in the non-speech tasks spontaneously synchronized their motor movements with the auditory stream more than participants in the speech tasks, the effect of the motor movements on learning was stronger in the speech domain. Finally, no correlation between speech and non-speech learning was observed. Overall, our findings support the idea that learning statistical patterns from speech versus non-speech relies on segregated mechanisms, and that the speech motor system contributes to auditory statistical learning in a highly specific manner.
11. Playing hide and seek: Contextual regularity learning develops between 3 and 5 years of age. J Exp Child Psychol 2024; 238:105795. PMID: 37862788. DOI: 10.1016/j.jecp.2023.105795.
Abstract
The ability to acquire contextual regularities is fundamental in everyday life because it helps us to navigate the environment, directing our attention where relevant events are more likely to occur. Sensitivity to spatial regularities has been largely reported from infancy. Nevertheless, it is currently unclear when children can use this rapidly acquired contextual knowledge to guide their behavior. Evidence of this ability is indeed mixed in school-aged children and, to date, it has never been explored in younger children and toddlers. The current study investigated the development of contextual regularity learning in children aged 3 to 5 years. To this aim, we designed a new contextual learning paradigm in which young children were presented with recurring configurations of bushes and were asked to guess behind which bush a cartoon monkey was hiding. In a series of two experiments, we manipulated the relevance of color and visuospatial cues for the underlying task goal and tested how this affected young children's behavior. Our results bridge the gap between the infant and adult literatures, showing that sensitivity to spatial configurations persists from infancy to childhood, but it is only around the fifth year of life that children naturally start to integrate multiple cues to guide their behavior.
12. Cognitive and sensory expectations independently shape musical expectancy and pleasure. Philos Trans R Soc Lond B Biol Sci 2024; 379:20220420. PMID: 38104601. PMCID: PMC10725761. DOI: 10.1098/rstb.2022.0420.
Abstract
Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping, contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
13. The effect of hippocampal subfield damage on rapid temporal integration through statistical learning and associative inference. Neuropsychologia 2024; 193:108755. PMID: 38092332. DOI: 10.1016/j.neuropsychologia.2023.108755.
Abstract
INTRODUCTION: The hippocampus (HPC) supports integration of information across time, often indexed by associative inference (AI) and statistical learning (SL) tasks. In AI, an indirect association between stimuli that never appeared together is inferred, whereas SL involves learning item relationships by extracting regularities across experiences. A recent model of hippocampal function (Schapiro et al., 2017) proposes that the HPC can support temporal integration in both paradigms through its two distinct pathways. METHODS: We tested this model's predictions in four patients with varying degrees of bilateral HPC damage and matched healthy controls, including two patients with complementary damage to either the monosynaptic or the trisynaptic pathway. During AI, participants studied overlapping paired associates (AB, BC) and their memory was tested for premise pairs (AB) and for inferred pairs (AC). During SL, participants passively viewed a continuous picture sequence that contained an underlying structure of triplets that later had to be recognized. RESULTS: Binomial distributions were used to calculate above-chance performance at the individual level. For AI, patients with focal HPC damage were impaired at inference but could correctly infer pairs above chance once premise pair acquisition was equated to controls; however, the patient with HPC and cortical damage showed severe impairment at recalling premise and inferred pairs, regardless of accounting for premise pair performance. For SL, none of the patients performed above chance, but notably neither did most controls. CONCLUSIONS: Associative inference of indirect relationships can remain intact with damage to either hippocampal pathway or to the HPC more broadly, provided premise pairs can first be formed. Inference may remain intact through residual HPC tissue supporting premise pair acquisition, and/or through extra-hippocampal structures supporting inference at retrieval. Clear conclusions about hippocampal contributions to SL are precluded by low performance in controls, which we caution is not dissimilar to that in previous amnesic studies using the same task. This complicates interpretations of studies claiming the necessity of hippocampal contributions to SL and warrants the use of a common and reliable task before conclusions can be drawn.
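The individual-level above-chance criterion mentioned in the Results can be sketched as an exact one-sided binomial test; the trial counts below are illustrative, not the study's:

```python
from math import comb

def binomial_p_above_chance(k, n, p_chance):
    """Exact one-sided binomial probability P(X >= k) under chance responding."""
    return sum(comb(n, i) * p_chance ** i * (1 - p_chance) ** (n - i)
               for i in range(k, n + 1))

# e.g. 20 of 24 two-alternative forced-choice trials correct vs. chance of 0.5
p = binomial_p_above_chance(20, 24, 0.5)   # well below .05: above chance
```

Scoring each participant against this exact distribution is what allows the single-case claims above (one patient above chance, most controls not) without relying on group statistics.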
14. Does bilingual experience influence statistical language learning? Cognition 2024; 242:105639. PMID: 37857053. DOI: 10.1016/j.cognition.2023.105639.
Abstract
Statistical language learning (SL) tasks measure different aspects of foreign language learning. Studies have used SL tasks to investigate whether bilingual experience confers advantages in acquiring additional languages through implicit processes. However, the results have been inconsistent, which may be related to bilingualism-related features (e.g., degree of dissimilarity between the specific language pair) and other variables such as specific processes that are targeted by the SL task. In the present study, we compared the performance of one Spanish monolingual and two bilingual (Spanish-Basque and Spanish-English) groups across three well-established SL tasks. Each task targeted a different aspect of foreign language learning; specifically, word segmentation, morphological rule generalization, and word-referent learning. In Experiment 1, we manipulated sub-lexical phonotactic patterns to vary the difficulty of three SL tasks, with the results showing no differences between the groups in word segmentation. In Experiment 2, we included non-adjacent dependencies to target affixal morphology rule learning, but again no group-related differences were found. In Experiment 3, we addressed word learning using an audio-visual SL task combining exclusive and multiple word-referent mappings, and found that bilinguals outperformed monolinguals, suggesting that bilingualism may exert influences on SL at the lexical level. This advantage might have been mediated by the high working memory demands required to perform the task. Summarizing, this study shows no evidence for a general bilingual advantage in SL, although bilinguals may outperform monolinguals under specific experimental conditions such as SL tasks that place high demands on working memory processes. In addition, the similar performance of Spanish-Basque and Spanish-English bilinguals across all three SL tasks suggests that the degree of dissimilarity between pairs of spoken languages does not modulate SL skills.
15. Of words and whistles: Statistical learning operates similarly for identical sounds perceived as speech and non-speech. Cognition 2024; 242:105649. PMID: 37871411. DOI: 10.1016/j.cognition.2023.105649.
Abstract
Statistical learning is an ability that allows individuals to effortlessly extract patterns from the environment, such as sound patterns in speech. Some prior evidence suggests that statistical learning operates more robustly for speech compared to non-speech stimuli, supporting the idea that humans are predisposed to learn language. However, any apparent statistical learning advantage for speech could be driven by signal acoustics, rather than the subjective perception per se of sounds as speech. To resolve this issue, the current study assessed whether there is a statistical learning advantage for ambiguous sounds that are subjectively perceived as speech-like compared to the same sounds perceived as non-speech, thereby controlling for acoustic features. We first induced participants to perceive sine-wave speech (SWS)-a degraded form of speech not immediately perceptible as speech-as either speech or non-speech. After this induction phase, participants were exposed to a continuous stream of repeating trisyllabic nonsense words, composed of SWS syllables, and then completed an explicit familiarity rating task and an implicit target detection task to assess learning. Critically, participants showed robust and equivalent performance on both measures, regardless of their subjective speech perception. In contrast, participants who perceived the SWS syllables as more speech-like showed better detection of individual syllables embedded in speech streams. These results suggest that speech perception facilitates processing of individual sounds, but not the ability to extract patterns across sounds. Our findings suggest that statistical learning is not influenced by the perceived linguistic relevance of sounds, and that it may be conceptualized largely as an automatic, stimulus-driven mechanism.
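The statistical structure of such a continuous stream can be sketched by concatenating trisyllabic nonsense words and computing forward transitional probabilities; the syllables and words below are invented placeholders, not the study's SWS materials:

```python
import random
from collections import Counter

WORDS = [("tu", "pi", "ro"), ("go", "la", "bu"),
         ("bi", "da", "ku"), ("pa", "do", "ti")]   # invented trisyllabic words

def make_stream(n_words, rng):
    """Concatenate words in random order with no immediate repeats."""
    stream, prev = [], None
    for _ in range(n_words):
        word = rng.choice([w for w in WORDS if w != prev])
        stream.extend(word)
        prev = word
    return stream

def transition_probs(stream):
    """Forward transitional probability P(next | current) for each bigram."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {bigram: c / firsts[bigram[0]] for bigram, c in pairs.items()}

tp = transition_probs(make_stream(400, random.Random(0)))
# within-word transitions are fully predictive; word boundaries are not
```

This within-word versus boundary contrast in transitional probability is the only cue to word segmentation in such streams, which is why equivalent learning across the speech and non-speech induction groups indicates a stimulus-driven mechanism.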
16. Category Flexibility in Emotion Learning. Affective Science 2023; 4:722-730. PMID: 38156248. PMCID: PMC10751277. DOI: 10.1007/s42761-023-00192-3.
Abstract
Learners flexibly update category boundaries to adjust to the range of experiences they encounter. However, little is known about whether the degree of flexibility is consistent across domains. We examined whether categorization of social input, specifically emotions, is afforded more flexibility as compared to other biological input. To address this question, children (6-12 years; 32 female, 37 male; 7 Hispanic or Latino, 62 not Hispanic or Latino; 8 Black or African American, 14 multiracial, 46 White, 1 selected "other") categorized faces morphed from calm to upset and animals morphed from a horse to a cow across task phases that differed in the distribution of stimuli presented. Learners flexibly adjusted both emotion and animal category boundaries according to distributional information, yet children showed more flexibility when updating their category boundaries for emotions. These results provide support for the idea that children-who must adjust to the vast and varied emotional signals of their social partners-respond to social signals dynamically in order to make predictions about the internal states and future behaviors of others.
17
A multimodal cortical network of sensory expectation violation revealed by fMRI. Hum Brain Mapp 2023; 44:5871-5891. [PMID: 37721377 PMCID: PMC10619418 DOI: 10.1002/hbm.26482] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 02/21/2023] [Revised: 07/04/2023] [Accepted: 08/29/2023] [Indexed: 09/19/2023]
Abstract
The brain is subjected to multi-modal sensory information in an environment governed by statistical dependencies. Mismatch responses (MMRs), classically recorded with EEG, have provided valuable insights into the brain's processing of regularities and the generation of corresponding sensory predictions. Only a few studies allow for comparisons of MMRs across multiple modalities within a simultaneous sensory stream, and their cross-modal context sensitivity remains unknown. Here, we used a tri-modal version of the roving stimulus paradigm in fMRI to elicit MMRs in the auditory, somatosensory and visual modality. Participants (N = 29) were simultaneously presented with sequences of low and high intensity stimuli in each of the three senses while actively observing the tri-modal input stream and occasionally reporting the intensity of the previous stimulus in a prompted modality. The sequences were based on a probabilistic model, defining transition probabilities such that, for each modality, stimuli were more likely to repeat (p = .825) than change (p = .175) and stimulus intensities were equiprobable (p = .5). Moreover, each transition was conditional on the configuration of the other two modalities, comprising global (cross-modal) predictive properties of the sequences. We identified a shared mismatch network of modality-general inferior frontal and temporo-parietal areas as well as sensory areas, where the connectivity (psychophysiological interaction) between these regions was modulated during mismatch processing. Further, we found deviant responses within the network to be modulated by local stimulus repetition, which suggests highly comparable processing of expectation violation across modalities. Moreover, hierarchically higher regions of the mismatch network in the temporo-parietal area around the intraparietal sulcus were identified to signal cross-modal expectation violation. With the consistency of MMRs across audition, somatosensation and vision, our study provides insights into a shared cortical network of uni- and multi-modal expectation violation in response to sequence regularities.
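The per-modality transition structure of these sequences can be sketched as a two-state Markov chain with the stated repetition bias; this is an illustrative simulation of the sequence statistics (ignoring the cross-modal conditioning for simplicity), not the authors' stimulus code:

```python
import random

P_REPEAT = 0.825  # stimuli more likely to repeat than to change (p = .175)

def roving_sequence(n, seed=1):
    """Binary intensity sequence (0 = low, 1 = high) for one modality."""
    rng = random.Random(seed)
    seq = [rng.randrange(2)]            # equiprobable starting intensity (p = .5)
    for _ in range(n - 1):
        if rng.random() < P_REPEAT:     # repeat the previous intensity
            seq.append(seq[-1])
        else:                           # 'roving' change -> elicits a mismatch response
            seq.append(1 - seq[-1])
    return seq

seq = roving_sequence(20000)
repeats = sum(a == b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)
# The empirical repeat rate should approximate the generative probability.
assert abs(repeats - P_REPEAT) < 0.02
```

In the actual paradigm each modality's transition was additionally conditioned on the configuration of the other two modalities, so a faithful generator would use a joint transition table over all three binary states.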
18
Brain-imaging evidence for compression of binary sound sequences in human memory. eLife 2023; 12:e84376. [PMID: 37910588 PMCID: PMC10619979 DOI: 10.7554/elife.84376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/21/2022] [Accepted: 10/14/2023] [Indexed: 11/03/2023]
Abstract
According to the language-of-thought hypothesis, regular sequences are compressed in human memory using recursive loops akin to a mental program that predicts future items. We tested this theory by probing memory for 16-item sequences made of two sounds. We recorded brain activity with functional MRI and magneto-encephalography (MEG) while participants listened to a hierarchy of sequences of variable complexity, whose minimal description required transition probabilities, chunking, or nested structures. Occasional deviant sounds probed the participants' knowledge of the sequence. We predicted that task difficulty and brain activity would be proportional to the complexity derived from the minimal description length in our formal language. Furthermore, activity should increase with complexity for learned sequences, and decrease with complexity for deviants. These predictions were upheld in both fMRI and MEG, indicating that sequence predictions are highly dependent on sequence structure and become weaker and delayed as complexity increases. The proposed language recruited bilateral superior temporal, precentral, anterior intraparietal, and cerebellar cortices. These regions overlapped extensively with a localizer for mathematical calculation, and much less with spoken or written language processing. We propose that these areas collectively encode regular sequences as repetitions with variations and their recursive composition into nested structures.
19
Concurrent visual sequence learning. PSYCHOLOGICAL RESEARCH 2023; 87:2086-2100. [PMID: 36947194 PMCID: PMC10457409 DOI: 10.1007/s00426-023-01810-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/13/2022] [Accepted: 02/15/2023] [Indexed: 03/23/2023]
Abstract
Many researchers in the field of implicit statistical learning agree that there does not exist one general implicit learning mechanism, but rather that implicit learning takes place in highly specialized, encapsulated modules. However, the exact representational content of these modules is still under debate. While there is ample evidence for a distinction between modalities (e.g., visual, auditory perception), the representational content of the modules might even be distinguished by features within the same modality (e.g., location, color, and shape within the visual modality). In implicit sequence learning, there is evidence for the latter hypothesis, as a stimulus-color sequence can be learned concurrently with a stimulus-location sequence. Our aim was to test whether this also holds true for non-spatial features within the visual modality. This has been shown in artificial grammar learning, but not yet in implicit sequence learning. Hence, in Experiment 1, we replicated an artificial grammar learning experiment of Conway and Christiansen (2006) in which participants learned color and shape grammars concurrently. In Experiment 2, we investigated concurrent learning of sequences with an implicit sequence learning paradigm: the serial reaction time task. Here, we found evidence for concurrent learning of two sequences, a color and a shape sequence. Overall, the findings converge on the conclusion that implicit learning might be feature-based.
20
Paired-associate versus cross-situational: How do verbal working memory and word familiarity affect word learning? Mem Cognit 2023; 51:1670-1682. [PMID: 37012500 DOI: 10.3758/s13421-023-01421-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 03/23/2023] [Indexed: 04/05/2023]
Abstract
Word learning is one of the first steps into language, and vocabulary knowledge predicts reading, speaking, and writing ability. There are several pathways to word learning and little is known about how they differ. Previous research has investigated paired-associate (PAL) and cross-situational word learning (CSWL) separately, limiting the understanding of how the learning process compares across the two. In PAL, the roles of word familiarity and working memory have been thoroughly examined, but these same factors have received very little attention in CSWL. We randomly assigned 126 monolingual adults to PAL or CSWL. In each task, names of 12 novel objects were learned (six familiar words, six unfamiliar words). Logistic mixed-effects models examined whether word-learning paradigm, word type and working memory (measured with a backward digit-span task) predicted learning. Results suggest better learning performance in PAL and on familiar words. Working memory predicted word learning across paradigms, but no interactions were found between any of the predictors. This suggests that PAL is easier than CSWL, likely because of reduced ambiguity between the word and the referent, but that learning across both paradigms is equally enhanced by word familiarity, and similarly supported by working memory.
21
Learning to suppress a location is configuration-dependent. Atten Percept Psychophys 2023; 85:2170-2177. [PMID: 37258893 PMCID: PMC10584735 DOI: 10.3758/s13414-023-02732-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 05/11/2023] [Indexed: 06/02/2023]
Abstract
Where and what we attend to is largely determined by what we have encountered in the past. Recent studies have shown that people learn to extract statistical regularities in the environment, resulting in attentional suppression of locations that were likely to contain a distractor, effectively reducing the amount of attentional capture. Here, we asked whether this suppression effect due to statistical learning is dependent on the specific configuration within which it was learned. The current study employed the additional singleton paradigm using search arrays with a set size of either four or 10 items. Each set-size configuration contained its own high-probability distractor location. If learning generalized across set-size configurations, both high-probability locations would be suppressed equally, regardless of set size. If, however, learning to suppress is dependent on the configuration within which it was learned, one would expect suppression only at the high-probability location matching the configuration within which it was learned. The results support the latter, suggesting that implicitly learned suppression is configuration-dependent. Thus, we conclude that the high-probability location is learned within the configuration context within which it is presented.
22
Separate but not independent: Behavioral pattern separation and statistical learning are differentially affected by aging. Cognition 2023; 239:105564. [PMID: 37467624 DOI: 10.1016/j.cognition.2023.105564] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 08/25/2022] [Revised: 06/23/2023] [Accepted: 07/11/2023] [Indexed: 07/21/2023]
Abstract
Our brains are capable of discriminating similar inputs (pattern separation) and rapidly generalizing across inputs (statistical learning). Are these two processes dissociable in behavior? Here, we asked whether cognitive aging affects them in a differential or parallel manner. Older and younger adults were tested on their ability to discriminate between similar trisyllabic words and to extract trisyllabic words embedded in a continuous speech stream. Older adults demonstrated intact statistical learning on an implicit, reaction time-based measure and an explicit, familiarity-based measure of learning. However, they performed poorly in discriminating similar items presented in isolation, both for episodically-encoded items and for statistically-learned regularities. These results indicate that pattern separation and statistical learning are dissociable and differentially affected by aging. The acquisition of implicit representations of statistical regularities operates robustly into old age, whereas pattern separation influences the expression of statistical learning with high representational fidelity and is subject to age-related decline.
23
Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. [PMID: 37545304 PMCID: PMC10404931 DOI: 10.1098/rstb.2022.0342] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 03/10/2023] [Accepted: 06/29/2023] [Indexed: 08/08/2023]
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
24
Intact procedural memory and impaired auditory statistical learning in adults with dyslexia. Neuropsychologia 2023; 188:108638. [PMID: 37516235 PMCID: PMC10805067 DOI: 10.1016/j.neuropsychologia.2023.108638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/29/2022] [Revised: 05/08/2023] [Accepted: 07/03/2023] [Indexed: 07/31/2023]
Abstract
Developmental dyslexia is a reading disorder that is associated with atypical brain function. One neuropsychological theory posits that dyslexia reflects a deficit in the procedural memory system, which supports implicit learning, or the acquisition of knowledge without conscious awareness or intention. This study investigated various forms of procedural learning in adults with dyslexia and typically-reading adults. Adults with dyslexia exhibited typical skill learning on mirror tracing and rotary pursuit tasks that have been well-established as reflecting purely procedural memory and dependent on basal ganglia and cerebellar structures. They also exhibited typical statistical learning for visual material, but impaired statistical learning for auditory material. Auditory statistical learning proficiency correlated positively with single-word reading performance across all participants and within the group with dyslexia, linking a major difficulty in dyslexia with impaired auditory statistical learning. These findings dissociate multiple forms of procedural memory that are intact in dyslexia from a specific impairment in auditory statistical learning that is associated with reading difficulty.
25
Pinging the brain to reveal the hidden attentional priority map using encephalography. Nat Commun 2023; 14:4749. [PMID: 37550310 PMCID: PMC10406833 DOI: 10.1038/s41467-023-40405-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/21/2022] [Accepted: 07/27/2023] [Indexed: 08/09/2023]
Abstract
Attention has been usefully thought of as organized in priority maps - putative maps of space where attentional priority is weighted across spatial regions in a winner-take-all competition for attentional deployment. Recent work has highlighted the influence of past experiences on the weighting of spatial priority - called selection history. Aside from being distinct from more well-studied, top-down forms of attentional enhancement, little is known about the neural substrates of history-mediated attentional priority. Using a task known to induce statistical learning of target distributions, in an EEG study we demonstrate that this otherwise invisible, latent attentional priority map can be visualized during the intertrial period using a 'pinging' technique in conjunction with multivariate pattern analyses. Our findings not only offer a method of visualizing the history-mediated attentional priority map, but also shed light on the underlying mechanisms allowing our past experiences to influence future behavior.
26
The effect of load on spatial statistical learning. Sci Rep 2023; 13:11701. [PMID: 37474550 PMCID: PMC10359408 DOI: 10.1038/s41598-023-38404-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/29/2023] [Accepted: 07/07/2023] [Indexed: 07/22/2023]
Abstract
Statistical learning (SL), the extraction of regularities embedded in the environment, is often viewed as a fundamental and effortless process. However, whether spatial SL requires resources or can operate in parallel with other demands is still not clear. To examine this issue, we tested spatial SL using the standard lab experiment under concurrent demands during the familiarization phase: high- and low-cognitive load (Experiment 1) and spatial memory load (Experiment 2). We found that any type of high-load demand during familiarization abolished learning. Experiment 3 compared SL under spatial low-load and no-load. We found robust learning in the no-load condition that was dramatically reduced in the low-load condition. Finally, we compared a no-load condition with a very low-load, infrequent dot-probe condition that posed minimal demands while still requiring attention to the display (Experiment 4). The results showed, once again, that any concurrent task during the familiarization phase largely impaired spatial SL. Taken together, we conclude that spatial SL requires resources, a finding that challenges the view that the extraction of spatial regularities is automatic and implicit, and suggests that this fundamental learning process is not as effortless as typically assumed. We further discuss the practical and methodological implications of these findings.
27
Implicit and explicit learning of socio-emotional information in a dynamic interaction with a virtual avatar. PSYCHOLOGICAL RESEARCH 2023; 87:1057-1074. [PMID: 36036291 PMCID: PMC10191928 DOI: 10.1007/s00426-022-01709-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/06/2022] [Accepted: 06/27/2022] [Indexed: 10/15/2022]
Abstract
Implicit learning (IL) deals with the non-conscious acquisition of structural regularities from the environment. IL is often deemed essential for acquiring regularities in social stimuli (e.g., other persons' behavior), hence is hypothesized to play a role in typical social functioning. However, our understanding of how this process might operate in social contexts is limited for two main reasons. First, while IL is highly sensitive to the characteristics of the surface stimuli upon which it operates, most IL studies have used surface stimuli with limited social validity (e.g., letters, symbols, etc.). Second, while the social environment is dynamic (i.e., our behaviors and reactions influence those of our social partners and vice-versa), the bulk of IL research employed noninteractive paradigms. Using a novel task, we examine whether IL is involved in the acquisition of regularities from a dynamic interaction with a realistic, life-like agent. Participants (N = 115) interacted with a cinematic avatar that displayed different facial expressions. Their task was to regulate the avatar's expression to a specified level. Unbeknownst to them, an equation mediated the relationship between their responses and the avatar's expressions. Learning occurred in the task, as participants gradually increased their ability to bring the avatar into the target state. Subjective measures of awareness revealed that participants acquired both implicit and explicit knowledge from the task. This is the first study to show that IL operates in interactive situations upon socially relevant surface stimuli, facilitating future investigations of the role that IL plays in (a)typical social functioning.
28
Crossmodal interactions in human learning and memory. Front Hum Neurosci 2023; 17:1181760. [PMID: 37266327 PMCID: PMC10229776 DOI: 10.3389/fnhum.2023.1181760] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/07/2023] [Accepted: 05/02/2023] [Indexed: 06/03/2023]
Abstract
Most studies of memory and perceptual learning in humans have employed unisensory settings to simplify the study paradigm. However, in daily life we are often surrounded by complex and cluttered scenes made up of many objects and sources of sensory stimulation. Our experiences are, therefore, highly multisensory both when passively observing the world and when acting and navigating. We argue that human learning and memory systems have evolved to operate under these multisensory and dynamic conditions. The nervous system exploits the rich array of sensory inputs in this process, is sensitive to the relationship between the sensory inputs, and continuously updates sensory representations and encodes memory traces based on the relationship between the senses. We review some recent findings that demonstrate a range of human learning and memory phenomena in which the interactions between visual and auditory modalities play an important role, and suggest possible neural mechanisms that can underlie some surprising recent findings. We outline open questions as well as directions of future research to unravel human perceptual learning and memory.
29
Reading fluency and statistical learning across modalities and domains: Online and offline measures. PLoS One 2023; 18:e0281788. [PMID: 36952465 PMCID: PMC10035921 DOI: 10.1371/journal.pone.0281788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/26/2022] [Accepted: 02/01/2023] [Indexed: 03/25/2023]
Abstract
The vulnerability of statistical learning has been demonstrated in reading difficulties in both the visual and acoustic modalities. We examined segmentation abilities of Hungarian-speaking adolescents with different levels of reading fluency in the acoustic verbal and visual nonverbal domains. We applied online target detection tasks, where the extent of learning is reflected in differences between reaction times to predictable versus unpredictable targets. Explicit judgments of well-formedness were also elicited in an offline two-alternative forced choice (2AFC) task. Learning was evident in both the acoustic verbal and the visual nonverbal tasks, in both online and offline measures, but learning effects were larger in the acoustic verbal condition on both types of measure. We found no evidence of a significant relationship between statistical learning and reading fluency in adolescents in either modality. Together with earlier findings, these results suggest that the relationship between reading and statistical learning depends on the domain, modality and nature of the statistical learning task, on the reading task, on the age of participants, and on the specific language. The online target detection task is a promising tool which can be adapted to a wider set of tasks to further explore the contribution of statistical learning to reading acquisition in participants from different populations.
30
Statistical Learning Within Objects. Psychol Sci 2023; 34:501-511. [PMID: 36882101 DOI: 10.1177/09567976231154804] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 03/09/2023]
Abstract
Research has recently shown that efficient selection relies on the implicit extraction of environmental regularities, known as statistical learning. Although this has been demonstrated for scenes, similar learning arguably also occurs for objects. To test this, we developed a paradigm that allowed us to track attentional priority at specific object locations irrespective of the object's orientation in three experiments with young adults (all Ns = 80). Experiments 1a and 1b established within-object statistical learning by demonstrating increased attentional priority at relevant object parts (e.g., hammerhead). Experiment 2 extended this finding by demonstrating that learned priority generalized to viewpoints in which learning never took place. Together, these findings demonstrate that as a function of statistical learning, the visual system not only is able to tune attention relative to specific locations in space but also can develop preferential biases for specific parts of an object independently of the viewpoint of that object.
31
Aging effects and feasibility of statistical learning tasks across modalities. NEUROPSYCHOLOGY, DEVELOPMENT, AND COGNITION. SECTION B, AGING, NEUROPSYCHOLOGY AND COGNITION 2023; 30:201-230. [PMID: 34823443 DOI: 10.1080/13825585.2021.2007213] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Indexed: 01/21/2023]
Abstract
Knowledge on statistical learning (SL) in healthy elderly is scarce. Theoretically, it is not clear whether aging affects modality-specific and/or domain-general learning mechanisms. Practically, there is a lack of research on simplified SL tasks, which would ease the burden of testing in clinical populations. Against this background, we conducted two experiments across three modalities (auditory, visual and visuomotor) in a total of 93 younger and older adults. In Experiment 1, SL was induced in all modalities. Aging effects appeared in the tasks relying on an explicit posttest to assess SL. We hypothesize that declines in domain-general processes that predominantly modulate explicit learning mechanisms underlie these aging effects. In Experiment 2, more feasible tasks were developed for which the level of SL was maintained in all modalities, except the auditory modality. These tasks are more likely to successfully measure SL in elderly (patient) populations in which task demands can be problematic.
32
No reliable effect of task-irrelevant cross-modal statistical regularities on distractor suppression. Cortex 2023; 161:77-92. [PMID: 36913824 DOI: 10.1016/j.cortex.2023.02.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/31/2023] [Accepted: 02/06/2023] [Indexed: 02/23/2023]
Abstract
Our sensory systems are known to extract and utilize statistical regularities in sensory inputs across space and time for efficient perceptual processing. Past research has shown that participants can utilize statistical regularities of target and distractor stimuli independently within a modality, either to enhance target processing or to suppress distractor processing. Utilizing statistical regularities of task-irrelevant stimuli across different modalities also enhances target processing. However, it is not known whether distractor processing can also be suppressed by utilizing statistical regularities of a task-irrelevant stimulus of a different modality. In the present study, we investigated whether the spatial (Experiment 1) and non-spatial (Experiment 2) statistical regularities of a task-irrelevant auditory stimulus could suppress the salient visual distractor. We used an additional singleton visual search task with two high-probability colour singleton distractor locations. Critically, the spatial location of the high-probability distractor was either predictive (valid trials) or unpredictive (invalid trials) based on the statistical regularities of the task-irrelevant auditory stimulus. The results replicated earlier findings of distractor suppression at high-probability locations compared to locations where distractors appear with lower probability. However, the results did not show any RT advantage for valid distractor location trials as compared with invalid distractor location trials in either experiment. When tested on whether they could express awareness of the relationship between the specific auditory stimulus and the distractor location, participants showed explicit awareness only in Experiment 1. However, an exploratory analysis suggested a possibility of response biases in the awareness testing phase of Experiment 1. Overall, the results indicate that, irrespective of awareness of the relationship between auditory stimulus and distractor location regularities, there was no reliable influence of task-irrelevant auditory stimulus regularities on distractor suppression.
33
Dissociation Between Linguistic and Nonlinguistic Statistical Learning in Children with Autism. J Autism Dev Disord 2023:10.1007/s10803-023-05902-1. [PMID: 36749457 PMCID: PMC10404646 DOI: 10.1007/s10803-023-05902-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 01/11/2023] [Indexed: 02/08/2023]
Abstract
Statistical learning (SL), the ability to detect and extract regularities from inputs, is considered a domain-general building block for typical language development. We compared 55 verbal children with autism (ASD, 6-12 years) and 50 typically-developing children in four SL tasks. The ASD group exhibited reduced learning in the linguistic SL tasks (syllable and letter), but showed intact learning for the nonlinguistic SL tasks (tone and image). In the ASD group, better linguistic SL was associated with higher language skills measured by parental report and sentence recall. Therefore, the atypicality of SL in autism is not domain-general but tied to specific processing constraints related to verbal stimuli. Our findings provide a novel perspective for understanding language heterogeneity in autism.
34
Can adults with developmental dyslexia apply statistical knowledge to a new context? Cogn Process 2023; 24:129-145. [PMID: 36344856 DOI: 10.1007/s10339-022-01106-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/29/2021] [Accepted: 07/18/2022] [Indexed: 11/09/2022]
Abstract
We investigated transfer of artificial grammar learning in adults with and without dyslexia in 3 experiments. In Experiment 1, participants implicitly learned an artificial grammar system and were tested on new items that included the same symbols. In Experiment 2, participants were given practice with letter strings and then tested on strings created with a different letter set. In Experiment 3, participants were given practice with shapes and then tested on strings created with different shapes. Results show that in Experiment 1, both groups demonstrated utilization of pre-trained instances in the subsequent grammaticality judgement task, while in Experiments 2 (orthographic) and 3 (nonorthographic), only typically developed participants demonstrated application of knowledge from training to test. A post hoc analysis comparing between the experiments suggests that being trained and tested on an orthographic task leads to better performance than a nonorthographic task among typically developed adults but not among adults with dyslexia. Taken together, it appears that following extensive training, individuals with dyslexia are able to form stable representations from sequential stimuli and use them in a subsequent task that utilizes strings of similar symbols. However, the manipulation of the symbols challenges this ability.
Collapse
|
35
|
Incidental auditory category learning and visuomotor sequence learning do not compete for cognitive resources. Atten Percept Psychophys 2023; 85:452-462. [PMID: 36510102 DOI: 10.3758/s13414-022-02616-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/03/2022] [Indexed: 12/15/2022]
Abstract
The environment provides multiple regularities that might be useful in guiding behavior if one were able to learn their structure. Statistical learning across simultaneous regularities is important but poorly understood. We investigate learning across two domains: visuomotor sequence learning through the serial reaction time (SRT) task, and incidental auditory category learning via the systematic multimodal association reaction time (SMART) task. Several commonalities raise the possibility that these two learning phenomena may draw on common cognitive resources and neural networks. In each, participants are uninformed of the regularities that they come to use to guide actions, the outcomes of which may provide a form of internal feedback. We used dual-task conditions to compare learning of the regularities in isolation versus when they are simultaneously available to support behavior on a seemingly orthogonal visuomotor task. Learning occurred across the simultaneous regularities, without attenuation even when the informational value of a regularity was reduced by the presence of the additional, convergent regularity. Thus, the simultaneous regularities do not compete for associative strength, as in overshadowing effects. Moreover, visuomotor sequence learning and incidental auditory category learning do not appear to compete for common cognitive resources; learning across the simultaneous regularities was comparable to learning each regularity in isolation.
Collapse
|
36
|
Confidence of probabilistic predictions modulates the cortical response to pain. Proc Natl Acad Sci U S A 2023; 120:e2212252120. [PMID: 36669115 PMCID: PMC9942789 DOI: 10.1073/pnas.2212252120] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Accepted: 11/21/2022] [Indexed: 01/21/2023] Open
Abstract
Pain typically evolves over time, and the brain needs to learn this temporal evolution to predict how pain is likely to change in the future and orient behavior. This process is termed temporal statistical learning (TSL). Recently, it has been shown that TSL for pain sequences can be achieved using optimal Bayesian inference, which is encoded in somatosensory processing regions. Here, we investigate whether the confidence of these probabilistic predictions modulates the EEG response to noxious stimuli, using a TSL task. Confidence measures the uncertainty about the probabilistic prediction, irrespective of its actual outcome. Bayesian models dictate that the confidence about probabilistic predictions should be integrated with incoming inputs and weight learning, such that it modulates the early components of the EEG responses to noxious stimuli, and this should be captured by a negative correlation: when confidence is higher, the early neural responses are smaller as the brain relies more on expectations/predictions and less on sensory inputs (and vice versa). We show that participants were able to predict the sequence transition probabilities using Bayesian inference, with some forgetting. Then, we find that the confidence of these probabilistic predictions was negatively associated with the amplitude of the N2 and P2 components of the vertex potential: the more confident participants were about their predictions, the smaller the vertex potential. These results confirm key predictions of a Bayesian learning model and clarify the functional significance of the early EEG responses to nociceptive stimuli, as being implicated in confidence-weighted statistical learning.
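The authors' exact observer model is not reproduced in the abstract; as a minimal sketch (assuming a simple count-based Beta posterior with exponential forgetting, not the paper's precise formulation), the following shows how trial-by-trial transition predictions and a confidence score can be computed from a binary stimulus sequence:

```python
import math

def leaky_transition_learner(seq, leak=0.02):
    """Estimate binary transition probabilities with exponential forgetting.

    Returns, per transition, the prediction p(next == 1 | prev) made before
    the outcome, and a confidence score (negative log variance of the Beta
    posterior behind that prediction)."""
    # counts[prev] = [pseudo-count for next=0, pseudo-count for next=1],
    # initialised at 1 (a flat Beta(1, 1) prior per transition row)
    counts = {0: [1.0, 1.0], 1: [1.0, 1.0]}
    predictions, confidences = [], []
    for prev, nxt in zip(seq, seq[1:]):
        a, b = counts[prev]
        n = a + b
        predictions.append(b / n)
        var = (a * b) / (n ** 2 * (n + 1))  # Beta posterior variance
        confidences.append(-math.log(var))  # low variance -> high confidence
        # forgetting: decay every count a little, then add the observed event
        for row in counts.values():
            row[0] *= 1.0 - leak
            row[1] *= 1.0 - leak
        counts[prev][nxt] += 1.0
    return predictions, confidences
```

On a perfectly alternating sequence, the learner's prediction for the dominant transition approaches 1 while its confidence grows with accumulated evidence, mirroring the confidence-weighting idea in the abstract.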
Collapse
|
37
|
Parallel Acquisition of Uncorrelated Sequences Does Not Provide Firm Evidence for a Modular Sequence-Learning System. J Cogn 2023; 6:12. [PMID: 36721800 PMCID: PMC9854281 DOI: 10.5334/joc.258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Accepted: 12/22/2022] [Indexed: 01/19/2023] Open
Abstract
Dual-systems theories of sequence learning assume that sequence learning may proceed within a unidimensional learning system that is immune to cross-dimensional interference because information is processed and represented in dimension-specific, encapsulated modules. Important evidence for such modularity comes from studies investigating the absence or presence of interference between multiple uncorrelated sequences (e.g., a sequence of color stimuli and a sequence of motor keypresses). Here we question the premise that the parallel acquisition of uncorrelated sequences provides convincing evidence for a modularized learning system. In contrast, we demonstrate that parallel acquisition of multiple uncorrelated sequences is well predicted by a computational model that assumes a single learning system with joint representations of stimulus and response features.
Collapse
|
38
|
No need to forget, just keep the balance: Hebbian neural networks for statistical learning. Cognition 2023; 230:105176. [PMID: 36442955 DOI: 10.1016/j.cognition.2022.105176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 04/15/2022] [Accepted: 05/16/2022] [Indexed: 11/27/2022]
Abstract
Language processing in humans has long been proposed to rely on sophisticated learning abilities including statistical learning. Endress and Johnson (E&J, 2021) recently presented a neural network model for statistical learning based on Hebbian learning principles. This model accounts for word segmentation tasks, one primary paradigm in statistical learning. In this discussion paper we review this model and compare it with the Hebbian model previously presented by Tovar and Westermann (T&W, 2017a; 2017b; 2018) that has accounted for serial reaction time tasks, cross-situational learning, and categorization paradigms, all relevant in the study of statistical learning. We discuss the similarities and differences between both models, and their key findings. From our analysis, we question the concept of "forgetting" in the model of E&J and their suggestion of considering forgetting as the critical ingredient for successful statistical learning. We instead suggest that a set of simple but well-balanced mechanisms including spreading activation, activation persistence, and synaptic weight decay, all based on biologically grounded principles, allow modeling statistical learning in Hebbian neural networks, as demonstrated in the T&W model which successfully covers learning of nonadjacent dependencies and accounts for differences between typical and atypical populations, both aspects that have not been fully demonstrated in the E&J model. We outline the main computational and theoretical differences between the E&J and T&W approaches, present new simulation results, and discuss implications for the development of a computational cognitive theory of statistical learning.
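Neither the E&J nor the T&W network is specified in the abstract; as a toy illustration of the mechanism class it describes (assuming a bare-bones associator, with the three named ingredients: persisting activation, Hebbian strengthening of co-active units, and synaptic weight decay), one might sketch:

```python
def hebbian_sequence_weights(sequence, units, lr=0.1,
                             persistence=0.5, decay=0.01):
    """Toy Hebbian associator: activation persists across items, co-active
    units strengthen their connection, and all weights slowly decay."""
    idx = {u: k for k, u in enumerate(units)}
    n = len(units)
    w = [[0.0] * n for _ in range(n)]  # w[i][j]: association i -> j
    act = [0.0] * n
    for sym in sequence:
        new_act = [a * persistence for a in act]  # activation persists
        new_act[idx[sym]] = 1.0                   # current input fully active
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += lr * act[i] * new_act[j]  # Hebbian: pre x post
                    w[i][j] *= 1.0 - decay               # synaptic weight decay
        act = new_act
    return w, idx
```

Fed a stream in which "a" is usually followed by "b" and only rarely by "c", the a-to-b weight ends up well above the a-to-c weight: frequent transitions win out without any explicit forgetting rule beyond the balanced decay.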
Collapse
|
39
|
Modality, presentation, domain and training effects in statistical learning. Sci Rep 2022; 12:20878. [PMID: 36463280 PMCID: PMC9719496 DOI: 10.1038/s41598-022-24951-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 11/22/2022] [Indexed: 12/07/2022] Open
Abstract
While several studies suggest that the nature and properties of the input have significant effects on statistical learning, they have rarely been investigated systematically. In order to understand how input characteristics and their interactions impact statistical learning, we explored the effects of modality (auditory vs. visual), presentation type (serial vs. simultaneous), domain (linguistic vs. non-linguistic), and training type (random, starting small, starting big) on artificial grammar learning in young adults (N = 360). With serial presentation of stimuli, learning was more effective in the auditory than in the visual modality. However, with simultaneous presentation of visual and serial presentation of auditory stimuli, the modality effect was not present. We found a significant domain effect as well: a linguistic advantage over nonlinguistic material, which was driven by the domain effect in the auditory modality. Overall, the auditory linguistic condition had an advantage over other modality-domain types. Training types did not have any overall effect on learning; starting big enhanced performance only in the case of serial visual presentation. These results show that input characteristics such as modality, presentation type, domain and training type influence statistical learning, and suggest that their effects are also dependent on the specific stimuli and structure to be learned.
Collapse
|
40
|
Computational and neural mechanisms of statistical pain learning. Nat Commun 2022; 13:6613. [PMID: 36329014 PMCID: PMC9633765 DOI: 10.1038/s41467-022-34283-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 10/11/2022] [Indexed: 11/06/2022] Open
Abstract
Pain invariably changes over time. These fluctuations contain statistical regularities which, in theory, could be learned by the brain to generate expectations and control responses. We demonstrate that humans learn to extract these regularities and explicitly predict the likelihood of forthcoming pain intensities in a manner consistent with optimal Bayesian inference with dynamic update of beliefs. Healthy participants received probabilistic, volatile sequences of low and high-intensity electrical stimuli to the hand during brain fMRI. The inferred frequency of pain correlated with activity in sensorimotor cortical regions and dorsal striatum, whereas the uncertainty of these inferences was encoded in the right superior parietal cortex. Unexpected changes in stimulus frequencies drove the update of internal models by engaging premotor, prefrontal and posterior parietal regions. This study extends our understanding of sensory processing of pain to include the generation of Bayesian internal models of the temporal statistics of pain.
Collapse
|
41
|
The role of the hippocampus in statistical learning and language recovery in persons with post stroke aphasia. Neuroimage Clin 2022; 36:103243. [PMID: 36306718 PMCID: PMC9668653 DOI: 10.1016/j.nicl.2022.103243] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 10/17/2022] [Accepted: 10/19/2022] [Indexed: 11/11/2022]
Abstract
Although several studies have aimed for accurate predictions of language recovery in post stroke aphasia, individual language outcomes remain hard to predict. Large-scale prediction models are built using data from patients mainly in the chronic phase after stroke, although it is clinically more relevant to consider data from the acute phase. Previous research has mainly focused on deficits, i.e., behavioral deficits or specific brain damage, rather than compensatory mechanisms, i.e., intact cognitive skills or undamaged brain regions. One such unexplored brain region that might support language (re)learning in aphasia is the hippocampus, a region that has commonly been associated with an individual's learning potential, including statistical learning. This refers to a set of mechanisms upon which we rely heavily in daily life to learn a range of regularities across cognitive domains. Against this background, thirty-three patients with aphasia (22 males and 11 females, M = 69.76 years, SD = 10.57 years) were followed for 1 year in the acute (1-2 weeks), subacute (3-6 months) and chronic phase (9-12 months) post stroke. We evaluated the unique predictive value of early structural hippocampal measures for short-term and long-term language outcomes (measured by the ANELT). In addition, we investigated whether statistical learning abilities were intact in patients with aphasia using three different tasks: an auditory-linguistic and visual task based on the computation of transitional probabilities and a visuomotor serial reaction time task. Finally, we examined the association of individuals' statistical learning potential with acute measures of hippocampal gray and white matter. Using Bayesian statistics, we found moderate evidence for the contribution of left hippocampal gray matter in the acute phase to the prediction of long-term language outcomes, over and above information on the lesion and the initial language deficit (measured by the ScreeLing). Non-linguistic statistical learning in patients with aphasia, measured in the subacute phase, was intact at the group level compared to 23 healthy older controls (8 males and 15 females, M = 74.09 years, SD = 6.76 years). Visuomotor statistical learning correlated with acute hippocampal gray and white matter. These findings reveal that particularly left hippocampal gray matter in the acute phase is a potential marker of language recovery after stroke, possibly through its statistical learning ability.
Collapse
|
42
|
Learning words without trying: Daily second language podcasts support word-form learning in adults. Psychon Bull Rev 2022; 30:751-762. [PMID: 36175820 DOI: 10.3758/s13423-022-02190-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/12/2022] [Indexed: 11/08/2022]
Abstract
Spoken language contains overlapping patterns across different levels, from syllables to words to phrases. The discovery of these structures may be partially supported by statistical learning (SL), the unguided, automatic extraction of regularities from the environment through passive exposure. SL supports word learning in artificial language experiments, but few studies have examined whether it scales up to support natural language learning in adult second language learners. Here, adult English speakers (n = 70) listened to daily podcasts in either Italian or English for 2 weeks while going about their normal routines. To measure word knowledge, participants provided familiarity ratings of Italian words and nonwords both before and after the listening period. Critically, compared with English controls, Italian listeners significantly improved in their ability to discriminate Italian words and nonwords. These results suggest that unguided exposure to natural, foreign language speech supports the extraction of relevant word features and the development of nascent word forms. At a theoretical level, these findings indicate that SL may effectively scale up to support real-world language acquisition. These results also have important practical implications, suggesting that adult learners may be able to acquire relevant speech patterns and initial word forms simply by listening to the language. This form of learning can occur without explicit effort, formal instruction or focused study.
Collapse
|
43
|
Statistical Learning of Language: A Meta-Analysis Into 25 Years of Research. Cogn Sci 2022; 46:e13198. [PMID: 36121309 DOI: 10.1111/cogs.13198] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Revised: 08/16/2022] [Accepted: 08/22/2022] [Indexed: 11/29/2022]
Abstract
Statistical learning is a key concept in our understanding of language acquisition. Ample work has highlighted its role in numerous linguistic functions-yet statistical learning is not a unitary construct, and its consistency across different language properties remains unclear. In a meta-analysis of auditory-linguistic statistical learning research spanning the last 25 years, we evaluated how learning varies across different language properties in infants, children, and adults and surveyed the methodological trends in the literature. We found robust learning across stimuli (syllables, words, etc.) in infants, and across stimuli and structures (adjacent dependencies, non-adjacent dependencies, etc.) in adults, with larger effect sizes when multiple cues were present. However, the analysis also showed significant publication bias and revealed a tendency toward using a narrow range of simplified language properties, including in the strength of the transitional probabilities used during training. Bayes factor analyses revealed prevalent data insensitivity of moderators commonly hypothesized to impact learning, such as the amount of exposure and transitional probability strength, which contradict core theoretical assumptions in the field. Methodological factors, such as the tasks used at test, also significantly impacted effect sizes in adults and children, suggesting that choice of task may critically constrain current theories of how statistical learning operates. Collectively, our results suggest that auditory-linguistic statistical learning has the kind of robustness needed to play a foundational role in language acquisition, but that more research is warranted to reveal its full potential.
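The transitional-probability statistic that the meta-analysed studies manipulate during training is straightforward to compute; a minimal sketch (with an invented two-word syllable stream purely for illustration):

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count(x, y) / count(x as a pair onset): the adjacent
    dependency statistic used in word-segmentation studies."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    onset_counts = Counter(syllables[:-1])
    return {(x, y): c / onset_counts[x] for (x, y), c in pair_counts.items()}
```

In a stream built from the made-up words "tupiro" and "golabu", within-word TPs are high (here 1.0) while TPs spanning a word boundary are lower, which is precisely the cue learners are assumed to exploit.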
Collapse
|
44
|
Abstract representations of small sets in newborns. Cognition 2022; 226:105184. [PMID: 35671541 PMCID: PMC9289748 DOI: 10.1016/j.cognition.2022.105184] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 03/22/2022] [Accepted: 05/26/2022] [Indexed: 11/21/2022]
Abstract
From the very first days of life, newborns are not tied to represent narrow, modality- and object-specific aspects of their environment. Rather, they sometimes react to abstract properties shared by stimuli of very different nature, such as approximate numerosity or magnitude. As of now, however, there is no evidence that newborns possess abstract representations that apply to small sets: in particular, while newborns can match large approximate numerosities across senses, this ability does not extend to small numerosities. In two experiments, we presented newborn infants (N = 64, age 17 to 98 h) with patterned sets AB or ABB simultaneously in the auditory and visual modalities. Auditory patterns were presented as periodic sequences of sounds (AB: triangle-drum-triangle-drum-triangle-drum …; ABB: triangle-drum-drum-triangle-drum-drum-triangle-drum-drum …), and visual patterns as arrays of 2 or 3 shapes (AB: circle-diamond; ABB: circle-diamond-diamond). In both experiments, we found that participants reacted and looked longer when the patterns matched across the auditory and visual modalities – provided that the first stimulus they received was congruent. These findings uncover the existence of yet another type of abstract representations at birth, applying to small sets. As such, they bolster the hypothesis that newborns are endowed with the capacity to represent their environment in broad strokes, in terms of its most abstract properties. This capacity for abstraction could later serve as a scaffold for infants to learn about the particular entities surrounding them. Highlights: newborns were presented with auditory and visual patterns (AB vs. ABB); participants reacted when the patterns were congruent across modalities; newborns possess abstract representations applying to small sets; these representations may encode numerosity and/or repetitions.
Collapse
|
45
|
Emotional Faces Facilitate Statistical Learning. AFFECTIVE SCIENCE 2022; 3:662-672. [PMID: 36385906 PMCID: PMC9537398 DOI: 10.1007/s42761-022-00130-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Accepted: 06/07/2022] [Indexed: 06/16/2023]
Abstract
Detecting regularities and extracting patterns is a vital skill to organize complex information in our environments. Statistical learning, a process where we detect regularities by attending to relationships between cues in our environment, contributes to knowledge acquisition across myriad domains. However, less is known about how emotional cues-specifically facial configurations of emotion-influence statistical learning. Here, we tested two pre-registered aims to advance knowledge about emotional signals and statistical learning: (1) we examined statistical learning in the context of emotional compared to non-emotional information, and (2) we assessed how emotional congruency (i.e., whether facial stimuli conveyed the same, or different emotions) influenced regularity extraction. We demonstrated statistical learning in the context of emotional signals. Further, we showed that statistical learning occurs more efficiently in the context of emotional faces. We also established that congruent cues benefited an online measure of statistical learning, but had varied effects when statistical learning was assessed via post-exposure recognition test. The results shed light on how affective signals influence well-studied cognitive skills and address a knowledge gap about how cue congruency impacts statistical learning, including how emotional cues might guide predictions in our social world.
Collapse
|
46
|
Language learning in aphasia: A narrative review and critical analysis of the literature with implications for language therapy. Neurosci Biobehav Rev 2022; 141:104825. [PMID: 35963544 DOI: 10.1016/j.neubiorev.2022.104825] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Revised: 08/07/2022] [Accepted: 08/09/2022] [Indexed: 11/24/2022]
Abstract
People with aphasia (PWA) present with language deficits including word retrieval difficulties after brain damage. Language learning is an essential life-long human capacity that may support treatment-induced language recovery after brain insult. This prospect has motivated a growing interest in the study of language learning in PWA during the last few decades. Here, we critically review the current literature on language learning ability in aphasia. The existing studies in this area indicate that (i) language learning can remain functional in some PWA, (ii) inter-individual variability in learning performance is large in PWA, (iii) language processing, short-term memory and lesion site are associated with learning ability, (iv) preliminary evidence suggests a relationship between learning ability and treatment outcomes in this population. Based on the reviewed evidence, we propose a potential account for the interplay between language and memory/learning systems to explain spared/impaired language learning and its relationship to language therapy in PWA. Finally, we indicate potential avenues for future research that may promote more cross-talk between cognitive neuroscience and aphasia rehabilitation.
Collapse
|
47
|
The Effects of Cooperative and Competitive Situations on Statistical Learning. Brain Sci 2022; 12:brainsci12081059. [PMID: 36009122 PMCID: PMC9405654 DOI: 10.3390/brainsci12081059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2022] [Revised: 08/05/2022] [Accepted: 08/08/2022] [Indexed: 11/16/2022] Open
Abstract
Devising cooperative or competitive situations is an important teaching strategy in educational practices. Nevertheless, there is still controversy regarding which situation is better for learning. This study was conducted to explore the effects of cooperative and competitive situations on statistical learning, through the alternating serial reaction time (ASRT) task. Individual cooperative and competitive situations were devised in this study, in which individual situation served as the control condition. Ninety recruited participants were randomly assigned to a cooperative, competitive, or individual group to perform the ASRT task. For general learning, cooperative and competitive situations could indeed make learners respond faster, and there was no significant difference in the RT between the cooperative and competitive groups. Moreover, statistical learning was observed in all three groups. An additional analysis of the early stage of the experiment showed that the learning effect of the competitive group was greater than those of the cooperative and individual groups, in terms of statistical learning. However, the final learning effect was not significantly different among the three groups. Overall, the cooperative and competitive situations had a positive impact on learning and enabled the students to acquire approximately the same learning effect in a shorter time period, compared with the individual situation. Specifically, the competitive situation accelerated the statistical learning process but not the general learning process.
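The structure of the ASRT task used above can be illustrated with a short sketch (an illustrative construction, not the authors' stimulus code): fixed pattern trials alternate with random trials, so triplets whose first and third elements follow the pattern occur more often than others, and that frequency imbalance is what "statistical learning" indexes here.

```python
import random
from collections import Counter

def asrt_stream(pattern, n_blocks, rng):
    """Build an ASRT trial stream: pattern trial, random trial, pattern
    trial, random trial, ... (P r P r) for n_blocks passes over the
    pattern."""
    out = []
    for _ in range(n_blocks):
        for p in pattern:
            out.append(p)                    # fixed pattern trial
            out.append(rng.choice(pattern))  # random trial
    return out

def triplet_counts(stream):
    """Count all overlapping three-trial 'triplets' in the stream."""
    return Counter(zip(stream, stream[1:], stream[2:]))
```

Counting triplets in a generated stream shows the imbalance directly: triplets whose outer elements are consecutive in the pattern are roughly twice as frequent as the rest.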
Collapse
|
48
|
Detecting non-adjacent dependencies is the exception rather than the rule. PLoS One 2022; 17:e0270580. [PMID: 35834512 PMCID: PMC9282578 DOI: 10.1371/journal.pone.0270580] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 06/14/2022] [Indexed: 11/24/2022] Open
Abstract
Statistical learning refers to our sensitivity to the distributional properties of our environment. Humans have been shown to readily detect the dependency relationship of events that occur adjacently in a stream of stimuli but processing non-adjacent dependencies (NADs) appears more challenging. In the present study, we tested the ability of human participants to detect NADs in a new Hebb-naming task that has been proposed recently to study regularity detection in a noisy environment. In three experiments, we found that most participants did not manage to extract NADs. These results suggest that the ability to learn NADs in noise is the exception rather than the rule. They provide new information about the limits of statistical learning mechanisms.
Collapse
|
49
|
What to expect where and when: how statistical learning drives visual selection. Trends Cogn Sci 2022; 26:860-872. [PMID: 35840476 DOI: 10.1016/j.tics.2022.06.001] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 05/30/2022] [Accepted: 06/02/2022] [Indexed: 12/26/2022]
Abstract
While the visual environment contains massive amounts of information, we should not and cannot pay attention to all events. Instead, we need to direct attention to those events that have proven to be important in the past and suppress those that were distracting and irrelevant. Experiences molded through a learning process enable us to extract and adapt to the statistical regularities in the world. While previous studies have shown that visual statistical learning (VSL) is critical for representing higher order units of perception, here we review the role of VSL in attentional selection. Evidence suggests that through VSL, attentional priority settings are optimally adjusted to regularities in the environment, without intention and without conscious awareness.
Collapse
|
50
|
Acquiring Complex Communicative Systems: Statistical Learning of Language and Emotion. Top Cogn Sci 2022; 14:432-450. [PMID: 35398974 PMCID: PMC9465951 DOI: 10.1111/tops.12612] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2011] [Revised: 03/16/2022] [Accepted: 03/17/2022] [Indexed: 11/30/2022]
Abstract
During the early postnatal years, most infants rapidly learn to understand two naturally evolved communication systems: language and emotion. While these two domains include different types of content knowledge, it is possible that similar learning processes subserve their acquisition. In this review, we compare the learnable statistical regularities in language and emotion input. We then consider how domain-general learning abilities may underlie the acquisition of language and emotion, and how this process may be constrained in each domain. This comparative developmental approach can advance our understanding of how humans learn to communicate with others.
Collapse
|