1. Natural language syntax complies with the free-energy principle. Synthese 2024; 203:154. [PMID: 38706520] [PMCID: PMC11068586] [DOI: 10.1007/s11229-024-04566-3]
Abstract
Natural language syntax yields an unbounded array of hierarchically structured expressions. We claim that these are used in the service of active inference in accord with the free-energy principle (FEP). While conceptual advances alongside modelling and simulation work have attempted to connect speech segmentation and linguistic communication with the FEP, we extend this program to the underlying computations responsible for generating syntactic objects. We argue that recently proposed principles of economy in language design, such as "minimal search" criteria from theoretical syntax, adhere to the FEP. This affords the FEP a greater degree of explanatory power with respect to higher language functions, and offers linguistics a grounding in first principles with respect to computability. While we mostly focus on building new principled conceptual relations between syntax and the FEP, we also show, through a sample of preliminary examples, how both tree-geometric depth and a Kolmogorov complexity estimate (recruiting a Lempel-Ziv compression algorithm) can be used to accurately predict legal operations on syntactic workspaces, directly in line with formulations of variational free energy minimization. This is used to motivate a general principle of language design that we term Turing-Chomsky Compression (TCC). We use TCC to align the concerns of linguists with the normative account of self-organization furnished by the FEP, marshalling evidence from theoretical linguistics and psycholinguistics to ground core principles of efficient syntactic computation within active inference.
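The Lempel-Ziv complexity estimate mentioned in this abstract can be illustrated with a phrase-counting sketch. The abstract does not specify which LZ variant the authors recruit, so the following minimal example uses the classic LZ76 parsing over a plain string encoding of a syntactic workspace (the encoding itself is an illustrative assumption): the fewer distinct phrases the parse needs, the more compressible the string, and hence the lower its estimated Kolmogorov complexity.

```python
def lz76_complexity(s: str) -> int:
    """Count the phrases in the LZ76 parsing of s.

    A crude, computable proxy for Kolmogorov complexity: strings
    with more internal repetition parse into fewer phrases.
    """
    i, n, count = 0, len(s), 0
    while i < n:
        # Extend the current phrase until it no longer occurs
        # earlier in the string (allowing one novel final symbol).
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        count += 1
        i += l
    return count

# A highly repetitive "workspace string" parses into fewer phrases
# than a less regular one of the same length.
regular = lz76_complexity("ababababab")
irregular = lz76_complexity("abbabaabab")
```

On this measure, a derivation step that keeps the workspace compressible (low phrase count) would be preferred, in the spirit of the paper's TCC proposal.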
2. Disintegration at the Syntax-Semantics Interface in Prodromal Alzheimer's Disease: New Evidence from Complex Sentence Anaphora in Amnestic Mild Cognitive Impairment (aMCI). Journal of Neurolinguistics 2024; 70:101190. [PMID: 38370310] [PMCID: PMC10871704] [DOI: 10.1016/j.jneuroling.2023.101190]
Abstract
Although diverse language deficits have been widely observed in prodromal Alzheimer's disease (AD), the underlying nature of such deficits and their explanation remain opaque. Consequently, both clinical applications and brain-language models are not well defined. In this paper we report results from two experiments that test language production in a group of individuals with amnestic Mild Cognitive Impairment (aMCI) in contrast to healthy aging and healthy young adults. The experiments apply factorial designs informed by linguistic analysis to test two forms of complex sentences involving anaphora (relations between pronouns and their antecedents). Results show that aMCI individuals differentiate forms of anaphora depending on sentence structure, with selective impairment of sentences that involve construal with reference to context (anaphoric coreference). We argue that aMCI individuals maintain core structural knowledge while showing deficient syntax-semantics integration, thus locating the source of the deficit in the language-thought interface of the Language Faculty.
3. Cognitive Computational Neuroscience of Language: Using Computational Models to Investigate Language Processing in the Brain. Neurobiology of Language (Cambridge, Mass.) 2024; 5:1-6. [PMID: 38645621] [PMCID: PMC11025655] [DOI: 10.1162/nol_e_00131]
4. Infant-directed speech facilitates word learning through attentional mechanisms: An fNIRS study of toddlers. Dev Sci 2024; 27:e13424. [PMID: 37322865] [DOI: 10.1111/desc.13424]
Abstract
The speech register that adults, especially caregivers, use when interacting with infants and toddlers, that is, infant-directed speech (IDS) or baby talk, has been reported to facilitate language development throughout the early years. However, the neural mechanisms underlying this facilitatory effect, and why IDS produces it, remain to be investigated. The current study uses functional near-infrared spectroscopy (fNIRS) to evaluate two alternative hypotheses of such a facilitative effect: that IDS serves to enhance linguistic contrastiveness, or that it attracts the child's attention. Behavioral and fNIRS data were acquired from twenty-seven Cantonese-learning toddlers 15-20 months of age while their parents spoke to them in either an IDS or adult-directed speech (ADS) register in a naturalistic task in which the child learned four disyllabic pseudowords. fNIRS results showed significantly greater neural responses to the IDS than the ADS register in the left dorsolateral prefrontal cortex (L-dlPFC), but the opposite response pattern in the bilateral inferior frontal gyrus (IFG). The differences in fNIRS responses to IDS and ADS in the L-dlPFC and the left parietal cortex (L-PC) correlated significantly and positively with the differences in toddlers' behavioral word-learning performance. The same fNIRS measures in the L-dlPFC and right PC (R-PC) were significantly correlated with parents' pitch-range differences between the two speech conditions. Together, our results suggest that, compared to ADS, the dynamic prosody of IDS increased toddlers' attention through greater involvement of the left frontoparietal network, which facilitated word learning. RESEARCH HIGHLIGHTS: This study examined, for the first time, the neural mechanisms by which infant-directed speech (IDS) facilitates word learning in toddlers. Using fNIRS, we identified the cortical regions directly involved in IDS processing. Our results suggest that IDS facilitates word learning by engaging right-lateralized prosody processing and top-down attentional mechanisms in the left frontoparietal network. The language network, including the inferior frontal gyrus and temporal cortex, was not directly involved in IDS processing to support word learning.
5. Cleaning up the Brickyard: How Theory and Methodology Shape Experiments in Cognitive Neuroscience of Language. J Cogn Neurosci 2023; 35:2067-2088. [PMID: 37713672] [DOI: 10.1162/jocn_a_02058]
Abstract
The capacity for language is a defining property of our species, yet despite decades of research, evidence on its neural basis is still mixed and a generalized consensus is difficult to achieve. We suggest that this is partly caused by researchers defining "language" in different ways, with focus on a wide range of phenomena, properties, and levels of investigation. Accordingly, there is very little agreement among cognitive neuroscientists of language on the operationalization of fundamental concepts to be investigated in neuroscientific experiments. Here, we review chains of derivation in the cognitive neuroscience of language, focusing on how the hypothesis under consideration is defined by a combination of theoretical and methodological assumptions. We first attempt to disentangle the complex relationship between linguistics, psychology, and neuroscience in the field. Next, we focus on how conclusions that can be drawn from any experiment are inherently constrained by auxiliary assumptions, both theoretical and methodological, on which the validity of conclusions drawn rests. These issues are discussed in the context of classical experimental manipulations as well as study designs that employ novel approaches such as naturalistic stimuli and computational modeling. We conclude by proposing that a highly interdisciplinary field such as the cognitive neuroscience of language requires researchers to form explicit statements concerning the theoretical definitions, methodological choices, and other constraining factors involved in their work.
6. Cognitive Neuroscience Perspectives on Language Acquisition and Processing. Brain Sci 2023; 13:1613. [PMID: 38137061] [PMCID: PMC10741862] [DOI: 10.3390/brainsci13121613]
Abstract
The earliest investigations of the neural implementation of language started with examining patients with various types of disorders and underlying brain damage [...].
7. Spatial navigation and memory: A review of the similarities and differences relevant to brain models and age. Neuron 2023; 111:1037-1049. [PMID: 37023709] [PMCID: PMC10083890] [DOI: 10.1016/j.neuron.2023.03.001]
Abstract
Spatial navigation and memory are often seen as heavily intertwined at the cognitive and neural levels of analysis. We review models that hypothesize a central role for the medial temporal lobes, including the hippocampus, in both navigation and aspects of memory, particularly allocentric navigation and episodic memory. While these models have explanatory power in instances in which they overlap, they are limited in explaining functional and neuroanatomical differences. Focusing on human cognition, we explore the idea of navigation as a dynamically acquired skill and memory as an internally driven process, which may better account for the differences between the two. We also review network models of navigation and memory, which place a greater emphasis on connections rather than the functions of focal brain regions. These models, in turn, may have greater explanatory power for the differences between navigation and memory and the differing effects of brain lesions and age.
8. A Critical Perspective on the (Neuro)biological Foundations of Language and Linguistic Cognition. Integr Psychol Behav Sci 2022. [PMID: 36562960] [DOI: 10.1007/s12124-022-09741-0]
Abstract
The biological foundations of language reflect assumptions about the way language and biology relate to one another, and with the rise of biological studies of language, we appear to have come closer to a deep understanding of linguistic cognition, the part of cognition constituted by language. This article argues that relations of neurobiological and genetic instantiation between linguistic cognition and the underlying biological substrate are ultimately irrelevant to understanding the higher-level structure and form of language. Linguistic patterns, and those that make up the character of cognition constituted by language, do not simply arise from the biological substrate, because higher-level structures typically assume forms based on constraints that emerge only once these new levels are constructed. The goal is not to show how the mapping problem between linguistic cognition and neurobiology can be solved. Rather, it is to show that the mapping problem ceases to exist once a different understanding of language-(neuro)biology relations is embraced. To this end, the article first uncovers a number of logical and conceptual fallacies in strategies deployed to understand language-(neuro)biology relations. Having shown these flaws, it then offers an alternative view of language-biology relations that shows how biological constraints shape the nature and form of language, making it what it is.
9. Computational complexity explains neural differences in quantifier verification. Cognition 2022; 223:105013. [DOI: 10.1016/j.cognition.2022.105013]
10. Moving beyond domain-specific vs. domain-general options in cognitive neuroscience. Cortex 2022; 154:259-268. [DOI: 10.1016/j.cortex.2022.05.004]
11. The evolution of combinatoriality and compositionality in hominid tool use: a comparative perspective. Int J Primatol 2022. [DOI: 10.1007/s10764-021-00267-7]
12. Functional differentiation in the language network revealed by lesion-symptom mapping. Neuroimage 2021; 247:118778. [PMID: 34896587] [PMCID: PMC8830186] [DOI: 10.1016/j.neuroimage.2021.118778]
Abstract
Theories of language organization in the brain commonly posit that different regions underlie distinct linguistic mechanisms. However, such theories have been criticized on the grounds that many neuroimaging studies of language processing find similar effects across regions. Moreover, condition by region interaction effects, which provide the strongest evidence of functional differentiation between regions, have rarely been offered in support of these theories. Here we address this by using lesion-symptom mapping in three large, partially-overlapping groups of aphasia patients with left hemisphere brain damage due to stroke (N = 121, N = 92, N = 218). We identified multiple measure by region interaction effects, associating damage to the posterior middle temporal gyrus with syntactic comprehension deficits, damage to posterior inferior frontal gyrus with expressive agrammatism, and damage to inferior angular gyrus with semantic category word fluency deficits. Our results are inconsistent with recent hypotheses that regions of the language network are undifferentiated with respect to high-level linguistic processing.
13.
Abstract
Syntax, the structure of sentences, enables humans to express an infinite range of meanings through finite means. The neurobiology of syntax has been intensely studied but with little consensus. Two main candidate regions have been identified: the posterior inferior frontal gyrus (pIFG) and the posterior middle temporal gyrus (pMTG). Integrating research in linguistics, psycholinguistics, and neuroscience, we propose a neuroanatomical framework for syntax that attributes distinct syntactic computations to these regions in a unified model. The key theoretical advances are adopting a modern lexicalized view of syntax in which the lexicon and syntactic rules are intertwined, and recognizing a computational asymmetry in the role of syntax during comprehension and production. Our model postulates a hierarchical lexical-syntactic function to the pMTG, which interconnects previously identified speech perception and conceptual-semantic systems in the temporal and inferior parietal lobes, crucial for both sentence production and comprehension. These relational hierarchies are transformed via the pIFG into morpho-syntactic sequences, primarily tied to production. We show how this architecture provides a better account of the full range of data and is consistent with recent proposals regarding the organization of phonological processes in the brain.
14. The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using linear discriminative learning. Behav Res Methods 2021; 53:945-976. [PMID: 32377973] [PMCID: PMC8219637] [DOI: 10.3758/s13428-020-01356-w]
Abstract
Pseudowords have long served as key tools in psycholinguistic investigations of the lexicon. A common assumption underlying the use of pseudowords is that they are devoid of meaning: comparing words and pseudowords may then shed light on how meaningful linguistic elements are processed differently from meaningless sound strings. However, pseudowords may in fact carry meaning. On the basis of a computational model of lexical processing, linear discriminative learning (LDL; Baayen et al., Complexity, 2019), we compute numeric vectors representing the semantics of pseudowords. We demonstrate that quantitative measures gauging the semantic neighborhoods of pseudowords predict reaction times in the Massive Auditory Lexical Decision (MALD) database (Tucker et al., 2018). We also show that the model successfully predicts the acoustic durations of pseudowords. Importantly, model predictions hinge on the hypothesis that the mechanisms underlying speech production and comprehension interact. Thus, pseudowords emerge as an outstanding tool for gauging the resonance between production and comprehension. Many pseudowords in the MALD database contain inflectional suffixes. Unlike many contemporary models, LDL captures the semantic commonalities of forms sharing inflectional exponents without using the linguistic construct of morphemes. We discuss methodological and theoretical implications for models of lexical processing and morphological theory. The results of this study, complementing those on real words reported in Baayen et al. (Complexity, 2019), thus provide further evidence for the usefulness of LDL both as a cognitive model of the mental lexicon and as a tool for generating new quantitative measures that are predictive of human lexical processing.
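The core of LDL is a linear mapping between numeric form vectors and semantic vectors, estimated in closed form. A minimal sketch follows, with small random matrices standing in for the real cue and semantic matrices; all dimensions and names here are illustrative assumptions, not the paper's actual setup (real LDL models derive form vectors from triphone cues and semantic vectors from distributional semantics).

```python
import numpy as np

# Toy stand-ins: 5 "words", 8 form features, 4 semantic features.
rng = np.random.default_rng(0)
C = rng.normal(size=(5, 8))   # form matrix (one row per word)
S = rng.normal(size=(5, 4))   # semantic matrix (one row per word)

# Comprehension is modeled as a single linear map F with C @ F ≈ S,
# estimated by least squares.
F, *_ = np.linalg.lstsq(C, S, rcond=None)

# A pseudoword gets a semantic vector by applying the same mapping:
# its "meaning" is its projection into the learned semantic space.
pseudo = rng.normal(size=(1, 8))
s_hat = pseudo @ F

# Semantic-neighborhood measures (here, cosine similarity to each
# real word's vector) can then serve as predictors of reaction times.
sims = (S @ s_hat.T).ravel() / (
    np.linalg.norm(S, axis=1) * np.linalg.norm(s_hat))
nearest = int(sims.argmax())
```

The closed-form estimation is what distinguishes this approach from iterative connectionist training: the mapping is fully determined by the form and meaning matrices.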
15.
Abstract
The brain basis of language, music, and emotion can be studied from the perspective of the psychological and cognitive sciences. Does this approach link to concerns of the humanities meaningfully? We outline prospects of developing a genuine neurohumanities research program.
16. Against the Epistemological Primacy of the Hardware: The Brain from Inside Out, Turned Upside Down. eNeuro 2020; 7(4):ENEURO.0215-20.2020. [PMID: 32769167] [PMCID: PMC7415919] [DOI: 10.1523/eneuro.0215-20.2020]
Abstract
Before he wrote the recent book The Brain from Inside Out, the neuroscientist György Buzsáki previewed some of the arguments in a paper written 20 years ago (“The brain-cognitive behavior problem: a retrospective”), now finally published. The principal focus of the paper is the relationship between neuroscience and psychology. The direction in which that research had proceeded, and continues now, is, in his view, fundamentally misguided. Building on the critique, Buzsáki presents arguments for an “inside-out” approach, wherein the study of neurobiological objects has primacy over using psychological concepts to study the brain, and should, in fact, give rise to them. We argue that he is too pessimistic, and actually not quite right, about how the relation between cognition and neuroscience can be studied. Second, we are not in agreement with the normative recommendation of how to proceed: a predominantly brain first, implementation-driven research agenda. Finally, we raise concerns about the philosophical underpinning of the research program he advances. Buzsáki’s perspective merits careful examination, and we suggest that it can be linked in a productive way to ongoing research, aligning his inside-out approach with current work that yields convincing accounts of mind and brain.
17.
Abstract
Hierarchical structure and compositionality imbue human language with unparalleled expressive power and set it apart from other perception–action systems. However, neither formal nor neurobiological models account for how these defining computational properties might arise in a physiological system. I attempt to reconcile hierarchy and compositionality with principles from cell assembly computation in neuroscience; the result is an emerging theory of how the brain could convert distributed perceptual representations into hierarchical structures across multiple timescales while representing interpretable incremental stages of (de)compositional meaning. The model's architecture—a multidimensional coordinate system based on neurophysiological models of sensory processing—proposes that a manifold of neural trajectories encodes sensory, motor, and abstract linguistic states. Gain modulation, including inhibition, tunes the path in the manifold in accordance with behavior and is how latent structure is inferred. As a consequence, predictive information about upcoming sensory input during production and comprehension is available without a separate operation. The proposed processing mechanism is synthesized from current models of neural entrainment to speech, concepts from systems neuroscience and category theory, and a symbolic-connectionist computational model that uses time and rhythm to structure information. I build on evidence from cognitive neuroscience and computational modeling that suggests a formal and mechanistic alignment between structure building and neural oscillations, and moves toward unifying basic insights from linguistics and psycholinguistics with the currency of neural computation.
18. Hierarchical Structure in Sequence Processing: How to Measure It and Determine Its Neural Implementation. Top Cogn Sci 2020; 12:910-924. [PMID: 31364310] [PMCID: PMC7496673] [DOI: 10.1111/tops.12442]
Abstract
In many domains of human cognition, hierarchically structured representations are thought to play a key role. In this paper, we start with some foundational definitions of key phenomena like "sequence" and "hierarchy," and then outline potential signatures of hierarchical structure that can be observed in behavioral and neuroimaging data. Appropriate behavioral methods include classic ones from psycholinguistics along with some from the more recent artificial grammar learning and sentence processing literature. We then turn to neuroimaging evidence for hierarchical structure with a focus on the functional MRI literature. We conclude that, although a broad consensus exists about a role for a neural circuit incorporating the inferior frontal gyrus, the superior temporal sulcus, and the arcuate fasciculus, considerable uncertainty remains about the precise computational function(s) of this circuitry. An explicit theoretical framework, combined with an empirical approach focusing on distinguishing between plausible alternative hypotheses, will be necessary for further progress.
19. In Search of a New Paradigm for Functional Magnetic Resonance Experimentation With Language. Front Neurol 2020; 11:588. [PMID: 32670188] [PMCID: PMC7326770] [DOI: 10.3389/fneur.2020.00588]
Abstract
Human language can convey a broad range of entities and relationships through processes that are highly complex and structured. All of these processes happen somewhere inside our brains, and one way of localizing them is functional magnetic resonance imaging (fMRI). The great obstacle when experimenting with complex processes, however, is the need to control them while still obtaining data that are representative of reality. When it comes to language, a phenomenon that is interactional in nature and integrates a wide range of processes, a question emerges concerning how compatible it is with current experimental methodology, and how much of it is lost in order to fit the controlled experimental environment. Because of its particularities, the fMRI technique imposes several limitations on the expression of language during experimentation. This paper discusses the different conceptions of language as a research object, the difficulties of combining this object with the requirements of fMRI, and the current perspectives for this field of research.
20. Disentangling sequential from hierarchical learning in Artificial Grammar Learning: Evidence from a modified Simon Task. PLoS One 2020; 15:e0232687. [PMID: 32407332] [PMCID: PMC7224470] [DOI: 10.1371/journal.pone.0232687]
Abstract
In this paper we probe the interaction between sequential and hierarchical learning by investigating implicit learning in a group of school-aged children. We administered a serial reaction time task, in the form of a modified Simon Task in which the stimuli were organised following the rules of two distinct artificial grammars, specifically Lindenmayer systems: the Fibonacci grammar (Fib) and the Skip grammar (a modification of the former). The choice of grammars is determined by the goal of this study, which is to investigate how sensitivity to structure emerges in the course of exposure to an input whose surface transitional properties (by hypothesis) bootstrap structure. The studies conducted to date have been mainly designed to investigate low-level superficial regularities, learnable in purely statistical terms, whereas hierarchical learning has not been effectively investigated yet. The possibility to directly pinpoint the interplay between sequential and hierarchical learning is instead at the core of our study: we presented children with two grammars, Fib and Skip, which share the same transitional regularities, thus providing identical opportunities for sequential learning, while crucially differing in their hierarchical structure. More particularly, there are specific points in the sequence (k-points), which, despite giving rise to the same transitional regularities in the two grammars, support hierarchical reconstruction in Fib but not in Skip. In our protocol, children were simply asked to perform a traditional Simon Task, and they were completely unaware of the real purposes of the task. Results indicate that sequential learning occurred in both grammars, as shown by the decrease in reaction times throughout the task, while differences were found in the sensitivity to k-points: these, we contend, play a role in hierarchical reconstruction in Fib, whereas they are devoid of structural significance in Skip. 
More particularly, we found that children were faster at k-points in sequences produced by Fib, providing an entirely new kind of evidence for the hypothesis that implicit learning involves an early activation of strategies of hierarchical reconstruction, based on a straightforward interplay with the statistically based computation of transitional regularities over sequences of symbols.
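The Fibonacci grammar in this study is a Lindenmayer system, so its stimulus sequences can be generated by parallel rewriting. A small sketch, assuming the standard two-rule formulation A → AB, B → A (a labeling assumption, since the abstract does not spell out the rewrite rules; the Skip grammar's modification is likewise unspecified, so only Fib is shown):

```python
def lsystem(axiom: str, rules: dict, steps: int) -> str:
    """Rewrite every symbol in parallel at each step, as L-systems do."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Fibonacci grammar: generation lengths follow the Fibonacci numbers,
# and the self-similar output is what (by hypothesis) lets surface
# transitional regularities bootstrap hierarchical structure.
FIB = {"A": "AB", "B": "A"}
generations = [lsystem("A", FIB, n) for n in range(5)]
# → ['A', 'AB', 'ABA', 'ABAAB', 'ABAABABA']
```

Because every generation is a prefix-compatible rewriting of the previous one, specific positions (the k-points of the study) acquire a privileged structural role even though their local transitional statistics match those of the control grammar.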
21. Grid coding, spatial representation, and navigation: Should we assume an isomorphism? Hippocampus 2020; 30:422-432. [PMID: 31742364] [PMCID: PMC7409510] [DOI: 10.1002/hipo.23175]
Abstract
Grid cells provide a compelling example of a link between cellular activity and an abstract and difficult to define concept like space. Accordingly, a representational perspective on grid coding argues that neural grid coding underlies a fundamentally spatial metric. Recently, some theoretical proposals have suggested extending such a framework to nonspatial cognition as well, such as category learning. Here, we provide a critique of the frequently employed assumption of an isomorphism between patterns of neural activity (e.g., grid cells), mental representation, and behavior (e.g., navigation). Specifically, we question the strict isomorphism between these three levels and suggest that human spatial navigation is perhaps best characterized by a wide variety of both metric and nonmetric strategies. We offer an alternative perspective on how grid coding might relate to human spatial navigation, arguing that grid coding is part of a much larger conglomeration of neural activity patterns that dynamically tune to accomplish specific behavioral outputs.
22. Merge-Generability as the Key Concept of Human Language: Evidence From Neuroscience. Front Psychol 2019; 10:2673. [PMID: 31849777] [PMCID: PMC6895067] [DOI: 10.3389/fpsyg.2019.02673]
Abstract
Ever since the inception of generative linguistics, various dependency patterns have been widely discussed in the literature, particularly as they pertain to the hierarchy based on “weak generation” – the so-called Chomsky Hierarchy. However, humans can make any possible dependency patterns by using artificial means on a sequence of symbols (e.g., computer programing). The differences between sentences in human language and general symbol sequences have been routinely observed, but the question as to why such differences exist has barely been raised. Here, we address this problem and propose a theoretical explanation in terms of a new concept of “Merge-generability,” that is, whether the structural basis for a given dependency is provided by the fundamental operation Merge. In our functional magnetic resonance imaging (fMRI) study, we tested the judgments of noun phrase (NP)-predicate (Pred) pairings in sentences of Japanese, an SOV language that allows natural, unbounded nesting configurations. We further introduced two pseudo-adverbs, which artificially force dependencies that do not conform to structures generated by Merge, i.e., non-Merge-generable; these adverbs enable us to manipulate Merge-generability (Natural or Artificial). By employing this novel paradigm, we obtained the following results. Firstly, the behavioral data clearly showed that an NP-Pred matching task became more demanding under the Artificial conditions than under the Natural conditions, reflecting cognitive loads that could be covaried with the increased number of words. Secondly, localized activation in the left frontal cortex, as well as in the left middle temporal gyrus and angular gyrus, was observed for the [Natural – Artificial] contrast, indicating specialization of these left regions in syntactic processing. Any activation due to task difficulty was completely excluded from activations in these regions, because the Natural conditions were always easier than the Artificial ones. 
Finally, the [Artificial – Natural] contrast revealed activation in the dorsal portion of the left frontal cortex, together with widespread regions recruited for general cognitive demands. These results indicate that Merge-generable sentences are processed in these specific regions in contrast to non-Merge-generable sentences, demonstrating that Merge is indeed a fundamental operation, which comes into play especially under the Natural conditions.
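The contrast between Merge-generable (nested) and non-Merge-generable (crossing) dependency patterns can be illustrated with a toy check. This is a minimal sketch under simplifying assumptions: dependencies are encoded as word-index pairs, and the `is_nested` helper is a hypothetical illustration of well-nestedness, not the authors' analysis.

```python
def is_nested(deps):
    """Return True if no two dependencies cross.

    Nested (well-bracketed) dependency patterns are the kind a
    Merge-style, hierarchy-building operation naturally yields in
    this simplified model; crossing patterns are not.
    deps: list of (i, j) index pairs with i < j.
    """
    for (a, b) in deps:
        for (c, d) in deps:
            if a < c < b < d:  # the two spans overlap without nesting
                return False
    return True

# Nested NP-Pred pairings, schematically NP1 NP2 Pred2 Pred1:
print(is_nested([(0, 3), (1, 2)]))   # True
# Crossing pairings, schematically NP1 NP2 Pred1 Pred2:
print(is_nested([(0, 2), (1, 3)]))   # False
```

In this toy encoding, the Japanese nesting configurations used in the study correspond to the first, well-nested case, while the pseudo-adverb conditions force patterns like the second.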
|
23
|
Electrophysiological evidence of phonemotopic representations of vowels in the primary and secondary auditory cortex. Cortex 2019; 121:385-398. [PMID: 31678684 DOI: 10.1016/j.cortex.2019.09.016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Revised: 05/18/2019] [Accepted: 09/20/2019] [Indexed: 11/25/2022]
Abstract
How the brain encodes the speech acoustic signal into phonological representations is a fundamental question for the neurobiology of language. Determining whether this process is characterized by tonotopic maps in primary or secondary auditory areas, with bilateral or leftward activity, remains a long-standing challenge. Magnetoencephalographic studies have failed to show hierarchical and asymmetric signatures of speech processing. We employed high-density electroencephalography to map the Salento Italian vowel system onto cortical sources using the N1 auditory evoked component. We found evidence that the N1 is characterized by hierarchical and asymmetrical indexes in primary and secondary auditory areas structuring vowel representations. Importantly, the N1 was characterized by early and late phases. The early N1 peaked at 125-135 msec and was localized in the primary auditory cortex; the late N1 peaked at 145-155 msec and was localized in the left superior temporal gyrus. We showed that early in the primary auditory cortex, the cortical spatial arrangements-along the lateral-medial and anterior-posterior gradients-are broadly warped by phonemotopic patterns according to the distinctive feature principle. These phonemotopic patterns are further refined in the superior temporal gyrus along the inferior-superior and anterior-posterior gradients. The dynamical and hierarchical interface between primary and secondary auditory areas and the interaction effects between Height and Place features generate the categorical representation of the Salento Italian vowels.
|
24
|
Processing Sentences With Multiple Negations: Grammatical Structures That Are Perceived as Unacceptable. Front Psychol 2019; 10:2346. [PMID: 31695644 PMCID: PMC6817463 DOI: 10.3389/fpsyg.2019.02346] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Accepted: 10/01/2019] [Indexed: 11/13/2022] Open
Abstract
This investigation draws from research on negative polarity item (NPI) illusions in order to explore a new and interesting instance of misalignment observed for grammatical sentences containing two negative markers. Previous research has shown that unlicensed NPIs can be perceived as acceptable when occurring soon after a structurally inaccessible negation (e.g., ever in *The bills that no senators voted for have ever become law). Here we examine the opposite configuration: grammatical sentences created by substituting the NPI ever with the negative adverb never (e.g., The bills that no senators voted for have never become law). The processing and acceptability of these sentences were studied using three tasks: a speeded acceptability judgment (Experiment 1), a self-paced reading task (Experiment 2), and an offline acceptability rating (Experiment 3). The results are consistent across measures in showing that the integration of the adverb never is disrupted by the linearly preceding but structurally inaccessible negative quantifier no in the relative clause. In our view, this pattern of results is in line with Parker and Phillips' (2016) proposal that NPI illusions arise when the context containing the inaccessible negation has not been fully encoded by the time the NPI ever is encountered, making the embedded negative quantifier transparently available as a licensor. In a similar vein, the disruption effects observed for grammatical sentences containing two negative elements could arise if the negative quantifier is still being integrated when never is encountered, forcing the parser to deal with two negative elements simultaneously. This interpretation suggests that the same incomplete encodings that could be ameliorating the online perception of unlicensed NPIs could also be responsible for deteriorating the perception of the sentences under investigation here. This would represent an illusion of ungrammaticality. 
Furthermore, these results provide evidence against the speculation that NPI illusions are the consequence of misrepresenting ever as its near neighbor never, given that continuations with never are judged as unacceptable in spite of their grammaticality. Together, these findings inform the landscape of hypotheses on NPI illusions and offer valuable insights into the complexity of multiple negations and the relation between processing difficulty and acceptability.
|
25
|
Merging Generative Linguistics and Psycholinguistics. Front Psychol 2018; 9:2283. [PMID: 30546329 PMCID: PMC6279886 DOI: 10.3389/fpsyg.2018.02283] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Accepted: 11/02/2018] [Indexed: 12/17/2022] Open
|
26
|
Abstract
It is clear that the left inferior frontal gyrus (LIFG) contributes in some fashion to sentence processing. While neuroimaging and neuropsychological evidence support a domain-general working memory function, recent neuroimaging data show that particular subregions of the LIFG, particularly the pars triangularis (pTri), show selective activation for sentences relative to verbal working memory and cognitive control tasks. These data suggest a language-specific function rather than a domain-general one. To resolve this apparent conflict, I propose separating claims of domain-generality and specificity independently for computations and representations: a given brain region may respond to a specific representation while performing a general computation over that representation, one shared with other systems. I hypothesize that the pTri underlies a language-specific working memory system, composed of general memory retrieval/attention operations specialized for syntactic representations. A parallel top-down retrieval function exists at the phonological and semantic levels, localized to the pars opercularis and pars orbitalis, respectively. I further explore the idea of how such a system emerges in the human brain through the framework of neuronal retuning: the "borrowing" of domain-general mechanisms for language, either in evolution or development. The empirical data appear to tentatively support a developmental account of language-specificity in the pTri, possibly through connections to the posterior superior temporal sulcus (pSTS), a region that is both anatomically distinct in humans and functionally essential for language. Evidence of representational response specificity obtained from neuroimaging studies is useful in understanding how cognition is implemented in the brain.
However, understanding the shared computations across domains and neural systems is necessary for a fuller understanding of this problem, providing potential answers to questions of how specialized systems, such as language, are implemented in the brain.
|
27
|
Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition 2018; 180:135-157. [PMID: 30053570 PMCID: PMC6145924 DOI: 10.1016/j.cognition.2018.06.018] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2017] [Revised: 06/05/2018] [Accepted: 06/24/2018] [Indexed: 12/26/2022]
Abstract
Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
|
28
|
|
29
|
|
30
|
Backward Dependencies and in-Situ wh-Questions as Test Cases on How to Approach Experimental Linguistics Research That Pursues Theoretical Linguistics Questions. Front Psychol 2018; 8:2237. [PMID: 29375417 PMCID: PMC5769353 DOI: 10.3389/fpsyg.2017.02237] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2017] [Accepted: 12/11/2017] [Indexed: 11/13/2022] Open
Abstract
The empirical study of language is a young field in contemporary linguistics. This being the case, and following a natural development process, the field is currently at a stage where different research methods and experimental approaches are being called into question with respect to their validity. Without pretending to provide an answer with respect to the best way to conduct linguistics-related experimental research, in this article we aim to examine the process that researchers follow in the design and implementation of experimental linguistics research with the goal of validating specific theoretical linguistic analyses. First, we discuss the general challenges that experimental work faces in finding a compromise between addressing theoretically relevant questions and being able to implement these questions in a specific controlled experimental paradigm. We discuss the Granularity Mismatch Problem (Poeppel and Embick, 2005), which addresses the challenges faced by research that tries to bridge the representations and computations of language with their psycholinguistic/neurolinguistic evidence, and the basic assumptions that interdisciplinary research needs to consider due to the different conceptual granularity of the objects under study. To illustrate the practical implications of the points addressed, we compare two approaches to performing linguistic experimental research by reviewing a number of our own studies strongly grounded in theoretically informed questions. First, we show how linguistic phenomena similar at a conceptual level can be tested within the same language using measurement of event-related potentials (ERP) by discussing results from two ERP experiments on the processing of long-distance backward dependencies that involve coreference and negative polarity items respectively in Dutch.
Second, we examine how the same linguistic phenomenon can be tested in different languages using reading time measures by discussing the outcome of four self-paced reading experiments on the processing of in-situ wh-questions in Mandarin Chinese and French. Finally, we review the implications that our findings have for the specific theoretical linguistics questions that we originally aimed to address. We conclude with an overview of the general insights that can be gained from the role of structural hierarchy and grammatical constraints in processing and the existing limitations on the generalization of results.
|
31
|
Only time will tell - why temporal information is essential for our neuroscientific understanding of semantics. Psychon Bull Rev 2017; 23:1072-9. [PMID: 27294424 PMCID: PMC4974259 DOI: 10.3758/s13423-015-0873-9] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Theoretical developments about the nature of semantic representations and processes should be accompanied by a discussion of how these theories can be validated on the basis of empirical data. Here, I elaborate on the link between theory and empirical research, highlighting the need for temporal information in order to distinguish fundamental aspects of semantics. The generic point that fast cognitive processes demand fast measurement techniques has been made many times before, although arguably more often in the psychophysiological community than in the metabolic neuroimaging community. Many reviews on the neuroscience of semantics mostly or even exclusively focus on metabolic neuroimaging data. Following an analysis of semantics in terms of the representations and processes involved, I argue that fundamental theoretical debates about the neuroscience of semantics can only be settled on the basis of data with sufficient temporal resolution. Any "semantic effect" may result from a conflation of long-term memory representations, retrieval and working memory processes, mental imagery, and episodic memory. This poses challenges for all neuroimaging modalities, but especially for those with low temporal resolution. It also throws doubt on the usefulness of contrasts between meaningful and meaningless stimuli, which may differ on a number of semantic and non-semantic dimensions. I will discuss the consequences of this analysis for research on the role of convergence zones or hubs and distributed modal brain networks, top-down modulation by task and context, as well as interactivity between levels of the processing hierarchy, for example in the framework of predictive coding.
|
32
|
Probabilistic language models in cognitive neuroscience: Promises and pitfalls. Neurosci Biobehav Rev 2017; 83:579-588. [PMID: 28887227 DOI: 10.1016/j.neubiorev.2017.09.001] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2016] [Revised: 07/19/2017] [Accepted: 09/02/2017] [Indexed: 11/19/2022]
Abstract
Cognitive neuroscientists of language comprehension study how neural computations relate to cognitive computations during comprehension. On the cognitive part of the equation, it is important that the computations and processing complexity are explicitly defined. Probabilistic language models can be used to give a computationally explicit account of language complexity during comprehension. Such models have so far been evaluated predominantly against behavioral data; only recently have they been used to explain neurobiological signals. Measures obtained from these models emphasize the probabilistic, information-processing view of language understanding and provide a set of tools that can be used for testing neural hypotheses about language comprehension. Here, we provide a cursory review of the theoretical foundations and example neuroimaging studies employing probabilistic language models. We highlight the advantages and potential pitfalls of this approach and indicate avenues for future research.
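The word-by-word complexity measure such models typically supply is surprisal: the negative log-probability of a word given its context. The following is a toy sketch, not any study's actual model; the miniature corpus, the bigram estimator, and the add-one smoothing are all illustrative assumptions.

```python
import math
from collections import Counter

# Toy corpus; real studies use large corpora or trained language models.
corpus = "the dog chased the cat the cat saw the dog".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev), with add-one smoothing."""
    vocab = len(unigrams)
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return -math.log2(p)

# Frequent continuations yield low surprisal; rare ones yield high surprisal.
for prev, word in zip(corpus, corpus[1:]):
    print(f"{prev} -> {word}: {surprisal(prev, word):.2f} bits")
```

In the neuroimaging work reviewed here, per-word measures of this kind are regressed against brain signals to test hypotheses about where and when comprehension tracks probabilistic structure.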
|
33
|
|
34
|
The role of the IFG and pSTS in syntactic prediction: Evidence from a parametric study of hierarchical structure in fMRI. Cortex 2017; 88:106-123. [DOI: 10.1016/j.cortex.2016.12.010] [Citation(s) in RCA: 65] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2016] [Revised: 09/01/2016] [Accepted: 12/09/2016] [Indexed: 10/20/2022]
|
35
|
Commentary: "An Evaluation of Universal Grammar and the Phonological Mind"-UG Is Still a Viable Hypothesis. Front Psychol 2016; 7:1029. [PMID: 27471480 PMCID: PMC4943953 DOI: 10.3389/fpsyg.2016.01029] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2016] [Accepted: 06/23/2016] [Indexed: 11/16/2022] Open
Abstract
Everett (2016b) criticizes The Phonological Mind thesis (Berent, 2013a,b) on logical, methodological and empirical grounds. Most of Everett’s concerns are directed toward the hypothesis that the phonological grammar is constrained by universal grammatical (UG) principles. Contrary to Everett’s logical challenges, here I show that the UG hypothesis is readily falsifiable, that universality is not inconsistent with innateness (Everett’s arguments to the contrary are rooted in a basic confusion of the UG phenotype and the genotype), and that its empirical evaluation does not require a full evolutionary account of language. A detailed analysis of one case study, the syllable hierarchy, presents a specific demonstration that people have knowledge of putatively universal principles that are unattested in their language and these principles are most likely linguistic in nature. Whether Universal Grammar exists remains unknown, but Everett’s arguments hardly undermine the viability of this hypothesis.
|
36
|
Abstract
This paper examines the factors conditioning the production of linguistic variables in real time by individual speakers: the study of what we term the dynamics of variation in individuals. We propose a framework that recognizes three types of factors conditioning variation: sociostylistic (s-), internal linguistic (i-), and psychophysiological (p-). We develop two main points against this background. The first is that sequences of variants produced by individuals display systematic patterns that can be understood in terms of s-conditioning and p-conditioning (with a focus on the latter). The second main point is that p-conditioning and i-conditioning are distinct in their mental implementations; this claim has implications for understanding the locality of the factors conditioning alternations, for the universality and language-specificity of variation, and for the general question of whether grammar and language use are distinct. Throughout the paper, questions about the dynamics of variation in individuals are set against the typical community-centered variationist perspective, with an eye towards showing how findings in the two domains, though differing in explanatory focus, can ultimately be mutually informative.
|
37
|
Abstract
Neural oscillations at distinct frequencies are increasingly being related to a number of basic and higher cognitive faculties. Oscillations enable the construction of coherently organized neuronal assemblies through establishing transitory temporal correlations. By exploring the elementary operations of the language faculty—labeling, concatenation, cyclic transfer—alongside neural dynamics, a new model of linguistic computation is proposed. It is argued that the universality of language, and the true biological source of Universal Grammar, is not to be found purely in the genome as has long been suggested, but more specifically within the extraordinarily preserved nature of mammalian brain rhythms employed in the computation of linguistic structures. Computational-representational theories are used as a guide in investigating the neurobiological foundations of the human “cognome”—the set of computations performed by the nervous system—and new directions are suggested for how the dynamics of the brain (the “dynome”) operate and execute linguistic operations. The extent to which brain rhythms are the suitable neuronal processes which can capture the computational properties of the human language faculty is considered against a backdrop of existing cartographic research into the localization of linguistic interpretation. Particular focus is placed on labeling, the operation elsewhere argued to be species-specific. A Basic Label model of the human cognome-dynome is proposed, leading to clear, causally-addressable empirical predictions, to be investigated by a suggested research program, Dynamic Cognomics. In addition, a distinction between minimal and maximal degrees of explanation is introduced to differentiate between the depth of analysis provided by cartographic, rhythmic, neurochemical, and other approaches to computation.
|
38
|
Proceedings of the Seventh International Workshop on Advances in Electrocorticography. Epilepsy Behav 2015; 51:312-20. [PMID: 26322594 PMCID: PMC4593746 DOI: 10.1016/j.yebeh.2015.08.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/31/2015] [Accepted: 08/01/2015] [Indexed: 10/23/2022]
Abstract
The Seventh International Workshop on Advances in Electrocorticography (ECoG) convened in Washington, DC, on November 13-14, 2014. Electrocorticography-based research continues to proliferate widely across basic science and clinical disciplines. The 2014 workshop highlighted advances in neurolinguistics, brain-computer interface, functional mapping, and seizure termination facilitated by advances in the recording and analysis of the ECoG signal. The following proceedings document summarizes the content of this successful multidisciplinary gathering.
|
39
|
Linguistic explanation and domain specialization: a case study in bound variable anaphora. Front Psychol 2015; 6:1421. [PMID: 26441791 PMCID: PMC4585305 DOI: 10.3389/fpsyg.2015.01421] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2015] [Accepted: 09/07/2015] [Indexed: 01/29/2023] Open
Abstract
The core question behind this Frontiers research topic is whether explaining linguistic phenomena requires appeal to properties of human cognition that are specialized to language. We argue here that investigating this issue requires taking linguistic research results seriously, and evaluating these for domain-specificity. We present a particular empirical phenomenon, bound variable interpretations of pronouns dependent on a quantifier phrase, and argue for a particular theory of this empirical domain that is couched at a level of theoretical depth which allows its principles to be evaluated for domain-specialization. We argue that the relevant principles are specialized when they apply in the domain of language, even if analogs of them are plausibly at work elsewhere in cognition or the natural world more generally. So certain principles may be specialized to language, though not, ultimately, unique to it. Such specialization is underpinned by ultimately biological factors, hence part of UG.
|
40
|
It is an organ, it is new, but it is not a new organ. Conceptualizing language from a homological perspective. Front Ecol Evol 2015. [DOI: 10.3389/fevo.2015.00058] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
41
|
Labels, cognomes, and cyclic computation: an ethological perspective. Front Psychol 2015; 6:715. [PMID: 26089809 PMCID: PMC4453271 DOI: 10.3389/fpsyg.2015.00715] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2015] [Accepted: 05/13/2015] [Indexed: 01/08/2023] Open
Abstract
For the past two decades, it has widely been assumed by linguists that there is a single computational operation, Merge, which is unique to language, distinguishing it from other cognitive domains. This paper aims to advance the discussion of language evolution in two ways: (i) survey what the ethological record reveals about the uniqueness of the human computational system, and (ii) explore how syntactic theories account for what ethology may determine to be human-specific. It is shown that the operation Label, not Merge, constitutes the evolutionary novelty which distinguishes human language from non-human computational systems; a proposal lending weight to a Weak Continuity Hypothesis and leading to the formation of what is termed Computational Ethology. Some directions for future ethological research are suggested.
|