1. Stern P, Kolodny T, Tsafrir S, Cohen G, Shalev L. Unique Patterns of Eye Movements Characterizing Inattentive Reading in ADHD. J Atten Disord 2024;28:1008-1016. [PMID: 38327026] [DOI: 10.1177/10870547231223728]
Abstract
OBJECTIVE We aimed to identify unique patterns of eye-movement measures reflecting inattentive reading among adults with and without ADHD. METHOD & RESULTS We recorded eye movements during uninterrupted text reading by typically developed (TD) and ADHD adults. First, we found significantly longer reading times for the ADHD group than for the TD group. Further, we detected cases in which words were reread more than twice and found that such occasions were much more frequent in participants with ADHD than in TD participants. Moreover, we discovered that the first reading pass of these words was less sensitive to word length than the first pass of words read only once, indicating less meaningful reading. CONCLUSION We propose that a high rate of reread words is a correlate of inattentive reading, which is more pronounced among ADHD readers. Implications of the findings in the context of reading comprehension are discussed.
2. Parshina O, Zdorova N, Kuperman V. Cross-linguistic comparison in reading sentences of uniform length: Visual-perceptual demands override readers' experience. Q J Exp Psychol (Hove) 2023:17470218231206719. [PMID: 37787470] [DOI: 10.1177/17470218231206719]
Abstract
Accurate saccadic targeting is critical for efficient reading and is driven by the sensory input under the eye-gaze. Yet whether a reader's experience with the distributional properties of their written language also influences saccadic targeting is an open debate. This study of Russian sentence reading follows Cutter et al.'s (2017) study in English and presents readers with sentences consisting of words of the same length. We hypothesised that if readers' experience matters, as per the discrete control account, Russian readers would produce longer saccades and farther landing positions than English readers. Conversely, if saccadic targeting is primarily driven by immediate perceptual demands that override readers' experience, as per the dynamic adjustment account, the saccades of Russian and English readers would be of the same length, resulting in similar landing positions. The results in both Cutter et al. and the present study provided evidence for the latter account: Russian readers showed rapid and accurate adjustment of saccade lengths and landing positions to the highly constrained input. Crucially, saccade lengths and landing positions did not differ between English and Russian readers even in the cross-linguistically length-matched stimuli.
Affiliation(s)
- Olga Parshina
- Psychology Department, Middlebury College, Middlebury, VT, USA
- Nina Zdorova
- Center for Language and Brain, HSE University, Moscow, Russia
- Institute of Linguistics, Russian Academy of Sciences, Moscow, Russia
- Victor Kuperman
- Department of Linguistics & Languages, McMaster University, Hamilton, Ontario, Canada
3. Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. bioRxiv 2023:2023.01.27.525974. [PMID: 36778376] [PMCID: PMC9915492] [DOI: 10.1101/2023.01.27.525974]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analyses of eye movements show that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops and suggest a protracted temporal unfolding of the computations characterizing CI.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, United States
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Haoran Ding
- Center for Neural Science, New York University, New York City, NY, United States
- John Vastola
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, United States
- Tandon School of Engineering, New York University, New York City, NY, United States
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Center for Brain Science, Harvard University, Boston, MA, United States
4.
Abstract
INTRODUCTION Atypical visual and social attention has often been associated with clinically diagnosed autism spectrum disorder (ASD) and with the broader autism phenotype. Atypical social attention is of particular research interest given the importance of facial expressions for social communication, with faces tending to attract and hold attention in neurotypical individuals. In autism this is not necessarily so, and there is debate about temporal differences in the ability to disengage attention from a face. METHOD We used eye-tracking to record saccadic latencies as a measure of the time to disengage attention from a central task-irrelevant face before orienting to a newly presented peripheral nonsocial target during a gap-overlap task. Neurotypical participants with higher or lower autism-like traits (AT) completed the task, which included central stimuli with varied expressions of facial emotion as well as an inverted face. RESULTS High AT participants demonstrated faster saccadic responses to detect the nonsocial target than low AT participants when disengaging attention from a face. Furthermore, faster saccadic responses were recorded when comparing disengagement from upright to inverted faces in low AT but not in high AT participants. CONCLUSIONS Together, these results extend findings of atypical social attention disengagement in autism and highlight how differences in attention to faces in the broader autism phenotype can lead to apparently superior task performance under certain conditions. Specifically, autism traits were linked to faster attention orienting to a nonsocial target due to the reduced attentional hold of the task-irrelevant face stimuli. The absence of an inversion effect in high AT participants also reinforces the suggestion that they process upright and inverted faces similarly, unlike low AT participants, for whom inverted faces are thought to be less socially engaging, thus allowing faster disengagement.
Affiliation(s)
- Saxon Goold
- School of Psychology and Public Health, La Trobe University, Melbourne, Australia
- Melanie J Murphy
- School of Psychology and Public Health, La Trobe University, Melbourne, Australia
- Melvyn A Goodale
- Western Institute for Neuroscience, The University of Western Ontario, Ontario, Canada
- Sheila G Crewther
- School of Psychology and Public Health, La Trobe University, Melbourne, Australia
- Robin Laycock
- School of Psychology and Public Health, La Trobe University, Melbourne, Australia
- School of Health and Biomedical Science, RMIT University, Melbourne, Australia
5. Helo A, Guerra E, Coloma CJ, Aravena-Bravo P, Rämä P. Do Children With Developmental Language Disorder Activate Scene Knowledge to Guide Visual Attention? Effect of Object-Scene Inconsistencies on Gaze Allocation. Front Psychol 2022;12:796459. [PMID: 35069387] [PMCID: PMC8776641] [DOI: 10.3389/fpsyg.2021.796459]
Abstract
Our visual environment is highly predictable in terms of which objects can be found in which locations. Based on visual experience, children extract rules about visual scene configurations, allowing them to generate scene knowledge. Similarly, children extract linguistic rules from relatively predictable linguistic contexts. It has been proposed that the capacity to extract rules in these two domains might share underlying cognitive mechanisms. In the present study, we investigated the link between language and scene knowledge development. To do so, we assessed whether preschool children (age range = 5;4-6;6) with Developmental Language Disorder (DLD), who present several difficulties in the linguistic domain, are equally attracted to object-scene inconsistencies in a visual free-viewing task in comparison with age-matched children with Typical Language Development (TLD). All children explored visual scenes containing semantic (e.g., soap on a breakfast table), syntactic (e.g., bread on the chair back), or both inconsistencies (e.g., soap on the chair back). Since scene knowledge interacts with image properties (i.e., saliency) to guide gaze allocation during visual exploration from the early stages of development, we also included the objects' saliency rank in the analysis. The results showed that children with DLD were less attracted to semantic and syntactic inconsistencies than children with TLD. In addition, saliency modulated the syntactic effect only in the group of children with TLD. Our findings indicate that children with DLD do not activate scene knowledge to guide visual attention as efficiently as children with TLD, especially at the syntactic level, suggesting a link between scene knowledge and language development.
Affiliation(s)
- Andrea Helo
- Departamento de Fonoaudiología, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Departamento de Neurociencias, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Centro de Investigación Avanzada en Educación, Instituto de Educación-IE, Universidad de Chile, Santiago, Chile
- Ernesto Guerra
- Centro de Investigación Avanzada en Educación, Instituto de Educación-IE, Universidad de Chile, Santiago, Chile
- Carmen Julia Coloma
- Departamento de Fonoaudiología, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Centro de Investigación Avanzada en Educación, Instituto de Educación-IE, Universidad de Chile, Santiago, Chile
- Paulina Aravena-Bravo
- Departamento de Fonoaudiología, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile
- Pia Rämä
- Integrative Neuroscience and Cognition Center (UMR 8002), CNRS, Université Paris Descartes, Paris, France
6. Udale R, Tran MT, Manohar S, Husain M. Dynamic in-flight shifts of working memory resources across saccades. J Exp Psychol Hum Percept Perform 2022;48:21-36. [PMID: 35073141] [PMCID: PMC8785606] [DOI: 10.1037/xhp0000960]
Abstract
Little is known about how memory resources are allocated in natural vision across sequential eye movements and fixations, as people actively extract information from the visual environment. Here, we used gaze-contingent eye tracking to examine how such resources are dynamically reallocated from old to new information entering working memory. As participants looked sequentially at items, we interrupted the process at different times by extinguishing the display as a saccade was initiated. After a brief interval, participants were probed on one of the items that had been presented. Paradoxically, across all experiments, the final (unfixated) saccade target was recalled more precisely when more items had previously been fixated, that is, with longer rather than shorter saccade sequences. This result is difficult to explain on current models of working memory because recall error, even for the final item, is typically higher as memory load increases. The findings could however be accounted for by a model that describes how resources are dynamically reallocated on a moment-by-moment basis. During each saccade, the target is encoded by consuming a proportion of currently available resources from a limited working memory, as well as by reallocating resources away from previously encoded items. These findings reveal how working memory resources are shifted across memoranda in active vision.
Affiliation(s)
- Rob Udale
- Department of Psychology, University of Sheffield
- Moc Tram Tran
- Department of Experimental Psychology, University of Oxford
- Sanjay Manohar
- Department of Experimental Psychology, University of Oxford
- Masud Husain
- Department of Experimental Psychology, University of Oxford
7. De Lillo M, Foley R, Fysh MC, Stimson A, Bradford EEF, Woodrow-Hill C, Ferguson HJ. Tracking developmental differences in real-world social attention across adolescence, young adulthood and older adulthood. Nat Hum Behav 2021;5:1381-1390. [PMID: 33986520] [DOI: 10.1038/s41562-021-01113-9]
Abstract
Detecting and responding appropriately to social information in one's environment is a vital part of everyday social interactions. Here, we report two preregistered experiments that examine how social attention develops across the lifespan, comparing adolescents (10-19 years old), young (20-40 years old) and older (60-80 years old) adults. In two real-world tasks, participants were immersed in different social interaction situations-a face-to-face conversation and navigating an environment-and their attention to social and non-social content was recorded using eye-tracking glasses. The results revealed that, compared with young adults, adolescents and older adults attended less to social information (that is, the face) during face-to-face conversation, and to people when navigating the real world. Thus, we provide evidence that real-world social attention undergoes age-related change, and these developmental differences might be a key mechanism that influences theory of mind among adolescents and older adults, with potential implications for predicting successful social interactions in daily life.
8. Cardoso FDSL, Afonso J, Roca A, Teoldo I. The association between perceptual-cognitive processes and response time in decision making in young soccer players. J Sports Sci 2020;39:926-935. [PMID: 33287653] [DOI: 10.1080/02640414.2020.1851901]
Abstract
In soccer, it is relevant to understand the roles of Systems 1 (intuitive) and 2 (deliberative) in perceptual-cognitive processes and how they influence response time when making decisions. The aim of this study was to analyse how response time in decision making, managed by Systems 1 and 2, is associated with the perceptual-cognitive processes of young soccer players. Ninety young soccer players participated. Perceptual-cognitive processes were assessed through visual search strategies, cognitive effort, and verbal reports. Participants wore a mobile eye-tracking system while viewing 11-a-side match-play video-based soccer simulations. Response time in decision making was used to create two sub-groups: faster and slower decision-makers. Results indicated that players with faster response times in decision making employed more fixations of shorter duration, displayed less cognitive effort, and produced a greater number of thought processes associated with planning. These results reinforce that faster and slower decision-makers differ in how they deploy perceptual-cognitive processes when making decisions. It is concluded that faster decision making, managed by System 1, implies a greater ability to employ visual search strategies and to process information, thus enabling increased cognitive efficiency.
Affiliation(s)
- Felippe da Silva Leite Cardoso
- Department of Physical Education, Centre of Research and Studies in Soccer (NUPEF), Universidade Federal de Viçosa, Viçosa, Brazil
- André Roca
- Expert Performance and Skill Acquisition Research Group, Faculty of Sport, Allied Health and Performance Science, St Mary's University, London, UK
- Israel Teoldo
- Department of Physical Education, Centre of Research and Studies in Soccer (NUPEF), Universidade Federal de Viçosa, Viçosa, Brazil
9. Vasilev MR, Parmentier FB, Kirkby JA. Distraction by auditory novelty during reading: Evidence for disruption in saccade planning, but not saccade execution. Q J Exp Psychol (Hove) 2020;74:826-842. [PMID: 33283659] [PMCID: PMC8054167] [DOI: 10.1177/1747021820982267]
Abstract
Novel or unexpected sounds that deviate from an otherwise repetitive sequence of the same sound cause behavioural distraction. Recent work has suggested that distraction also occurs during reading, as fixation durations increased when a deviant sound was presented at the fixation onset of words. The present study tested the hypothesis that this increase in fixation durations occurs due to saccadic inhibition. This was done by manipulating the temporal onset of sounds relative to the fixation onset of words in the text. If novel sounds cause saccadic inhibition, they should be more distracting when presented during the second half of fixations, when saccade programming usually takes place. Participants read single sentences and heard a 120 ms sound when they fixated five target words in the sentence. On most occasions (p = .9), the same sine wave tone was presented ("standard"), while on the remaining occasions (p = .1) a new sound was presented ("novel"). Critically, sounds were played, on average, either during the first half of the fixation (0 ms delay) or during the second half of the fixation (120 ms delay). Consistent with the saccadic inhibition hypothesis (SIH), novel sounds led to longer fixation durations in the 120 ms than in the 0 ms delay condition. However, novel sounds did not generally influence the execution of the subsequent saccade. These results suggest that unexpected sounds have a rapid influence on saccade planning, but not saccade execution.
Affiliation(s)
- Fabrice Br Parmentier
- Department of Psychology and Research Institute for Health Sciences (iUNICS), University of the Balearic Islands, Palma, Spain
- Balearic Islands Health Research Institute (IdISBa), Palma, Spain
- School of Psychology, The University of Western Australia, Perth, WA, Australia
- Julie A Kirkby
- Department of Psychology, Bournemouth University, Poole, UK
10. MacInnes WJ, Jóhannesson ÓI, Chetverikov A, Kristjánsson Á. No Advantage for Separating Overt and Covert Attention in Visual Search. Vision (Basel) 2020;4:E28. [PMID: 32443506] [PMCID: PMC7356832] [DOI: 10.3390/vision4020028]
Abstract
We move our eyes roughly three times every second while searching complex scenes, but covert attention helps to guide where we allocate those overt fixations. Covert attention may be allocated reflexively or voluntarily, and speeds the rate of information processing at the attended location. Reducing access to covert attention hinders performance, but it is not known to what degree the locus of covert attention is tied to the current gaze position. We compared visual search performance in a traditional gaze-contingent display with a second task in which a similarly sized contingent window was controlled with a mouse, allowing the covert aperture to be controlled independently of overt gaze. Larger apertures improved performance for both the mouse- and gaze-contingent trials, suggesting that covert attention was beneficial regardless of control type. We also found evidence that participants used the mouse-controlled aperture somewhat independently of gaze position, suggesting that participants attempted to untether their covert and overt attention when possible. This untethering manipulation, however, resulted in an overall cost to search performance, a result at odds with previous results in a change blindness paradigm. Untethering covert and overt attention may therefore have costs or benefits depending on the task demands in each case.
Affiliation(s)
- W. Joseph MacInnes
- School of Psychology, National Research University Higher School of Economics, Moscow 101000, Russia
- Vision Modelling Lab, Faculty of Social Sciences, National Research University Higher School of Economics, Moscow 101000, Russia
- Ómar I. Jóhannesson
- Icelandic Vision Laboratory, Department of Psychology, University of Iceland, 102 Reykjavik, Iceland
- Andrey Chetverikov
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Árni Kristjánsson
- School of Psychology, National Research University Higher School of Economics, Moscow 101000, Russia
- Icelandic Vision Laboratory, Department of Psychology, University of Iceland, 102 Reykjavik, Iceland
11. Kim K, Lee Y, Lee JH. The Effect of Eye-Feedback Training on Orienting Attention in Young Adults With Sluggish Cognitive Tempo. Front Psychiatry 2020;11:184. [PMID: 32256408] [PMCID: PMC7090146] [DOI: 10.3389/fpsyt.2020.00184]
Abstract
Sluggish cognitive tempo (SCT) is an attentional condition characterized by slowness in behavior or thinking. The aim of the present study was to develop a preliminary attention training program based on real-time eye-gaze feedback using an eye-tracker. A total of 38 participants with SCT were randomly assigned to one of the following two conditions: eye-feedback (N = 19; mean age = 21.21; range 18-26) or control (N = 19; mean age = 20.68; range 18-25). Participants in the eye-feedback condition completed three training sessions on a modified version of Posner's spatial cueing task, with real-time eye-gaze feedback designed to lead them to engage and disengage attention quickly and accurately; eye movements (overt attention) and the revised attention network test (ANT-R; covert attention) were measured pre- and post-training. Participants in the control condition completed the same three training sessions without any feedback, with the same pre- and post-training measures. The results revealed that the eye-feedback group showed greater improvement in engaging and disengaging attention on the overt attention measure than the control group. The eye-feedback group also showed a greater increase only in the orienting network related to disengaging attention on the covert attention measure compared to the control group. These results suggest that eye-feedback can be meaningfully used in attention training to enhance the efficiency of attention in clinical settings.
Affiliation(s)
- Kiho Kim
- Department of Psychology of Counseling, Sejong Cyber University, Seoul, South Korea
- Youna Lee
- Department of Image Engineering, Chung-Ang University, Seoul, South Korea
- Jang-Han Lee
- Clinical Neuro-Psychology Lab., Department of Psychology, Chung-Ang University, Seoul, South Korea
12. Mirault J, Yeaton J, Broqua F, Dufau S, Holcomb PJ, Grainger J. Parafoveal-on-foveal repetition effects in sentence reading: A co-registered eye-tracking and electroencephalogram study. Psychophysiology 2020;57:e13553. [PMID: 32091627] [PMCID: PMC7507185] [DOI: 10.1111/psyp.13553]
Abstract
When reading, can the next word in the sentence (word n + 1) influence how you read the word you are currently looking at (word n)? Serial models of sentence reading state that this generally should not be the case, whereas parallel models predict that it should. Here we focus on perhaps the simplest and strongest Parafoveal-on-Foveal (PoF) manipulation: word n + 1 is either the same as word n or a different word. Participants read sentences for comprehension, and when their eyes left word n, the repeated or unrelated word at position n + 1 was swapped for a word that provided a syntactically correct continuation of the sentence. We recorded electroencephalogram and eye movements, and time-locked the analysis of fixation-related potentials (FRPs) to fixation of word n. We found robust PoF repetition effects on gaze durations on word n, and also on the initial landing position on word n. Most important, we also observed significant effects in FRPs, reaching significance at 260 ms post-fixation of word n. Repetition of the target word n at position n + 1 caused a widely distributed reduced negativity in the FRPs. Given the timing of this effect, we argue that it is driven by orthographic processing of word n + 1 while readers were still looking at word n, plus the spatial integration of orthographic information extracted from these two words in parallel. A hotly debated issue in reading research concerns the serial versus parallel nature of word identification processes. Evidence in favor of parallel processing has been criticized because it has often been obtained with artificial reading paradigms. Here we show that fixation-related potentials elicited by a target word embedded in a normal sentence are impacted by the nature of the word immediately to the right of the fixated target word (n), and that this impact became significant 260 ms after fixation of word n. This is in line with parallel orthographic processing across adjacent words during sentence reading.
Affiliation(s)
- Jonathan Mirault
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Jeremy Yeaton
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Fanny Broqua
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Stéphane Dufau
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Institute for Language, Aix-Marseille University, Aix-en-Provence, France
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Institute for Language, Aix-Marseille University, Aix-en-Provence, France
13. Laycock R, Wood K, Wright A, Crewther SG, Goodale MA. Saccade Latency Provides Evidence for Reduced Face Inversion Effects With Higher Autism Traits. Front Hum Neurosci 2020;13:470. [PMID: 32038202] [PMCID: PMC6992588] [DOI: 10.3389/fnhum.2019.00470]
Abstract
Individuals on the autism spectrum are reported to show impairments in the processing of social information, including aspects of eye-movements towards faces. Abnormalities in basic-level visual processing are also reported. In the current study, we sought to determine if the latency of saccades made towards social targets (faces) in a natural scene as opposed to inanimate targets (cars) would be related to sub-clinical autism traits (ATs) in individuals drawn from a neurotypical population. The effect of stimulus inversion was also examined given that difficulties with processing inverted faces are thought to be a function of face expertise. No group differences in saccadic latency were established for face or car targets, regardless of image orientation. However, as expected, we found that individuals with higher autism-like traits did not demonstrate a saccadic face inversion effect, but those with lower autism-like traits did. Neither group showed a car inversion effect. Thus, these results suggest that neurotypical individuals with high autism-like traits also show anomalies in detecting and orienting to faces. In particular, the reduced saccadic face inversion effect established in these participants with high ATs suggests that speed of visual processing and orienting towards faces may be associated with the social difficulties found across the broader autism spectrum.
Affiliation(s)
- Robin Laycock: School of Health and Biomedical Sciences, RMIT University, Melbourne, VIC, Australia; School of Psychology and Public Health, La Trobe University, Melbourne, VIC, Australia
- Kylie Wood: School of Psychology and Public Health, La Trobe University, Melbourne, VIC, Australia
- Andrea Wright: School of Psychology and Public Health, La Trobe University, Melbourne, VIC, Australia
- Sheila G Crewther: School of Psychology and Public Health, La Trobe University, Melbourne, VIC, Australia
- Melvyn A Goodale: The Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
14
Nakamura C, Arai M, Hirose Y, Flynn S. An Extra Cue Is Beneficial for Native Speakers but Can Be Disruptive for Second Language Learners: Integration of Prosody and Visual Context in Syntactic Ambiguity Resolution. Front Psychol 2020; 10:2835. PMID: 31998172; PMCID: PMC6965364; DOI: 10.3389/fpsyg.2019.02835.
Abstract
It has long been debated whether non-native speakers can process sentences in the same way as native speakers do, or whether they suffer from a qualitative deficit in language comprehension. The current study examined the influence of prosodic and visual information in processing sentences with a temporarily ambiguous prepositional phrase ("Put the cake on the plate in the basket") with native English speakers and Japanese learners of English. Specifically, we investigated (1) whether native speakers assign different pragmatic functions to the same prosodic cues used in different contexts and (2) whether L2 learners can reach the correct analysis by integrating prosodic cues with syntax with reference to the visually presented contextual information. The results from native speakers showed that contrastive accents helped to resolve the referential ambiguity when a contrastive pair was present in visual scenes. However, without a contrastive pair in the visual scene, native speakers were slower to reach the correct analysis with the contrastive accent, which supports the view that the pragmatic function of intonation categories is highly context-dependent. The results from L2 learners showed that visually presented context alone helped L2 learners to reach the correct analysis. However, L2 learners were unable to assign contrastive meaning to the prosodic cues when there were two potential referents in the visual scene. The results suggest that L2 learners are not capable of integrating multiple sources of information in an interactive manner during real-time language comprehension.
Affiliation(s)
- Chie Nakamura: Global Center for Science and Engineering, Waseda University, Tokyo, Japan; Department of Linguistics, Massachusetts Institute of Technology, Cambridge, MA, United States
- Manabu Arai: Faculty of Economics, Seijo University, Tokyo, Japan
- Yuki Hirose: Department of Language and Information Sciences, University of Tokyo, Tokyo, Japan
- Suzanne Flynn: Department of Linguistics, Massachusetts Institute of Technology, Cambridge, MA, United States
15
Mehta ND, Won MJ, Babin SL, Patel SS, Wassef AA, Chuang AZ, Sereno AB. Differential benefits of olanzapine on executive function in schizophrenia patients: Preliminary findings. Hum Psychopharmacol 2020; 35:e2718. PMID: 31837056; DOI: 10.1002/hup.2718.
Abstract
OBJECTIVE Schizophrenia patients show executive function (EF) impairments in voluntary orienting as measured by eye-movements. We tested 14 inpatients to investigate the effects of the antipsychotic olanzapine on EF, as measured by antisaccade eye-movement performance. METHODS Patients were tested at baseline (before olanzapine), 3-5 days post-medication, and 12-14 days post-medication. Patients were also assessed on the Positive and Negative Syndrome Scale (PANSS) to measure the severity of schizophrenia-related symptoms, and administered the Stroop task, a test of EF. Nine matched controls were also tested on the antisaccade and Stroop. RESULTS Both groups showed improvement on Stroop and antisaccade; however, the schizophrenia group improved significantly more on antisaccade, indicating an additional benefit of olanzapine on EF performance. Patients with poorer baseline antisaccade performance (High-Deficit) showed significantly greater improvement on the antisaccade task than patients with better baseline performance (Low-Deficit), suggesting that baseline EF impairment predicts the magnitude of cognitive improvement with olanzapine. These subgroups showed significant and equivalent improvement on PANSS scores, indicating that improvement on the antisaccade task with olanzapine was not a result of differences in magnitude of clinical improvement. CONCLUSIONS This preliminary study provides evidence that olanzapine may be most advantageous for patients with greater baseline EF deficits.
Affiliation(s)
- Neeti D Mehta: Department of Neurobiology and Anatomy, University of Texas Health Science Center at Houston, Houston, Texas; Rice University, Houston, Texas
- Michelle J Won: Department of Neurobiology and Anatomy, University of Texas Health Science Center at Houston, Houston, Texas; Rice University, Houston, Texas
- Shelly L Babin: Department of Neurobiology and Anatomy, University of Texas Health Science Center at Houston, Houston, Texas
- Saumil S Patel: Department of Neuroscience, Baylor College of Medicine, Houston, Texas
- Adel A Wassef: Department of Psychiatry, University of Texas Health Science Center at Houston, Houston, Texas
- Alice Z Chuang: Department of Ophthalmology and Visual Science, University of Texas Health Science Center at Houston, Houston, Texas
- Anne B Sereno: Department of Neurobiology and Anatomy, University of Texas Health Science Center at Houston, Houston, Texas; Department of Psychological Sciences, Purdue University, Indiana; Weldon School of Biomedical Engineering, Purdue University, Indiana
16
Jones PR, Smith ND, Bi W, Crabb DP. Portable Perimetry Using Eye-Tracking on a Tablet Computer - A Feasibility Assessment. Transl Vis Sci Technol 2019; 8:17. PMID: 30740267; PMCID: PMC6364754; DOI: 10.1167/tvst.8.1.17.
Abstract
Purpose Visual field (VF) examination by standard automated perimetry (SAP) is an important method of clinical assessment. However, the complexity of the test, and its use of bulky, expensive equipment, makes it impractical for case-finding. We propose and evaluate a new approach to paracentral VF assessment that combines an inexpensive eye-tracker with a portable tablet computer (“Eyecatcher”). Methods Twenty-four eyes from 12 glaucoma patients and 12 eyes from six age-similar controls were examined. Participants were tested monocularly (once per eye), with both the novel Eyecatcher test and traditional SAP (HFA SITA standard 24-2). For Eyecatcher, the participant's task was simply to look at a sequence of fixed-luminance dots, presented relative to the current point of fixation. Start and end fixations were used to determine locations where stimuli were seen/unseen, and to build a continuous map of sensitivity loss across a VF of approximately 20°. Results Eyecatcher was able to clearly separate patients from controls, and the results were consistent with those from traditional SAP. In particular, mean Eyecatcher scores were strongly correlated with mean deviation scores (r2 = 0.64, P < 0.001), and there was good concordance between corresponding VF locations (∼84%). Participants reported that Eyecatcher was more enjoyable, easier to perform, and less tiring than SAP (all P < 0.001). Conclusions Portable perimetry using an inexpensive eye-tracker and a tablet computer is feasible, although possible means of improvement are suggested. Translational Relevance Such a test could have significant utility as a case-finding device.
Affiliation(s)
- Pete R Jones: Division of Optometry and Visual Science, School of Health Sciences, City, University of London, London, UK
- Nicholas D Smith: Division of Optometry and Visual Science, School of Health Sciences, City, University of London, London, UK
- Wei Bi: Division of Optometry and Visual Science, School of Health Sciences, City, University of London, London, UK
- David P Crabb: Division of Optometry and Visual Science, School of Health Sciences, City, University of London, London, UK
17
Abstract
Oddball studies have shown that sounds unexpectedly deviating from an otherwise repeated sequence capture attention away from the task at hand. While such distraction is typically regarded as potentially important in everyday life, previous work has so far not examined how deviant sounds affect performance on more complex daily tasks. In this study, we developed a new method to examine whether deviant sounds can disrupt reading performance by recording participants’ eye movements. Participants read single sentences in silence and while listening to task-irrelevant sounds. In the latter condition, a 50-ms sound was played contingent on the fixation of five target words in the sentence. On most occasions, the same tone was presented (standard sound), whereas on rare and unexpected occasions it was replaced by white noise (deviant sound). The deviant sound resulted in significantly longer fixation durations on the target words relative to the standard sound. A time-course analysis showed that the deviant sound began to affect fixation durations around 180 ms after fixation onset. Furthermore, deviance distraction was not modulated by the lexical frequency of target words. In summary, fixation durations on the target words were longer immediately after the presentation of the deviant sound, but there was no evidence that it interfered with the lexical processing of these words. The present results are in line with the recent proposition that deviant sounds yield a temporary motor suppression and suggest that deviant sounds likely inhibit the programming of the next saccade.
Affiliation(s)
- Fabrice B. R. Parmentier: University of the Balearic Islands, Department of Psychology and Research Institute for Health Sciences (iUNICS), Palma, Spain; Balearic Islands Health Research Institute (IdISBa), Palma, Spain; University of Western Australia, School of Psychology, Perth, WA, Australia
- Bernhard Angele: Bournemouth University, Department of Psychology, Poole, UK
- Julie A Kirkby: Bournemouth University, Department of Psychology, Poole, UK
18
Paoletti D, Braun C, Vargo EJ, van Zoest W. Spontaneous pre-stimulus oscillatory activity shapes the way we look: A concurrent imaging and eye-movement study. Eur J Neurosci 2018; 49:137-149. PMID: 30472776; DOI: 10.1111/ejn.14285.
Abstract
Previous behavioural studies have accrued evidence that response time plays a critical role in determining whether selection is influenced by stimulus saliency or target template. In the present work, we investigated to what extent variations in timing and the consequent oculomotor control are influenced by spontaneous variations in pre-stimulus alpha oscillations. We simultaneously recorded brain activity using magnetoencephalography (MEG) and eye movements while participants performed a visual search task. Our results show that slower saccadic reaction times were predicted by overall stronger alpha power in the 500 ms time window preceding stimulus onset, while weaker alpha power was a signature of faster responses. When looking separately at performance for fast and slow responses, we found evidence for two specific sources of alpha activity predicting correct versus incorrect responses. When saccades were quickly elicited, errors were predicted by stronger alpha activity in posterior areas, comprising the angular gyrus in the temporal-parietal junction (TPJ) and possibly the lateral intraparietal area (LIP). Instead, when participants were slower in responding, an increase of alpha power in the frontal eye fields (FEF), supplementary eye fields (SEF), and dorsolateral pre-frontal cortex (DLPFC) predicted erroneous saccades. In other words, oculomotor accuracy in fast responses was predicted by alpha power differences in more posterior areas, while accuracy in slow responses was predicted by alpha power differences in frontal areas, in line with the idea that these areas may be differentially related to stimulus-driven and goal-driven control of selection.
Affiliation(s)
- Davide Paoletti: Institute of Cognitive Neuroscience, University College London, London, UK
- Christoph Braun: MEG-Center, University of Tübingen, Tübingen, Germany; Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Wieske van Zoest: Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy; School of Psychology, University of Birmingham, Birmingham, UK
19
Sturt P, Kwon N. Processing Information During Regressions: An Application of the Reverse Boundary-Change Paradigm. Front Psychol 2018; 9:1630. PMID: 30233466; PMCID: PMC6132172; DOI: 10.3389/fpsyg.2018.01630.
Abstract
Although 10–15% of eye-movements during reading are regressions, we still know little about the information that is processed during regressive episodes. Here, we report an eye-movement study that uses what we call the reverse boundary change technique to examine the processing of lexical-semantic information during regressions, and to establish the role of this information during recovery from processing difficulty. In the critical condition of the experiment, an initially implausible sentence (e.g., There was an old house that John had ridden when he was a boy) was rendered plausible by changing a context word (house) to a lexical neighbor (horse) using a gaze-contingent display change, at the point where the reader's gaze crossed an invisible boundary further on in the sentence. Due to the initial implausibility of the sentence, readers often launched regressions from the later part of the sentence. However, despite this initial processing difficulty, reading was facilitated, relative to a condition where the display change did not occur (i.e., the word house remained on screen throughout the trial). This result implies that the relevant lexical semantic information was processed during the regression, and was used to aid recovery from the initial processing difficulty.
Affiliation(s)
- Patrick Sturt: Psychology, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Nayoung Kwon: Department of English Language, Konkuk University, Seoul, South Korea
20
Hsiao YT, Shillcock R, Obregón M, Kreiner H, Roberts MAJ, McDonald S. Differential vergence movements in reading Chinese and English: Greater fixation-initial binocular disparity is advantageous in reading the denser orthography. Q J Exp Psychol (Hove) 2017; 71:1-33. PMID: 28695758; DOI: 10.1080/17470218.2017.1350866.
Abstract
We explore two aspects of exovergence: we test whether smaller binocular fixation disparities accompany the shorter saccades and longer fixations observed in reading Chinese; we test whether potentially advantageous psychophysical effects of exovergence (cf. Arnold & Schindel, 2010; Kersten & Murray, 2010) transfer to text reading. We report differential exovergence in reading Chinese and English: Chinese readers begin fixations with more binocular disparity, but end fixations with a disparity closely similar to that of the English readers. We conclude that greater fixation-initial binocular fixation disparity can be adaptive in the reading of visually and cognitively denser text.
21
Abstract
In conversation, turn-taking is usually fluid, with next speakers taking their turn right after the end of the previous turn. Most, but not all, previous studies show that next speakers start to plan their turn early, if possible already during the incoming turn. The present study makes use of the list-completion paradigm (Barthel et al., 2016), analyzing speech onset latencies and eye-movements of participants in a task-oriented dialogue with a confederate. The measures are used to disentangle the contributions to the timing of turn-taking of early planning of content on the one hand and initiation of articulation as a reaction to the upcoming turn-end on the other hand. Participants named objects visible on their computer screen in response to utterances that did, or did not, contain lexical and prosodic cues to the end of the incoming turn. In the presence of an early lexical cue, participants showed earlier gaze shifts toward the target objects and responded faster than in its absence, whereas the presence of a late intonational cue only led to faster response times and did not affect the timing of participants' eye movements. The results show that with a combination of eye-movement and turn-transition time measures it is possible to tease apart the effects of early planning and response initiation on turn timing. They are consistent with models of turn-taking that assume that next speakers (a) start planning their response as soon as the incoming turn's message can be understood and (b) monitor the incoming turn for cues to turn-completion so as to initiate their response when turn-transition becomes relevant.
Affiliation(s)
- Mathias Barthel: Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Antje S Meyer: Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Stephen C Levinson: Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
22
Johnson RL, Starr EL. The Preferred Viewing Location in Top-to-Bottom Sentence Reading. Q J Exp Psychol (Hove) 2017; 71:1-32. PMID: 28322110; DOI: 10.1080/17470218.2017.1307860.
Abstract
The preferred viewing location (PVL) is a robust finding in reading research: when fixating a word during normal sentence reading, readers tend to land slightly to the left of the word's center. This contrasts with the optimal viewing location (OVL) in single-word recognition, which falls at the center of the word. The current study outlines the history of the PVL in eye-tracking since Rayner's original 1979 study, documenting the origins of these conflicting theoretical explanations. In addition, a new study is reported examining whether the PVL can be attributed solely to oculomotor error or to a processing advantage, using an experimental manipulation that separates tracking direction (left-to-right reading) from landing position (left-to-right within a word). Sentences were presented to participants from the top to the bottom of a computer screen, one word per line, while eye movements were recorded. In this presentation format, readers continued to land to the left of center, suggesting that the PVL in normal reading is not solely due to oculomotor error.
23
Barthel M, Sauppe S, Levinson SC, Meyer AS. The Timing of Utterance Planning in Task-Oriented Dialogue: Evidence from a Novel List-Completion Paradigm. Front Psychol 2016; 7:1858. PMID: 27990127; PMCID: PMC5131015; DOI: 10.3389/fpsyg.2016.01858.
Abstract
In conversation, interlocutors rarely leave long gaps between turns, suggesting that next speakers begin to plan their turns while listening to the previous speaker. The present experiment used analyses of speech onset latencies and eye-movements in a task-oriented dialogue paradigm to investigate when speakers start planning their responses. German speakers heard a confederate describe sets of objects in utterances that either ended in a noun [e.g., Ich habe eine Tür und ein Fahrrad ("I have a door and a bicycle")] or a verb form [e.g., Ich habe eine Tür und ein Fahrrad besorgt ("I have gotten a door and a bicycle")], while the presence or absence of the final verb either was or was not predictable from the preceding sentence structure. In response, participants had to name any unnamed objects they could see in their own displays with utterances such as Ich habe ein Ei ("I have an egg"). The results show that speakers begin to plan their turns as soon as sufficient information is available to do so, irrespective of further incoming words.
Affiliation(s)
- Mathias Barthel: Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Sebastian Sauppe: Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Department of Comparative Linguistics, University of Zurich, Zurich, Switzerland
- Stephen C Levinson: Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Antje S Meyer: Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
24
Deutsch A, Velan H, Michaly T. Decomposition in a non-concatenated morphological structure involves more than just the roots: Evidence from fast priming. Q J Exp Psychol (Hove) 2016; 71:1-9. PMID: 27759501; DOI: 10.1080/17470218.2016.1250788.
Abstract
Complex words in Hebrew are composed of two non-concatenated morphemes: a consonantal root embedded in a nominal or verbal word-pattern morpho-phonological unit made up of vowels or vowels and consonants. Research on written-word recognition has revealed a robust effect of the roots and the verbal-patterns, but not of the nominal-patterns, on word recognition. These findings suggest that the Hebrew lexicon is organized and accessed via roots. We explored the hypothesis that the absence of a nominal-pattern effect reflects methodological limitations of the experimental paradigms used in previous studies. Specifically, the potential facilitative effect induced by a shared nominal-pattern was counteracted by an interference effect induced by the competition between the roots of two words derived from different roots but with the same nominal-pattern. In the current study, a fast-priming paradigm for sentence reading and a "delayed-letters" procedure were used to isolate the initial effect of nominal-patterns on lexical access. The results, based on eye-fixation latency, demonstrated a facilitatory effect induced by nominal-pattern primes relative to orthographic control primes when presented for 33 or 42 ms. The results are discussed in relation to the role of the word-pattern as an organizing principle of the Hebrew lexicon, together with the roots.
Affiliation(s)
- Avital Deutsch: Seymour Fox School of Education, The Hebrew University of Jerusalem, Jerusalem, Israel
- Hadas Velan: Seymour Fox School of Education, The Hebrew University of Jerusalem, Jerusalem, Israel; Levinsky College of Education, Tel-Aviv, Israel
- Tamar Michaly: Seymour Fox School of Education, The Hebrew University of Jerusalem, Jerusalem, Israel
25
Chelnokova O, Laeng B, Løseth G, Eikemo M, Willoch F, Leknes S. The µ-opioid system promotes visual attention to faces and eyes. Soc Cogn Affect Neurosci 2016; 11:1902-1909. PMID: 27531386; DOI: 10.1093/scan/nsw116.
Abstract
Paying attention to others' faces and eyes is a cornerstone of human social behavior. The µ-opioid receptor (MOR) system, central to social reward-processing in rodents and primates, has been proposed to mediate the capacity for affiliative reward in humans. We assessed the role of the human MOR system in visual exploration of faces and eyes of conspecifics. Thirty healthy males received a novel, bidirectional battery of psychopharmacological treatment (an MOR agonist, a non-selective opioid antagonist, or placebo, on three separate days). Eye-movements were recorded while participants viewed facial photographs. We predicted that the MOR system would promote visual exploration of faces, and hypothesized that MOR agonism would increase, and antagonism would decrease, overt attention to the information-rich eye region. The expected linear effect of MOR manipulation on visual attention to the stimuli was observed, such that MOR agonism increased while antagonism decreased visual exploration of faces and overt attention to the eyes. The observed effects suggest that the human MOR system promotes overt visual attention to socially significant cues, in line with theories linking reward value to gaze control and target selection. Enhanced attention to others' faces and eyes represents a putative behavioral mechanism through which the human MOR system promotes social interest.
Affiliation(s)
- Olga Chelnokova: Department of Psychology, University of Oslo, Oslo N-0317, Norway
- Bruno Laeng: Department of Psychology, University of Oslo, Oslo N-0317, Norway
- Guro Løseth: Department of Psychology, University of Oslo, Oslo N-0317, Norway
- Marie Eikemo: Department of Psychology, University of Oslo, Oslo N-0317, Norway; Norwegian Center for Addiction Research, University of Oslo, Oslo N-0318, Norway; Division of Mental Health and Addiction, Oslo University Hospital, Oslo N-0318, Norway
- Frode Willoch: Department of Medicine, University of Oslo, Oslo N-0316, Norway
- Siri Leknes: Department of Psychology, University of Oslo, Oslo N-0317, Norway; Department of Medicine, University of Oslo, Oslo N-0316, Norway; The Intervention Centre, Oslo University Hospital, Oslo N-0424, Norway
26
Piccardi L, De Luca M, Nori R, Palermo L, Iachini F, Guariglia C. Navigational Style Influences Eye Movement Pattern during Exploration and Learning of an Environmental Map. Front Behav Neurosci 2016; 10:140. PMID: 27445735; PMCID: PMC4925711; DOI: 10.3389/fnbeh.2016.00140.
Abstract
During navigation people may adopt three different spatial styles (i.e., Landmark, Route, and Survey). Landmark style (LS) people are able to recall familiar landmarks but cannot combine them with directional information; Route style (RS) people connect landmarks to each other using egocentric information about direction; Survey style (SS) people use a map-like representation of the environment. SS individuals generally navigate better than LS and RS people. Fifty-one college students (20 LS, 17 RS, and 14 SS) took part in the experiment. Spatial cognitive style (SCS) was assessed by means of the SCS test; participants then had to learn a schematic map of a city and, after 5 min, had to recall the path depicted on it. During the learning and delayed recall phases, eye movements were recorded. Our aim was to investigate whether there is a distinctive way of exploring an environmental map related to the individual's spatial style. Results support differences in the strategies used by the three spatial styles for learning the path and its delayed recall. Specifically, LS individuals produced a greater number of fixations of short duration, while the opposite eye-movement pattern characterized SS individuals. Moreover, SS individuals showed a more widespread and comprehensive exploration pattern of the map, while LS individuals focused their exploration on the path and related targets. RS individuals showed a pattern of exploration at a level of proficiency between that of LS and SS individuals. We discuss the clinical and anatomical implications of our data.
Affiliation(s)
- Laura Piccardi: Department of Life, Health and Environmental Science, University of L'Aquila, L'Aquila, Italy; Neuropsychology Unit, IRCCS Fondazione Santa Lucia, Rome, Italy
- Maria De Luca: Neuropsychology Unit, IRCCS Fondazione Santa Lucia, Rome, Italy
- Raffaella Nori: Department of Psychology, University of Bologna, Bologna, Italy
- Liana Palermo: Neuropsychology Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Medical and Surgical Science, University Magna Graecia, Catanzaro, Italy
- Fabiana Iachini: Department of Life, Health and Environmental Science, University of L'Aquila, L'Aquila, Italy
- Cecilia Guariglia: Neuropsychology Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Psychology, "Sapienza" University of Rome, Rome, Italy
Collapse
27
Cornelissen KK, Cornelissen PL, Hancock PJB, Tovée MJ. Fixation patterns, not clinical diagnosis, predict body size over-estimation in eating disordered women and healthy controls. Int J Eat Disord 2016; 49:507-18. PMID: 26996142; PMCID: PMC5071724; DOI: 10.1002/eat.22505.
Abstract
OBJECTIVE A core feature of anorexia nervosa (AN) is an over-estimation of body size. Women with AN have a different pattern of eye-movements when judging bodies, but it is unclear whether this is specific to their diagnosis or whether it is found in anyone who over-estimates body size. METHOD To address this question, we compared the eye movement patterns of three participant groups while they carried out a body size estimation task: (i) 20 women with recovering/recovered anorexia (rAN) who had concerns about body shape and weight and who over-estimated body size, (ii) 20 healthy controls who had normative levels of concern about body shape and who estimated body size accurately, and (iii) 20 healthy controls who had normative levels of concern about body shape but who did over-estimate body size. RESULTS Comparisons between the three groups showed that: (i) accurate body size estimators tended to look more in the waist region, and this was independent of clinical diagnosis; and (ii) there is a pattern of looking at images of bodies, particularly viewing the upper parts of the torso and face, which is specific to participants with rAN but independent of accuracy in body size estimation. DISCUSSION Since the over-estimating controls did not share the body image concerns that women with rAN report, their over-estimation cannot be explained by attitudinal concerns about body shape and weight. These results suggest that a distributed fixation pattern is associated with over-estimation of body size and should be addressed in treatment programs.
Affiliation(s)
- Martin J. Tovée
- Institute of Neuroscience, Newcastle University, Tyne and Wear, United Kingdom
28
Sung YT, Tu JY, Cha JH, Wu MD. Processing Preference Toward Object-Extracted Relative Clauses in Mandarin Chinese by L1 and L2 Speakers: An Eye-Tracking Study. Front Psychol 2016; 7:4. PMID: 26834677; PMCID: PMC4720787; DOI: 10.3389/fpsyg.2016.00004.
Abstract
The current study employed an eye-movement technique to explore the reading patterns for the two types of Chinese relative clauses, subject-extracted relative clauses (SRCs) and object-extracted relative clauses (ORCs), by native speakers (L1) and Japanese learners (L2) of Chinese. The data were analyzed in terms of gaze duration, regression path duration, and regression rate on the two critical regions, the head noun and the embedded verb. The results indicated that both the L1 and L2 participants spent less time on the head nouns in ORCs than in SRCs. Also, the L2 participants spent less time on the embedded verbs in ORCs than in SRCs, and their regression rate for embedded verbs was generally lower in ORCs than in SRCs. The findings showed that the participants experienced less processing difficulty in ORCs than in SRCs. These results suggest an ORC preference in L1 and L2 speakers of Chinese, which provides evidence in support of the linear distance hypothesis and implies that the syntactic nature of Chinese is at play in RC processing.
Affiliation(s)
- Yao-Ting Sung
- Department of Educational Psychology and Counseling, National Taiwan Normal University, Taipei, Taiwan; Center of Learning Technology for Chinese, National Taiwan Normal University, Taipei, Taiwan
- Jung-Yueh Tu
- International Chinese Education Center, School of Humanities, Shanghai Jiao Tong University, Shanghai, China
- Jih-Ho Cha
- Center of Learning Technology for Chinese, National Taiwan Normal University, Taipei, Taiwan
- Ming-Da Wu
- Center of Learning Technology for Chinese, National Taiwan Normal University, Taipei, Taiwan
29
Abstract
Studies investigating individual differences in reading ability often involve data sets containing a large number of collinear predictors and a small number of observations. In this paper, we discuss the method of Random Forests and demonstrate its suitability for addressing the statistical concerns raised by such datasets. The method is contrasted with other methods of estimating relative variable importance, especially Dominance Analysis and Multimodel Inference. All methods were applied to a dataset that gauged eye-movements during reading and offline comprehension in the context of multiple ability measures with high collinearity due to their shared verbal core. We demonstrate that the Random Forests method surpasses other methods in its ability to handle model overfitting, and accounts for a comparable or larger amount of variance in reading measures relative to other methods.
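The variable-importance approach this abstract describes can be illustrated with a small sketch. Everything here is an assumption for demonstration purposes: the collinear "ability measures" are synthetic stand-ins sharing a common latent core, not the authors' data, and the scikit-learn calls are one common way to fit a Random Forest and estimate relative importance, not the authors' exact analysis pipeline.

```python
# Sketch: Random Forest variable importance with few observations and
# collinear predictors. Synthetic data only -- not the authors' dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 80                                  # small sample, as in such studies
core = rng.normal(size=n)               # shared "verbal core" latent factor
# Six ability measures, all noisy copies of the core -> highly collinear.
X = np.column_stack([core + rng.normal(scale=0.3, size=n) for _ in range(6)])
y = 2 * core + rng.normal(scale=0.5, size=n)   # e.g., a reading measure

forest = RandomForestRegressor(n_estimators=500, oob_score=True,
                               random_state=0).fit(X, y)
print(f"out-of-bag R^2: {forest.oob_score_:.2f}")   # guards against overfitting

# Permutation importance: drop in fit when each predictor is shuffled.
imp = permutation_importance(forest, X, y, n_repeats=20, random_state=0)
print(imp.importances_mean.round(2))    # relative importance per predictor
```

Because the out-of-bag score is computed on trees that never saw each observation, it gives the honest fit estimate that the abstract credits Random Forests with, even when predictors far outnumber what a regression could support.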
Affiliation(s)
- Kazunaga Matsuki
- McMaster University, Department of Linguistics and Languages, Togo Salmon Hall, 1280 Main Street West, Hamilton, Ontario, L8S 4M2, Canada
- Victor Kuperman
- McMaster University, Department of Linguistics and Languages, Togo Salmon Hall, 1280 Main Street West, Hamilton, Ontario, L8S 4M2, Canada
- Julie A Van Dyke
- Haskins Laboratories, 300 George Street, New Haven, CT 06511, United States
30
Hout MC, Godwin HJ, Fitzsimmons G, Robbins A, Menneer T, Goldinger SD. Using multidimensional scaling to quantify similarity in visual search and beyond. Atten Percept Psychophys 2016; 78:3-20. PMID: 26494381; PMCID: PMC5523409; DOI: 10.3758/s13414-015-1010-6.
Abstract
Visual search is one of the most widely studied topics in vision science, both as an independent topic of interest, and as a tool for studying attention and visual cognition. A wide literature exists that seeks to understand how people find things under varying conditions of difficulty and complexity, and in situations ranging from the mundane (e.g., looking for one's keys) to those with significant societal importance (e.g., baggage or medical screening). A primary determinant of the ease and probability of success during search is the set of similarity relationships that exist in the search environment, such as the similarity between the background and the target, or the likeness of the non-targets to one another. A sense of similarity is often intuitive, but it is seldom quantified directly. This presents a problem: similarity relationships are imprecisely specified, limiting the researcher's capacity to examine their influence adequately. In this article, we present a novel approach to overcoming this problem that combines multidimensional scaling (MDS) analyses with behavioral and eye-tracking measurements. We propose a method whereby MDS can be repurposed to successfully quantify the similarity of experimental stimuli, thereby opening up theoretical questions in visual search and attention that cannot currently be addressed. These quantifications, in conjunction with behavioral and oculomotor measures, allow for critical observations about how similarity affects performance, information selection, and information processing. We provide a demonstration and tutorial of the approach, identify documented examples of its use, discuss how complementary computer vision methods could also be adopted, and close with a discussion of potential avenues for future application of this technique.
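The mechanical core of MDS, recovering a spatial configuration of items from nothing but their pairwise dissimilarities, can be sketched with classical (Torgerson) scaling. This is a minimal illustration under stated assumptions: the function name and the toy distance matrix are invented for the example, and the article's actual stimuli and similarity ratings are not reproduced here.

```python
# Sketch of classical (Torgerson) MDS: embed n items in k dimensions
# given only an (n, n) matrix of pairwise dissimilarities D.
import numpy as np

def classical_mds(D, k=2):
    """Return (n, k) coordinates whose distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)          # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]        # keep the k largest
    scale = np.sqrt(np.maximum(vals[idx], 0))
    return vecs[:, idx] * scale

# Toy check: three "stimuli" lying on a line at 0, 1, and 3 units.
pts = np.array([0.0, 1.0, 3.0])
D = np.abs(pts[:, None] - pts[None, :])     # pairwise distances
X = classical_mds(D, k=1)
# Inter-point distances in the recovered 1-D space match the input.
print(np.abs(X[:, None, 0] - X[None, :, 0]).round(2))
```

In practice the dissimilarities would come from participants' similarity ratings of stimulus pairs, and the recovered coordinates give the quantified similarity structure that can then be related to search performance and fixation measures.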
31
Arslan S, Bastiaanse R, Felser C. Looking at the evidence in visual world: eye-movements reveal how bilingual and monolingual Turkish speakers process grammatical evidentiality. Front Psychol 2015; 6:1387. PMID: 26441762; PMCID: PMC4584937; DOI: 10.3389/fpsyg.2015.01387.
Abstract
This study presents pioneering data on how adult early bilinguals (heritage speakers) and late bilingual speakers of Turkish and German process grammatical evidentiality in a visual world setting in comparison to monolingual speakers of Turkish. Turkish marks evidentiality, the linguistic reference to information source, through inflectional affixes signaling either direct (-DI) or indirect (-mIş) evidentiality. We conducted an eye-tracking-during-listening experiment where participants were given access to visual ‘evidence’ supporting the use of either a direct or indirect evidential form. The behavioral results indicate that the monolingual Turkish speakers comprehended direct and indirect evidential scenarios equally well. In contrast, both late and early bilinguals were less accurate and slower to respond to direct than to indirect evidentials. The behavioral results were also reflected in the proportions of looks data. That is, both late and early bilinguals fixated less frequently on the target picture in the direct than in the indirect evidential condition while the monolinguals showed no difference between these conditions. Taken together, our results indicate reduced sensitivity to the semantic and pragmatic function of direct evidential forms in both late and early bilingual speakers, suggesting a simplification of the Turkish evidentiality system in Turkish heritage grammars. We discuss our findings with regard to theories of incomplete acquisition and first language attrition.
Affiliation(s)
- Seçkin Arslan
- International Doctorate for Experimental Approaches to Language and Brain, University of Groningen, Groningen, Netherlands
- Roelien Bastiaanse
- Research Group Neurolinguistics, Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, Netherlands
- Claudia Felser
- Potsdam Research Institute for Multilingualism, University of Potsdam, Potsdam, Germany
32
Abstract
An important problem verb learners must solve is how to extend verbs. Children could use cross-situational information to guide their extensions; however, comparing events is difficult. Two studies test whether children benefit from initially seeing a pair of similar events ('progressive alignment') while learning new verbs, and whether this influence changes with age. In Study 1, 2½- and 3½-year-old children participated in an interactive task. Children who saw a pair of similar events and then varied events were able to extend verbs at test, differing from a control group; children who saw two pairs of varied events did not differ from the control group. In Study 2, events were presented on a monitor. Following the initial pair of events, which varied by condition, a Tobii x120 eye tracker recorded 2½-, 3½-, and 4½-year-olds' fixations to specific elements of events (areas of interest, AOIs) during the second pair of events, which was the same across conditions. After seeing the pair of highly similar events, 2½-year-olds showed significantly longer fixation durations to agents and to affected objects compared with the all-varied condition. At test, 3½-year-olds were able to extend the verb, but only in the progressive alignment condition. These results are important because they show that children's visual attention to relevant elements in dynamic events is influenced by their prior comparison experience, and that young children benefit from seeing similar events as they learn to compare events to each other.
Affiliation(s)
- Clare Burch
- Department of Psychology, Trinity University
- Gavin Fung
- Department of Psychology, Trinity University
33
Juravle G, Velasco C, Salgado-Montejo A, Spence C. The hand grasps the center, while the eyes saccade to the top of novel objects. Front Psychol 2015; 6:633. PMID: 26052291; PMCID: PMC4441126; DOI: 10.3389/fpsyg.2015.00633.
Abstract
In the present study, we investigated whether indenting the sides of novel objects (e.g., product packaging) would influence where people grasp, and hence focus their gaze, under the assumption that gaze precedes grasping. In Experiment 1, the participants grasped a selection of custom-made objects designed to resemble typical packaging forms with an indentation in the upper, middle, or lower part. In Experiment 2, eye movements were recorded while the participants viewed differently-sized (small, medium, and large) objects with the same three indentation positions tested in Experiment 1, together with a control object lacking any indentation. The results revealed that irrespective of the location of the indentation, the participants tended to grasp the mid-region of the object, with their index finger always positioned slightly above its midpoint. Importantly, the first visual fixation tended to fall in the cap region of the novel object. The participants also fixated for longer in this region. Furthermore, participants saccaded more often, as well as more rapidly, when directing their gaze to the upper region of the objects that they were required to inspect visually. Taken together, these results therefore suggest that different spatial locations on target objects are of interest to our eyes and hands.
Affiliation(s)
- Georgiana Juravle
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Carlos Velasco
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Alejandro Salgado-Montejo
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Universidad de La Sabana, Chía, Colombia
- Charles Spence
- Department of Experimental Psychology, University of Oxford, Oxford, UK
34
van Leeuwen PM, Gómez i Subils C, Jimenez AR, Happee R, de Winter JCF. Effects of visual fidelity on curve negotiation, gaze behaviour and simulator discomfort. Ergonomics 2015; 58:1347-1364. PMID: 25693035; DOI: 10.1080/00140139.2015.1005172.
Abstract
Technological developments have led to increased visual fidelity of driving simulators. However, simplified visuals have potential advantages, such as improved experimental control, reduced simulator discomfort and increased generalisability of results. In this driving simulator study, we evaluated the effects of visual fidelity on driving performance, gaze behaviour and subjective discomfort ratings. Twenty-four participants drove a track with 90° corners in (1) a high fidelity, textured environment, (2) a medium fidelity, non-textured environment without scenery objects and (3) a low-fidelity monochrome environment that only showed lane markers. The high fidelity level resulted in higher steering activity on straight road segments, higher driving speeds and higher gaze variance than the lower fidelity levels. No differences were found between the two lower fidelity levels. In conclusion, textures and objects were found to affect steering activity and driving performance; however, gaze behaviour during curve negotiation and self-reported simulator discomfort were unaffected. PRACTITIONER SUMMARY In a driving simulator study, three levels of visual fidelity were evaluated. The results indicate that the highest fidelity level, characterised by a textured environment, resulted in higher steering activity, higher driving speeds and higher variance of horizontal gaze than the two lower fidelity levels without textures.
Affiliation(s)
- Peter M van Leeuwen
- Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands
35
Abstract
Recent studies (e.g., Kuhn and Tatler, 2005) have suggested that magic tricks can provide a powerful and compelling domain for the study of attention and perception. In particular, many stage illusions involve attentional misdirection, guiding the observer's gaze to a salient object or event, while another critical action, such as sleight of hand, is taking place. Even if the critical action takes place in full view, people typically fail to see it due to inattentional blindness (IB). In an eye-tracking experiment, participants watched videos of a new magic trick, wherein a coin placed beneath a napkin disappears, reappearing under a different napkin. Appropriately deployed attention would allow participants to detect the "secret" event that underlies the illusion (a moving coin), as it happens in full view and is visible for approximately 550 ms. Nevertheless, we observed high rates of IB. Unlike prior research, eye-movements during the critical event showed different patterns for participants, depending upon whether they saw the moving coin. The results also showed that when participants watched several "practice" videos without any moving coin, they became far more likely to detect the coin in the critical trial. Taken together, the findings are consistent with perceptual load theory (Lavie and Tsal, 1994).
Affiliation(s)
- Anthony S. Barnhart
- Cognitive Research Lab, Department of Psychological Sciences, Northern Arizona University, Flagstaff, AZ, USA
36
Abstract
Children seem able to efficiently interpret a variety of linguistic cues during speech comprehension, yet have difficulty interpreting sources of nonlinguistic and paralinguistic information that accompany speech. The current study asked whether (paralinguistic) voice-activated role knowledge is rapidly interpreted in coordination with a linguistic cue (a sentential action) during speech comprehension in an eye-tracked sentence comprehension task with children (ages 3-10 years) and college-aged adults. Participants were initially familiarized with 2 talkers who identified their respective roles (e.g., PRINCESS and PIRATE) before hearing a previously introduced talker name an action and object ("I want to hold the sword," in the pirate's voice). As the sentence was spoken, eye movements were recorded to 4 objects that varied in relationship to the sentential talker and action (target: SWORD, talker-related: SHIP, action-related: WAND, and unrelated: CARRIAGE). The task was to select the named image. Even young child listeners rapidly combined inferences about talker identity with the action, allowing them to fixate on the target before it was mentioned, although there were developmental and vocabulary differences on this task. Results suggest that children, like adults, store real-world knowledge of a talker's role and actively use this information to interpret speech.
37
Norbury CF. Sources of variation in developmental language disorders: evidence from eye-tracking studies of sentence production. Philos Trans R Soc Lond B Biol Sci 2013; 369:20120393. PMID: 24324237; PMCID: PMC3866423; DOI: 10.1098/rstb.2012.0393.
Abstract
Skilled sentence production involves distinct stages of message conceptualization (deciding what to talk about) and message formulation (deciding how to talk about it). Eye-movement paradigms provide a mechanism for observing how speakers accomplish these aspects of production in real time. These methods have recently been applied to children with autism spectrum disorder (ASD) and specific language impairment (LI) in an effort to reveal qualitative differences between groups in sentence production processes. Findings support a multiple-deficit account in which language production is influenced not only by lexical and syntactic constraints, but also by variation in attention control, inhibition, and social competence. Thus, children with ASD are especially vulnerable to atypical patterns of visual inspection and verbal utterance. The potential to influence attentional focus and to prime appropriate language structures is considered as a mechanism for facilitating language adaptation and learning.
38
Leinenger M, Rayner K. Eye Movements while Reading Biased Homographs: Effects of Prior Encounter and Biasing Context on Reducing the Subordinate Bias Effect. J Cogn Psychol (Hove) 2013; 25:665-681. PMID: 24073328; PMCID: PMC3780419; DOI: 10.1080/20445911.2013.806513.
Abstract
Readers experience processing difficulties when reading biased homographs preceded by subordinate-biasing contexts. Attempts to overcome this processing deficit have often failed to reduce the subordinate bias effect (SBE). In the present studies, we examined the processing of biased homographs preceded by single-sentence, subordinate-biasing contexts, and varied whether this preceding context contained a prior instance of the homograph or a control word/phrase. Having previously encountered the homograph earlier in the sentence reduced the SBE for the subsequent encounter, while simply instantiating the subordinate meaning produced processing difficulty. We compared these reductions in reading times to differences in processing time between dominant-biased repeated and non-repeated conditions in order to verify that the reductions observed in the subordinate cases did not simply reflect a general repetition benefit. Our results indicate that a strong, subordinate-biasing context can interact during lexical access to overcome the activation from meaning frequency and reduce the SBE during reading.
39
Abstract
Predicting visual information facilitates efficient processing of visual signals. Higher visual areas can support the processing of incoming visual information by generating predictive models that are fed back to lower visual areas. Functional brain imaging has previously shown that predictions interact with visual input already at the level of the primary visual cortex (V1; Harrison et al., 2007; Alink et al., 2010). Given that fixation changes up to four times a second in natural viewing conditions, cortical predictions are effective in V1 only if they are fed back in time for the processing of the next stimulus and at the corresponding new retinotopic position. Here, we tested whether spatio-temporal predictions are updated before, during, or shortly after an inter-hemifield saccade is executed, and thus, whether the predictive signal is transferred swiftly across hemifields. Using an apparent motion illusion, we induced an internal motion model that is known to produce a spatio-temporal prediction signal along the apparent motion trace in V1 (Muckli et al., 2005; Alink et al., 2010). We presented participants with both visually predictable and unpredictable targets on the apparent motion trace. During the task, participants saccaded across the illusion whilst detecting the target. As found previously, predictable stimuli were detected more frequently than unpredictable stimuli. Furthermore, we found that the detection advantage of predictable targets is detectable as early as 50-100 ms after saccade offset. This result demonstrates the rapid nature of the transfer of a spatio-temporally precise predictive signal across hemifields, in a paradigm previously shown to modulate V1.
Affiliation(s)
- Petra Vetter
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, Glasgow, UK
40
Abstract
The boundary paradigm (Rayner, 1975) was used to examine whether high level information affects preview benefit during Chinese reading. In two experiments, readers read sentences with a 1-character target word while their eye movements were monitored. In Experiment 1, the semantic relatedness between the target word and the preview word was manipulated so that there were semantically related and unrelated preview words, both of which were not plausible in the sentence context. No significant differences between these two preview conditions were found, indicating no effect of semantic preview. In Experiment 2, we further examined semantic preview effects with plausible preview words. There were four types of previews: identical, related & plausible, unrelated & plausible, and unrelated & implausible. The results revealed a significant effect of plausibility as single fixation and gaze duration on the target region were shorter in the two plausible conditions than in the implausible condition. Moreover, there was some evidence for a semantic preview benefit as single fixation duration on the target region was shorter in the related & plausible condition than the unrelated & plausible condition. Implications of these results for processing of high level information during Chinese reading are discussed.
Affiliation(s)
- Jinmian Yang
- Department of Psychology, University of California, San Diego, San Diego, CA 92092, USA
- Suiping Wang
- Department of Psychology, South China Normal University, Guangzhou, China
- Xiuhong Tong
- Department of Psychology, South China Normal University, Guangzhou, China
- Keith Rayner
- Department of Psychology, University of California, San Diego, San Diego, CA 92092, USA
41
Frey HP, Wirz K, Willenbockel V, Betz T, Schreiber C, Troscianko T, König P. Beyond correlation: do color features influence attention in rainforest? Front Hum Neurosci 2011; 5:36. PMID: 21519395; PMCID: PMC3079176; DOI: 10.3389/fnhum.2011.00036.
Abstract
Recent research indicates a direct relationship between low-level color features and visual attention under natural conditions. However, the design of these studies allows only correlational observations and no inference about mechanisms. Here we go a step further to examine the nature of the influence of color features on overt attention in an environment in which trichromatic color vision is advantageous. We recorded eye-movements of color-normal and deuteranope human participants freely viewing original and modified rainforest images. Eliminating red–green color information dramatically alters fixation behavior in color-normal participants. Changes in feature correlations and variability over subjects and conditions provide evidence for a causal effect of red–green color-contrast. The effects of blue–yellow contrast are much smaller. However, globally rotating hue in color space in these images reveals a mechanism analyzing color-contrast invariant of a specific axis in color space. Surprisingly, in deuteranope participants we find significantly elevated red–green contrast at fixation points, comparable to color-normal participants. Temporal analysis indicates that this is due to compensatory mechanisms acting on a slower time scale. Taken together, our results suggest that under natural conditions red–green color information contributes to overt attention at a low-level (bottom-up). Nevertheless, the results of the image modifications and deuteranope participants indicate that evaluation of color information is done in a hue-invariant fashion.
Affiliation(s)
- Hans-Peter Frey
- Department of Neurobiopsychology, Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany