1
Abstract
In this study, we present TURead, an eye movement dataset of silent and oral sentence reading in Turkish, an agglutinative language with a shallow orthography that remains understudied in reading research. TURead provides empirical data for investigating the relationship between morphology and oculomotor control. We employ a target-word approach in which target words are manipulated by word length and by the addition of two commonly used suffixes in Turkish. The dataset contains well-established eye movement variables; prelexical characteristics such as vowel harmony and bigram-trigram frequencies; word features such as word length, predictability, and frequency; eye-voice span measures; Cloze-test scores for root-word and suffix predictability; and the scores obtained from two working memory tests. Our findings on fixation parameters and word characteristics are in line with the patterns reported in the relevant literature.
Affiliation(s)
- Cengiz Acartürk
- Cognitive Science Department, Jagiellonian University, Kraków, Poland.
- Cognitive Science Department, Middle East Technical University, Çankaya/Ankara, Turkey.
- Ayşegül Özkan
- Cognitive Science Department, Jagiellonian University, Kraków, Poland
- Cognitive Science Department, Middle East Technical University, Çankaya/Ankara, Turkey
- Tuğçe Nur Pekçetin
- Cognitive Science Department, Middle East Technical University, Çankaya/Ankara, Turkey
- Zuhal Ormanoğlu
- Cognitive Science Department, Middle East Technical University, Çankaya/Ankara, Turkey
- Bilal Kırkıcı
- Department of Foreign Language Education, Middle East Technical University, Çankaya/Ankara, Turkey
2
Siegelman N, Schroeder S, Acartürk C, Ahn HD, Alexeeva S, Amenta S, Bertram R, Bonandrini R, Brysbaert M, Chernova D, Da Fonseca SM, Dirix N, Duyck W, Fella A, Frost R, Gattei CA, Kalaitzi A, Kwon N, Lõo K, Marelli M, Papadopoulos TC, Protopapas A, Savo S, Shalom DE, Slioussar N, Stein R, Sui L, Taboh A, Tønnesen V, Usal KA, Kuperman V. Expanding horizons of cross-linguistic research on reading: The Multilingual Eye-movement Corpus (MECO). Behav Res Methods 2022; 54:2843-2863. PMID: 35112286; PMCID: PMC8809631; DOI: 10.3758/s13428-021-01772-6.
Abstract
Scientific studies of language behavior need to grapple with a large diversity of languages in the world and, for reading, a further variability in writing systems. Yet the ability to form meaningful theories of reading is contingent on the availability of cross-linguistic behavioral data. This paper offers new insights into which aspects of reading behavior are shared and which vary systematically across languages, through an investigation of eye-tracking data from 13 languages recorded during text reading. We begin by reporting a bibliometric analysis of eye-tracking studies showing that the current empirical base is insufficient for cross-linguistic comparisons. We respond to this empirical lacuna by presenting the Multilingual Eye-Movement Corpus (MECO), the product of an international multi-lab collaboration. We examine which behavioral indices differentiate reading across written languages and which measures are stable across languages. One finding is that readers of different languages vary considerably in their skipping rate (i.e., the likelihood of not fixating a word even once) and that this variability is explained by cross-linguistic differences in word length distributions. In contrast, when readers do not skip a word, they tend to spend a similar average time viewing it. We outline the implications of these findings for theories of reading, describe prospective uses of the publicly available MECO data, and present plans for its further development.
Affiliation(s)
- Noam Siegelman
- Haskins Laboratories, 300 George Street, Suite #900, New Haven, CT, 06511, USA.
- Cengiz Acartürk
- Middle East Technical University, Ankara, Turkey
- Jagiellonian University, Krakow, Poland
- Daria Chernova
- Saint Petersburg State University, St Petersburg, Russia
- Ram Frost
- The Hebrew University, Jerusalem, Israel
- Carolina A Gattei
- Universidad de Buenos Aires, Buenos Aires, Argentina
- Universidad Torcuato di Tella, Buenos Aires, Argentina
- Pontificia Universidad Católica Argentina, Buenos Aires, Argentina
- Diego E Shalom
- Universidad de Buenos Aires, Buenos Aires, Argentina
- Universidad Torcuato di Tella, Buenos Aires, Argentina
- Natalia Slioussar
- Saint Petersburg State University, St Petersburg, Russia
- Higher School of Economics (HSE), Moscow, Russia
- Roni Stein
- The Hebrew University, Jerusalem, Israel
- Analí Taboh
- Universidad de Buenos Aires, Buenos Aires, Argentina
- Universidad Torcuato di Tella, Buenos Aires, Argentina
3
Kaakinen JK, Werlen E, Kammerer Y, Acartürk C, Aparicio X, Baccino T, Ballenghein U, Bergamin P, Castells N, Costa A, Falé I, Mégalakaki O, Ruiz Fernández S. IDEST: International Database of Emotional Short Texts. PLoS One 2022; 17:e0274480. PMID: 36206273; PMCID: PMC9544016; DOI: 10.1371/journal.pone.0274480.
Abstract
We introduce IDEST, a database of 250 short stories rated for valence, arousal, and comprehensibility in two languages. The texts, which follow a first-person narrative structure and are controlled for length, were originally written in six languages (Finnish, French, German, Portuguese, Spanish, and Turkish) and rated for arousal, valence, and comprehensibility in the original language. The stories were then translated into English, and the same ratings for the English translations were collected via an internet survey tool (N = 573). In addition to the rating data, we report readability indexes for the original and English texts. The texts have been categorized into different story types based on their emotional arc; they score high on comprehensibility and cover a wide range of emotional valence and arousal levels. A comparative analysis of the ratings of the original texts and the English translations showed that valence ratings were very similar across languages, whereas correlations between language versions for arousal and comprehensibility were modest. Comprehensibility ratings correlated with only some of the readability indexes. The database is published at osf.io/9tga3 and is freely available for academic research.
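Readability indexes of the kind reported here are typically formulas over sentence length and syllable counts. As an illustration only (this is the standard Flesch Reading Ease formula with a naive vowel-group syllable estimate, not the authors' implementation or the exact indexes used in IDEST):

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Naive English syllable estimate: count vowel groups per word
    # (a rough heuristic; real syllabification is more involved).
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(round(flesch_reading_ease("I walked to the park. The sun was warm."), 1))
```

Short, common-word sentences score high; long sentences with polysyllabic words score much lower, which is the behavior such indexes are designed to capture.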
Affiliation(s)
- Johanna K. Kaakinen
- Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
- INVEST Research Flagship, University of Turku, Turku, Finland
- Egon Werlen
- Institute for Research in Open, Distance and eLearning, Swiss Distance University of Applied Sciences, Brig, Switzerland
- Yvonne Kammerer
- Leibniz-Institut für Wissensmedien, Tübingen, Germany
- Stuttgart Media University, Stuttgart, Germany
- Cengiz Acartürk
- Cognitive Science Department, Jagiellonian University, Kraków, Poland
- Cognitive Science Department, Middle East Technical University, Ankara, Turkey
- Ugo Ballenghein
- Université Paris Est Créteil, Bonneuil, France
- Université Paris 8, Saint-Denis, France
- Per Bergamin
- Institute for Research in Open, Distance and eLearning, Swiss Distance University of Applied Sciences, Brig, Switzerland
- Armanda Costa
- Center of Linguistics, School of Arts and Humanities, University of Lisbon, Lisbon, Portugal
- Isabel Falé
- Center of Linguistics, School of Arts and Humanities, University of Lisbon, Lisbon, Portugal
- Universidade Aberta, Lisbon, Portugal
- Olga Mégalakaki
- Université de Picardie Jules Verne, Amiens, France
- Sigmund Freud University, Paris, France
- Susana Ruiz Fernández
- Leibniz-Institut für Wissensmedien, Tübingen, Germany
- FOM University of Applied Sciences, Essen, Germany
4
Arslan Aydin Ü, Kalkan S, Acartürk C. Speech Driven Gaze in a Face-to-Face Interaction. Front Neurorobot 2021; 15:598895. PMID: 33746729; PMCID: PMC7970197; DOI: 10.3389/fnbot.2021.598895.
Abstract
Gaze and language are major pillars of multimodal communication. Gaze is a non-verbal mechanism that conveys crucial social signals in face-to-face conversation. However, compared to language, gaze has been less studied as a communication modality. The purpose of the present study is twofold: (i) to investigate gaze direction (i.e., aversion and face gaze) and its relation to speech in face-to-face interaction; and (ii) to propose a computational model of multimodal communication that predicts gaze direction using high-level speech features. Twenty-eight pairs of participants took part in data collection; the experimental setting was a mock job interview, and eye movements were recorded for both participants. The speech data were annotated according to the ISO 24617-2 standard for dialogue act annotation, as well as with manual tags based on previous social gaze studies. A comparative analysis was conducted with Convolutional Neural Network (CNN) models employing two specific architectures, VGGNet and ResNet. The results showed that the frequency and duration of gaze differ significantly depending on the role of the participant. Moreover, the ResNet models achieved higher than 70% accuracy in predicting gaze direction.
Affiliation(s)
- Ülkü Arslan Aydin
- Cognitive Science Department, Middle East Technical University, Ankara, Turkey
- Sinan Kalkan
- Computer Engineering Department, Middle East Technical University, Ankara, Turkey
- Cengiz Acartürk
- Cognitive Science Department, Middle East Technical University, Ankara, Turkey
- Cyber Security Department, Middle East Technical University, Ankara, Turkey
5
Abstract
Reading requires the assembly of cognitive processes across a wide spectrum, from low-level visual perception to high-level discourse comprehension. One approach to unravelling the dynamics of these processes is to determine how eye movements are influenced by the characteristics of the text, in particular which features of the words within the perceptual span maximise information intake through foveal, spillover, parafoveal, and predictive processing. One way to test the generalisability of current proposals of such distributed processing is to examine them across different languages. For Turkish, an agglutinative language with a shallow orthography-phonology mapping, we replicate the well-known canonical main effects of frequency and predictability of the fixated word, as well as effects of incoming saccade amplitude and fixation location within the word, on single-fixation durations, with data from 35 adults reading 120 nine-word sentences. Evidence for previously reported effects of the characteristics of neighbouring words, and for interactions, was mixed. There was no evidence for the expected Turkish-specific morphological effect of the number of inflectional suffixes on single-fixation durations. To control for the word-selection bias associated with single-fixation durations, we also tested effects on word skipping, single-fixation, and multiple-fixation cases with a baseline-category logit model, assuming that difficulty increases with the number of fixations. With this model, we observed significant effects of word characteristics and of the number of inflectional suffixes of the foveal word on the probabilities of the number of fixations, while the effects of the characteristics of neighbouring words and interactions remained mixed.
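A baseline-category logit model of the kind described relates word predictors to the probability of each fixation-count category (skip, single fixation, multiple fixations) via separate log-odds against a baseline category. The sketch below shows only the model's functional form; the coefficients, predictor names, and the choice of "single fixation" as baseline are hypothetical illustrations, not fitted values from the study:

```python
import numpy as np

# Baseline-category logit with categories skip / single / multiple and
# "single" as baseline: log(P(k)/P(single)) = b0_k + b1_k*length + b2_k*log_freq
# for each non-baseline category k. Coefficients are invented for illustration:
# short, frequent words are skipped more; long, rare words are refixated more.
coef = {
    "skip":     np.array([ 1.5, -0.45,  0.30]),
    "multiple": np.array([-2.0,  0.35, -0.25]),
}

def category_probs(length, log_freq):
    x = np.array([1.0, length, log_freq])
    logits = {k: b @ x for k, b in coef.items()}
    denom = 1.0 + sum(np.exp(v) for v in logits.values())
    probs = {k: np.exp(v) / denom for k, v in logits.items()}
    probs["single"] = 1.0 / denom   # baseline category
    return probs

p = category_probs(length=9, log_freq=1.0)
print({k: round(v, 3) for k, v in p.items()})
```

The three probabilities always sum to one, and with these toy coefficients increasing word length shifts probability mass from skipping toward multiple fixations, mirroring the qualitative pattern the model is meant to capture.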
Affiliation(s)
- Ayşegül Özkan
- Cognitive Science Department, Informatics Institute, Orta Doğu Teknik Üniversitesi, Ankara, Turkey
- Figen Beken Fikri
- Cognitive Science Department, Informatics Institute, Orta Doğu Teknik Üniversitesi, Ankara, Turkey
- Bilal Kırkıcı
- Department of Foreign Language Education, Orta Doğu Teknik Üniversitesi, Ankara, Turkey
- Reinhold Kliegl
- Division of Training and Movement Science, University of Potsdam, Potsdam, Germany
- Cengiz Acartürk
- Cognitive Science Department, Informatics Institute, Orta Doğu Teknik Üniversitesi, Ankara, Turkey
6
İşbilir E, Çakır MP, Acartürk C, Tekerek AŞ. Towards a Multimodal Model of Cognitive Workload Through Synchronous Optical Brain Imaging and Eye Tracking Measures. Front Hum Neurosci 2019; 13:375. PMID: 31708760; PMCID: PMC6820355; DOI: 10.3389/fnhum.2019.00375.
Abstract
Recent advances in neuroimaging technologies have made multimodal analysis of operators' cognitive processes in complex task settings increasingly practical. In this exploratory study, we used optical brain imaging and mobile eye tracking to investigate behavioral and neurophysiological differences between expert and novice operators while they operated a human-machine interface in normal and adverse conditions. In line with related work, we observed that experts tended to have lower prefrontal oxygenation and to exhibit gaze patterns better aligned with the optimal task sequence, with shorter fixation durations, as compared to novices. These trends reached statistical significance only in the adverse condition, in which the operators were prompted with an unexpected error message. Comparisons of hemodynamic and gaze measures before and after the error message indicated that experts' neurophysiological response involved a systematic increase in bilateral dorsolateral prefrontal cortex (dlPFC) activity accompanied by an increase in fixation durations, suggesting a shift in attentional state, possibly from routine process execution to problem detection and resolution. The novices' response was not as strong: a slight increase only in the left dlPFC with a decreasing trend in fixation durations, indicative of visual search for cues to make sense of the unanticipated situation. A linear discriminant analysis model capitalizing on the covariance structure among hemodynamic and eye movement measures distinguished experts from novices with 91% accuracy. Despite the small sample size, the performance of this analysis, combining eye fixation and dorsolateral oxygenation measures before and after an unexpected event, suggests that multimodal approaches may be fruitful for distinguishing novice from expert performance in similar neuroergonomic applications in the field.
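Linear discriminant analysis of this kind finds the linear combination of features that best separates two groups. The toy reconstruction below uses simulated data; the feature values, group sizes, and separation are invented for illustration and do not come from the study:

```python
import numpy as np

# Hypothetical features per operator: [dlPFC oxygenation change,
# fixation-duration change] around an unexpected error event.
rng = np.random.default_rng(42)
novices = rng.normal([0.1, -10.0], [0.05, 8.0], size=(12, 2))  # label 0
experts = rng.normal([0.4,  25.0], [0.05, 8.0], size=(12, 2))  # label 1
X = np.vstack([novices, experts])
y = np.array([0] * 12 + [1] * 12)

# Fisher's linear discriminant: project onto w = Sw^{-1} (mu1 - mu0),
# then threshold at the midpoint of the projected class means.
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)
threshold = ((X[y == 0] @ w).mean() + (X[y == 1] @ w).mean()) / 2
pred = (X @ w > threshold).astype(int)
print("training accuracy:", (pred == y).mean())
```

With clearly separated group means, the projected scores split cleanly at the threshold; with small samples, as the abstract notes, cross-validation rather than training accuracy would be the honest performance estimate.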
Affiliation(s)
- Erdinç İşbilir
- Advanced Technologies Directorate, Guidance and Photonics Division, Roketsan Missiles Industries Inc., Ankara, Turkey
- Murat Perit Çakır
- Department of Cognitive Science, Informatics Institute, Middle East Technical University, Ankara, Turkey
- Cengiz Acartürk
- Department of Cognitive Science, Informatics Institute, Middle East Technical University, Ankara, Turkey
- Ali Şimşek Tekerek
- Advanced Technologies Directorate, Guidance and Photonics Division, Roketsan Missiles Industries Inc., Ankara, Turkey
7
Semin GR, Palma T, Acartürk C, Dziuba A. Gender is not simply a matter of black and white, or is it? Philos Trans R Soc Lond B Biol Sci 2019; 373:rstb.2017.0126. PMID: 29914994; PMCID: PMC6015822; DOI: 10.1098/rstb.2017.0126.
Abstract
Based on research in physical anthropology, we argue that brightness marks the abstract category of gender, with light colours marking the female gender and dark colours marking the male gender. In a set of three experiments, we examine this hypothesis, first in a speeded gender classification experiment with male and female names presented in black and white. As expected, male names in black and female names in white are classified faster than the reverse gender-colour combinations. The second experiment relies on a gender classification task involving the disambiguation of very briefly appearing non-descript stimuli in the form of black and white ‘blobs’. The former are classified predominantly as male and the latter as female names. Finally, the processes driving light and dark object choices for males and females are examined by tracking the number of fixations and their duration in an eye-tracking experiment. The results reveal that when choosing for a male target, participants look longer and make more fixations on dark objects, and the same for light objects when choosing for a female target. The implications of these findings, which repeatedly reveal the same data patterns across experiments with Dutch, Portuguese and Turkish samples for the abstract category of gender, are discussed. The discussion attempts to enlarge the subject beyond mainstream models of embodied grounding. This article is part of the theme issue ‘Varieties of abstract concepts: development, use and representation in the brain’.
Affiliation(s)
- Gün R Semin
- William James Center for Research, ISPA - Instituto Universitário, Rua Jardim do Tabaco 41, 1149-041 Lisboa, Portugal
- Department of Psychology, Utrecht University, 3584 CS Utrecht, The Netherlands
- Tomás Palma
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013 Lisboa, Portugal
- Cengiz Acartürk
- Informatics Institute, Department of Cognitive Sciences, Middle East Technical University, 06800 Ankara, Turkey
- Aleksandra Dziuba
- William James Center for Research, ISPA - Instituto Universitário, Rua Jardim do Tabaco 41, 1149-041 Lisboa, Portugal
8
Abstract
The analysis of dynamic scenes has been a challenging domain in eye tracking research. This study presents a framework, named MAGiC, for analyzing gaze contact and gaze aversion in face-to-face communication. MAGiC provides an environment that can automatically detect and track the conversation partner's face, overlay gaze data on top of the face video, and incorporate speech by means of speech-act annotation. Specifically, MAGiC integrates eye tracking data for gaze, audio data for speech segmentation, and video data for face tracking. MAGiC is an open-source framework, and its usage is demonstrated via publicly available video content and wiki pages. We explored the capabilities of MAGiC through a pilot study and showed that it facilitates the analysis of dynamic gaze data by reducing the annotation effort and the time spent on manual analysis of video data.
Affiliation(s)
- Ülkü Arslan Aydın
- Cognitive Science Program, Middle East Technical University, Ankara, Turkey
- Sinan Kalkan
- Computer Science Department, Middle East Technical University, Ankara, Turkey
- Cengiz Acartürk
- Cognitive Science Program, Middle East Technical University, Ankara, Turkey
9
Alaçam Ö, Habel C, Acartürk C. Switching reference frame preferences during verbally assisted haptic graph comprehension. Cogn Process 2015. PMID: 26224279; DOI: 10.1007/s10339-015-0730-9.
Abstract
Haptic-audio interfaces allow haptic exploration of statistical line graphs accompanied by sound or speech, thus providing visually impaired people with access to graph exploration. Verbally assisted haptic graph exploration can be seen as a task-oriented collaborative activity between two partners, a haptic explorer and an observing assistant, each with individual preferences for the use of reference frames. The experimental findings reveal that haptic explorers' spatial reference frames are mostly induced by hand movements, leading to an action perspective rather than the conventional left-to-right spatiotemporal perspective. Moreover, the communicational goal may trigger a switch in perspective.
Affiliation(s)
- Özge Alaçam
- Department of Informatics, University of Hamburg, Vogt-Koelln-Str. 30, 22527 Hamburg, Germany
10
11
Alaşehir O, Çakır MP, Acartürk C, Baykal N, Akbulut U. URAP-TR: a national ranking for Turkish universities based on academic performance. Scientometrics 2014. DOI: 10.1007/s11192-014-1333-4.
12
Acartürk C. Towards a systematic understanding of graphical cues in communication through statistical graphs. Journal of Visual Languages & Computing 2014. DOI: 10.1016/j.jvlc.2013.11.006.