1. Jing M, Kadooka K, Franchak J, Kirkorian HL. The effect of narrative coherence and visual salience on children's and adults' gaze while watching video. J Exp Child Psychol 2023; 226:105562. [PMID: 36257254] [DOI: 10.1016/j.jecp.2022.105562]
Abstract
Low-level visual features (e.g., motion, contrast) predict eye gaze during video viewing. The current study investigated the effect of narrative coherence on the extent to which low-level visual salience predicts eye gaze. Eye movements were recorded as 4-year-olds (n = 20) and adults (n = 20) watched a cohesive versus random sequence of video shots from a 4.5-min full vignette from Sesame Street. Overall, visual salience was a stronger predictor of gaze in adults than in children, especially when viewing a random shot sequence. The impact of narrative coherence on children's gaze was limited to the short period of time surrounding cuts to new video shots. The discussion considers potential direct effects of visual salience as well as incidental effects due to overlap between salient features and semantic content. The findings are also discussed in the context of developing video comprehension.
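As a purely illustrative aside (not drawn from this paper), the following is a minimal Python sketch of one common way to score how well a low-level salience map predicts fixated locations: compare salience at fixated pixels against salience at random control pixels with an ROC-style statistic. The salience map, fixation coordinates, and frame size are simulated placeholders.

```python
# Hypothetical sketch: how well does a salience map predict fixation locations
# in a single video frame? All arrays below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)

salience_map = rng.random((480, 640))          # per-pixel salience for one frame
fix_y = rng.integers(0, 480, size=40)          # fixation coordinates (pixels)
fix_x = rng.integers(0, 640, size=40)

# Salience at fixated locations vs. at an equal number of random control locations
sal_fix = salience_map[fix_y, fix_x]
ctrl_y = rng.integers(0, 480, size=40)
ctrl_x = rng.integers(0, 640, size=40)
sal_ctrl = salience_map[ctrl_y, ctrl_x]

# ROC-style score: probability that a fixated pixel is more salient than a
# random control pixel (0.5 = chance, 1.0 = perfect prediction)
auc = np.mean(sal_fix[:, None] > sal_ctrl[None, :])
print(f"salience-fixation AUC: {auc:.2f}")
```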
Affiliations
- Mengguo Jing, Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
- Kellan Kadooka, Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
- John Franchak, Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
- Heather L Kirkorian, Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
2. Hutson JP, Chandran P, Magliano JP, Smith TJ, Loschky LC. Narrative Comprehension Guides Eye Movements in the Absence of Motion. Cogn Sci 2022; 46:e13131. [PMID: 35579883] [DOI: 10.1111/cogs.13131]
Abstract
Viewers' attentional selection while looking at scenes is affected by both top-down and bottom-up factors. However, when watching film, viewers typically attend to the movie similarly irrespective of top-down factors, a phenomenon we call the tyranny of film. A key difference between still pictures and film is that film contains motion, which is a strong attractor of attention and highly predictive of gaze during film viewing. The goal of the present study was to test if the tyranny of film is driven by motion. To do this, we created a slideshow presentation of the opening scene of Touch of Evil. Context condition participants watched the full slideshow. No-context condition participants did not see the opening portion of the scene, which showed someone placing a time bomb into the trunk of a car. In prior research, we showed that despite producing very different understandings of the clip, this manipulation did not affect viewers' attention (i.e., the tyranny of film), as both context and no-context participants were equally likely to fixate on the car with the bomb when the scene was presented as a film. The current study found that when the scene was shown as a slideshow, the context manipulation produced differences in attentional selection (i.e., it attenuated attentional synchrony). We discuss these results in the context of the Scene Perception and Event Comprehension Theory, which specifies the relationship between event comprehension and attentional selection in the context of visual narratives.
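For readers who want a concrete handle on "attentional synchrony", here is a hypothetical Python sketch of one simple per-frame index: the dispersion of gaze around the frame-wise centroid across viewers (lower dispersion means viewers look at the same place at the same time). This is not the authors' measure, and the gaze array is simulated.

```python
# Hypothetical sketch: per-frame inter-viewer gaze dispersion as a simple
# (inverse) index of attentional synchrony. Gaze data are simulated.
import numpy as np

rng = np.random.default_rng(1)
# gaze[viewer, frame, (x, y)] in screen pixels
gaze = rng.normal(loc=(640, 360), scale=80, size=(20, 300, 2))

centroid = gaze.mean(axis=0)                                       # mean gaze per frame
dispersion = np.linalg.norm(gaze - centroid, axis=2).mean(axis=0)  # pixels, per frame

print("mean inter-viewer gaze dispersion (px):", dispersion.mean().round(1))
```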
Affiliations
- John P Hutson, Department of Learning Sciences, Georgia State University
- Tim J Smith, Department of Psychological Sciences, Birkbeck, University of London
3. Levin DT, Salas JA, Wright AM, Seiffert AE, Carter KE, Little JW. The Incomplete Tyranny of Dynamic Stimuli: Gaze Similarity Predicts Response Similarity in Screen-Captured Instructional Videos. Cogn Sci 2021; 45:e12984. [PMID: 34170026] [DOI: 10.1111/cogs.12984]
Abstract
Although eye tracking has been used extensively to assess cognitions for static stimuli, recent research suggests that the link between gaze and cognition may be more tenuous for dynamic stimuli such as videos. Part of the difficulty in convincingly linking gaze with cognition is that in dynamic stimuli, gaze position is strongly influenced by exogenous cues such as object motion. However, tests of the gaze-cognition link in dynamic stimuli have been done on only a limited range of stimuli often characterized by highly organized motion. Also, analyses of cognitive contrasts between participants have mostly been limited to categorical contrasts among small numbers of participants, which may have limited the power to observe more subtle influences. We therefore tested for cognitive influences on gaze for screen-captured instructional videos, the contents of which participants were tested on. Between-participant scanpath similarity predicted between-participant similarity in responses on test questions, but with imperfect consistency across videos. We also observed that basic gaze parameters and measures of attention to centers of interest only inconsistently predicted learning, and that correlations between gaze and centers of interest defined by other-participant gaze and cursor movement did not predict learning. It therefore appears that the search for eye movement indices of cognition during dynamic naturalistic stimuli may be fruitful, but we also agree that the tyranny of dynamic stimuli is real, and that links between eye movements and cognition are highly dependent on task and stimulus properties.
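A hedged Python sketch of the general idea of relating between-participant scanpath similarity to between-participant response similarity: correlate the unique participant pairs of the two pairwise similarity matrices (a Mantel-style test without the permutation step). This is not the authors' pipeline; both matrices are simulated placeholders.

```python
# Hypothetical sketch: does gaze similarity between two participants predict
# how similarly they answer test questions? Matrices are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 30
scanpath_sim = rng.random((n, n))
scanpath_sim = (scanpath_sim + scanpath_sim.T) / 2   # symmetric pairwise similarity
response_sim = rng.random((n, n))
response_sim = (response_sim + response_sim.T) / 2

iu = np.triu_indices(n, k=1)                         # unique participant pairs
r, p = pearsonr(scanpath_sim[iu], response_sim[iu])
print(f"r = {r:.2f}, p = {p:.3f}")
```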
Affiliations
- Daniel T Levin, Department of Psychology and Human Development, Vanderbilt University
- Jorge A Salas, Department of Psychology and Human Development, Vanderbilt University
- Anna M Wright, Department of Psychology and Human Development, Vanderbilt University
- Kelly E Carter, Department of Psychology and Human Development, Vanderbilt University
- Joshua W Little, Department of Psychology and Human Development, Vanderbilt University
4. Drivers use active gaze to monitor waypoints during automated driving. Sci Rep 2021; 11:263. [PMID: 33420150] [PMCID: PMC7794576] [DOI: 10.1038/s41598-020-80126-2]
Abstract
Automated vehicles (AVs) will change the role of the driver, from actively controlling the vehicle to primarily monitoring it. Removing the driver from the control loop could fundamentally change the way that drivers sample visual information from the scene, and in particular, alter the gaze patterns generated when under AV control. To better understand how automation affects gaze patterns, this experiment used tightly controlled experimental conditions with a series of transitions from 'Manual' control to 'Automated' vehicle control. Automated trials were produced using either a 'Replay' of the driver's own steering trajectories or standard 'Stock' trials that were identical for all participants. Gaze patterns produced during Manual and Automated conditions were recorded and compared. Overall, the gaze patterns across conditions were very similar, but detailed analysis shows that drivers looked slightly further ahead (increased gaze time headway) during Automation, with only small differences between Stock and Replay trials. A novel mixture modelling method decomposed gaze patterns into two distinct categories and revealed that the gaze time headway increased during Automation. Further analyses revealed that while there was a general shift to look further ahead (and fixate the bend entry earlier) when under automated vehicle control, similar waypoint-tracking gaze patterns were produced during Manual driving and Automation. The consistency of gaze patterns across driving modes suggests that active-gaze models (developed for manual driving) might be useful for monitoring driver engagement during Automated driving, with deviations in gaze behaviour from what would be expected during manual control potentially indicating that a driver is not closely monitoring the automated system.
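To illustrate what a two-component decomposition of gaze time headway might look like in practice, here is a minimal Gaussian-mixture sketch. This is not the paper's model; the headway samples and the two modes are simulated placeholders.

```python
# Hypothetical sketch: decompose a gaze time-headway distribution into two
# categories with a two-component Gaussian mixture. Data are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# e.g., a "near road" mode around ~1 s and a "far waypoint" mode around ~2.5 s
headway = np.concatenate([rng.normal(1.0, 0.2, 500),
                          rng.normal(2.5, 0.4, 500)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(headway)
print("component means (s):", gmm.means_.ravel().round(2))
print("component weights:  ", gmm.weights_.round(2))
```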
5. Nakano T, Miyazaki Y. Blink synchronization is an indicator of interest while viewing videos. Int J Psychophysiol 2018; 135:1-11. [PMID: 30428333] [DOI: 10.1016/j.ijpsycho.2018.10.012]
Abstract
The temporal pattern of spontaneous blinks changes greatly depending on an individual's internal cognitive state. For instance, when several individuals watch the same video, blinks can be synchronized at attentional breakpoints. The present study examined the degree of this blink synchronization, as a reflection of interest level, while participants viewed various video clips. In the first experiment, participants interested in soccer, shogi (Japanese chess), or a specific musical group watched a video clip related to each category and rated their interest level after viewing. Results revealed that blink synchronization increased with a rise in interest level for the soccer and shogi video clips. Moreover, while blink synchronization increased when the soccer and music-group fans viewed their preferred video clips, synchronization decreased when viewing videos from the other categories, except among the shogi fans. In contrast, blink rates did not correlate with interest in the video content but instead varied with the number of shot transitions. In the second experiment, participants viewed a video in which a professional salesperson gave descriptions of several products for a few minutes each. When participants reported an interest in the product, their blinks were synchronized to the salesperson's blinks. However, when they felt uninterested, blink synchronization did not occur. These results suggest that blink synchronization could be used as an involuntary index to assess a person's interest.
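A minimal, hypothetical sketch of how blink synchronization across viewers could be indexed: mean pairwise correlation of smoothed blink trains, compared against a circular-shift baseline. This is not the authors' method, and all blink data below are simulated.

```python
# Hypothetical sketch: score blink synchronization across viewers and compare
# it to a time-shuffled baseline. All data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(4)
n_viewers, n_samples = 12, 3000            # e.g., 10 Hz samples over 5 minutes
blinks = (rng.random((n_viewers, n_samples)) < 0.01).astype(float)

# Smooth each blink train with a short boxcar to tolerate small timing offsets
kernel = np.ones(10) / 10
smooth = np.array([np.convolve(b, kernel, mode="same") for b in blinks])

def mean_pairwise_corr(x):
    c = np.corrcoef(x)
    iu = np.triu_indices(len(x), k=1)
    return c[iu].mean()

observed = mean_pairwise_corr(smooth)
shuffled = mean_pairwise_corr(np.array([np.roll(s, rng.integers(1, n_samples))
                                        for s in smooth]))
print(f"observed synchrony {observed:.3f} vs. shuffled baseline {shuffled:.3f}")
```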
Affiliations
- Tamami Nakano, Graduate School of Frontier Biosciences, Osaka University, Osaka, Japan; Graduate School of Medicine, Osaka University, Osaka, Japan; JST PRESTO, Japan
- Yuta Miyazaki, Graduate School of Information, Osaka University, Osaka, Japan
6. Li J, Oksama L, Hyönä J. Close coupling between eye movements and serial attentional refreshing during multiple-identity tracking. J Cogn Psychol 2018. [DOI: 10.1080/20445911.2018.1476517]
Affiliations
- Jie Li, School of Psychology, Beijing Sport University, Beijing, People’s Republic of China
- Lauri Oksama, Headquarters, National Defence University, Helsinki, Finland
- Jukka Hyönä, Department of Psychology, University of Turku, Turku, Finland
7. Meyerhoff HS, Schwan S, Huff M. Oculomotion mediates attentional guidance toward temporarily close objects. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1399950]
Affiliations
- Markus Huff, Department of Psychology, University of Tübingen, Tübingen, Germany; Department of Research Infrastructures, German Research Institute for Adult Education, Bonn, Germany
8. Hutson JP, Smith TJ, Magliano JP, Loschky LC. What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film. Cogn Res Princ Implic 2017; 2:46. [PMID: 29214207] [PMCID: PMC5698392] [DOI: 10.1186/s41235-017-0080-5]
Abstract
Film is ubiquitous, but the processes that guide viewers’ attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the eye movements and comprehension relationship similarly held in a classic film example, the famous opening scene of Orson Welles’ Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers’ comprehension by starting participants at different points in a film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension, but only produced modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This task manipulation created large differences in eye movements when compared to participants freely viewing the clip for comprehension. Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. The evidence provided by this experimental case study suggests that filmmakers’ belief in their ability to create systematic gaze behavior across viewers is confirmed, but that this does not indicate universally similar comprehension of the film narrative.
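As an illustration only (not the authors' analysis), one way such task-driven departures from typical free-viewing gaze could be quantified is to score each task-group viewer's frame-wise distance from the free-viewing gaze centroid and compare groups. The group labels, frame counts, and gaze arrays below are simulated placeholders.

```python
# Hypothetical sketch: does a viewing task pull gaze away from the locations
# that free-viewing participants typically fixate? Data are simulated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
free_view = rng.normal((640, 360), 50, size=(20, 300, 2))   # viewer x frame x (x, y)
map_task = rng.normal((640, 360), 120, size=(20, 300, 2))   # more idiosyncratic gaze

centroid = free_view.mean(axis=0)                           # frame-wise "norm" gaze

def mean_departure(group):
    # average pixel distance from the free-viewing centroid, one value per viewer
    return np.linalg.norm(group - centroid, axis=2).mean(axis=1)

t, p = ttest_ind(mean_departure(map_task), mean_departure(free_view))
print(f"t = {t:.2f}, p = {p:.3g}")
```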
Affiliations
- John P Hutson, Department of Psychological Sciences, Kansas State University, 492 Bluemont Hall, 1100 Mid-campus Dr, Manhattan, KS 66506, USA
- Tim J Smith, Department of Psychological Sciences, Birkbeck, University of London, Malet St, London WC1E 7HX, UK
- Joseph P Magliano, Department of Psychology, Northern Illinois University, 361 Psychology-Computer Science Building, DeKalb, IL 60115, USA
- Lester C Loschky, Department of Psychological Sciences, Kansas State University, 492 Bluemont Hall, 1100 Mid-campus Dr, Manhattan, KS 66506, USA
9. Loschky LC, Larson AM, Magliano JP, Smith TJ. What Would Jaws Do? The Tyranny of Film and the Relationship between Gaze and Higher-Level Narrative Film Comprehension. PLoS One 2015; 10:e0142474. [PMID: 26606606] [PMCID: PMC4659561] [DOI: 10.1371/journal.pone.0142474]
Abstract
What is the relationship between film viewers' eye movements and their film comprehension? Typical Hollywood movies induce strong attentional synchrony: most viewers look at the same things at the same time. Thus, we asked whether film viewers' eye movements would differ based on their understanding (the mental model hypothesis) or whether any such differences would be overwhelmed by viewers' attentional synchrony (the tyranny of film hypothesis). To investigate this question, we manipulated the presence/absence of prior film context and measured resulting differences in film comprehension and eye movements. Viewers watched a 12-second James Bond movie clip, ending just as a critical predictive inference should be drawn that Bond's nemesis, "Jaws," would fall from the sky onto a circus tent. The No-context condition saw only the 12-second clip, but the Context condition also saw the preceding 2.5 minutes of the movie before seeing the critical 12-second portion. Importantly, the Context condition viewers were more likely to draw the critical inference and were more likely to perceive coherence across the entire 6-shot sequence (as shown by event segmentation), indicating greater comprehension. Viewers' eye movements showed strong attentional synchrony in both conditions as compared to a chance-level baseline, with only small differences between conditions. Specifically, the Context condition viewers showed slightly, but significantly, greater attentional synchrony and lower cognitive load (as shown by fixation probability) during the critical first circus tent shot. Thus, overall, the results were more consistent with the tyranny of film hypothesis than the mental model hypothesis. These results suggest the need for a theory that encompasses processes from the perception to the comprehension of film.
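A hypothetical sketch of the kind of chance-level comparison described above: observed attentional synchrony is contrasted with a baseline in which each viewer's gaze trace is independently shuffled in time, breaking frame-by-frame agreement. This is not the paper's analysis; the simulated gaze loosely follows a shared moving point of interest.

```python
# Hypothetical sketch: attentional synchrony vs. a time-shuffled chance
# baseline. All gaze data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 2 * np.pi, 360)
target = np.stack([640 + 200 * np.cos(t), 360 + 100 * np.sin(t)], axis=1)  # (frames, 2)
gaze = target + rng.normal(0, 60, size=(24, 360, 2))        # viewer x frame x (x, y)

def synchrony(g):
    # negative mean distance to the per-frame gaze centroid (higher = more synchrony)
    return -np.linalg.norm(g - g.mean(axis=0), axis=2).mean()

observed = synchrony(gaze)
shuffled = synchrony(np.array([v[rng.permutation(gaze.shape[1])] for v in gaze]))
print(f"observed synchrony {observed:.1f} vs. chance baseline {shuffled:.1f}")
```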
Affiliations
- Lester C. Loschky, Department of Psychological Sciences, Kansas State University, Manhattan, KS, United States of America
- Adam M. Larson, Department of Psychology, University of Findlay, Findlay, OH, United States of America
- Joseph P. Magliano, Department of Psychology, Northern Illinois University, DeKalb, IL, United States of America
- Tim J. Smith, Department of Psychology, Birkbeck University of London, London, United Kingdom
10. Trained eyes: experience promotes adaptive gaze control in dynamic and uncertain visual environments. PLoS One 2013; 8:e71371. [PMID: 23951147] [PMCID: PMC3741152] [DOI: 10.1371/journal.pone.0071371]
Abstract
Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice, anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipatory gaze control while passively observing a visual scene? To tackle this, we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around ‘events’ that are critical for the scene context (i.e., hits and bounces) were analysed. Overall, we found that experience improved anticipatory eye movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e., ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations.
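For illustration only (not the authors' analysis), a short sketch of how anticipatory saccade accuracy toward an upcoming event location could be scored and compared between experience groups; all coordinates and group sizes below are simulated placeholders.

```python
# Hypothetical sketch: anticipatory saccade accuracy as the distance between
# the saccade landing position and the upcoming event location (e.g., a ball
# bounce), compared between experience groups. Data are simulated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(6)
event_loc = np.array([512.0, 400.0])                       # upcoming bounce (px)

landing_expert = event_loc + rng.normal(0, 40, size=(30, 2))
landing_novice = event_loc + rng.normal(0, 90, size=(30, 2))

err_expert = np.linalg.norm(landing_expert - event_loc, axis=1)
err_novice = np.linalg.norm(landing_novice - event_loc, axis=1)

t, p = ttest_ind(err_expert, err_novice)
print(f"expert error {err_expert.mean():.0f} px vs. novice {err_novice.mean():.0f} px (p = {p:.3g})")
```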
11. The hippocampus plays a role in the recognition of visual scenes presented at behaviorally relevant points in time: evidence from amnestic mild cognitive impairment (aMCI) and healthy controls. Cortex 2012; 49:1892-1900. [PMID: 23266013] [DOI: 10.1016/j.cortex.2012.11.001]
Abstract
When people perform an attentionally demanding target task at fixation, they also encode the surrounding visual environment, which serves as a context of the task. Here, we examined the role of the hippocampus in memory for target and context. Thirty-five patients with amnestic mild cognitive impairment (aMCI) and 35 healthy controls matched for age, gender, and education participated in the study. Participants completed visual letter detection and auditory tone discrimination target tasks, while also viewing a series of briefly presented urban and natural scenes. For the measurement of hippocampal and cerebral cortical volume, we utilized the FreeSurfer protocol using a Siemens Trio 3 T scanner. Before the quantification of brain volumes, hippocampal atrophy was confirmed by visual inspection in each patient. Results revealed intact letter recall and tone discrimination performances in aMCI patients, whereas they showed severe impairments in the recognition of scenes presented together with the targets. Patients with aMCI showed bilaterally reduced hippocampal volumes, but intact cortical volume, as compared with the controls. In controls and in the whole sample, hippocampal volume was positively associated with scene recognition when a target task was present. This relationship was observed in both visual and auditory conditions. Scene recognition and target tasks were not associated with executive functions. These results suggest that the hippocampus plays an essential role in the formation of memory traces of the visual environment when people concurrently perform a target task at behaviorally relevant points in time.