1. Orji FA, Vassileva J. Automatic modeling of student characteristics with interaction and physiological data using machine learning: A review. Front Artif Intell 2022; 5:1015660. DOI: 10.3389/frai.2022.1015660.
Abstract
Student characteristics affect their willingness and ability to acquire new knowledge. Assessing and identifying the effects of student characteristics is important for online educational systems. Machine learning (ML) is becoming significant in utilizing learning data for student modeling, decision support systems, adaptive systems, and evaluation systems. The growing need for dynamic assessment of student characteristics in online educational systems has led to the application of machine learning methods to modeling these characteristics. Being able to automatically model student characteristics during learning processes is essential for dynamic and continuous adaptation of teaching and learning to each student's needs. This paper provides a review of 8 years (from 2015 to 2022) of literature on the application of machine learning methods for automatic modeling of various student characteristics. The review found six student characteristics that can be modeled automatically and highlighted the data types, collection methods, and machine learning techniques used to model them. Researchers, educators, and designers of online educational systems will benefit from this study, as it can serve as a guide for decision-making when creating student models for adaptive educational systems. Such systems can detect students' needs during the learning process and adapt the learning interventions based on the detected needs. Moreover, the study revealed the progress made in the application of machine learning for automatic modeling of student characteristics and suggested new future research directions for the field. Therefore, machine learning researchers could also benefit from this study, as they can further advance this area by investigating new, unexplored techniques and finding new ways to improve the accuracy of the created student models.
2. Shin S, Chung S, Hong S, Elmqvist N. A Scanner Deeply: Predicting Gaze Heatmaps on Visualizations Using Crowdsourced Eye Movement Data. IEEE Trans Vis Comput Graph 2022; PP:1-11. PMID: 36166520. DOI: 10.1109/tvcg.2022.3209472.
Abstract
Visual perception is a key component of data visualization. Much prior empirical work uses eye movement as a proxy for understanding human visual perception. Diverse apparatus and techniques have been proposed to collect eye movements, but there is still no optimal approach. In this paper, we review 30 prior works on collecting eye movements along three axes: (1) the tracker technology used to measure eye movements; (2) the image stimulus shown to participants; and (3) the collection methodology used to gather the data. Based on this taxonomy, we employ a webcam-based eye-tracking approach using task-specific visualizations as the stimulus. The low technology requirement means that virtually anyone can participate, enabling us to collect data at large scale using crowdsourcing: approximately 12,000 samples in total. Choosing visualization images as the stimulus means that the eye movements will be specific to perceptual tasks associated with visualization. We use these data to propose SCANNER DEEPLY, a virtual eye-tracker model that, given an image of a visualization, generates a gaze heatmap for that image. We employ a computationally efficient yet powerful convolutional neural network for our model. We compare the results of our work with results from the DVS model and a neural network trained on the Salicon dataset. The analysis of our gaze patterns enables us to understand how users grasp the structure of visualized data. We also make our stimulus dataset of visualization images available as part of this paper's contribution.
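As a rough illustration of the input-to-output shape of such a virtual eye tracker (a visualization image in, a gaze heatmap out), the sketch below uses a small PyTorch encoder-decoder; the layer sizes and names are assumptions for illustration and do not reproduce the SCANNER DEEPLY architecture.

```python
# Minimal sketch of an image-to-heatmap convolutional model (illustrative only;
# not the SCANNER DEEPLY architecture from the paper). Assumes PyTorch is installed.
import torch
import torch.nn as nn

class GazeHeatmapNet(nn.Module):
    """Maps an RGB visualization image to a single-channel gaze heatmap."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # downsample and extract features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # upsample back to input resolution
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.decoder(self.encoder(x))
        # Squash to [0, 1] so the prediction can be compared with a ground-truth
        # fixation heatmap (e.g., via MSE or a distribution-based loss).
        return torch.sigmoid(h)

model = GazeHeatmapNet()
dummy = torch.randn(1, 3, 256, 256)              # one 256x256 visualization image
print(model(dummy).shape)                        # -> torch.Size([1, 1, 256, 256])
```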
3. Sluis F, Broek EL. Feedback beyond accuracy: Using eye-tracking to detect comprehensibility and interest during reading. J Assoc Inf Sci Technol 2022; 74:3-16. PMID: 37056352. PMCID: PMC10084433. DOI: 10.1002/asi.24657.
Abstract
Knowing what information a user wants is a paramount challenge for information science and technology. Implicit feedback is key to addressing this challenge, as it allows information systems to learn about a user's needs and preferences. The available feedback, however, tends to be limited, and its interpretation proves difficult. To tackle this challenge, we present a user study exploring whether tracking the eyes can unpack part of the complexity inherent in relevance and relevance decisions. The eye behavior of 30 participants reading 18 news articles was compared with their subjectively appraised comprehensibility and interest at the discourse level. Using linear regression models, the eye-tracking signal explained 49.93% (comprehensibility) and 30.41% (interest) of variance (p < .001). We conclude that eye behavior provides implicit feedback beyond accuracy that enables new forms of adaptation and interaction support for personalized information systems.
Affiliation(s)
- Frans Sluis
- Department of Communication, University of Copenhagen, Copenhagen, Denmark
- Egon L. Broek
- Department of Information and Computing Sciences, Utrecht University, Utrecht, the Netherlands
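The regression step described in the abstract above, predicting an appraisal score from eye-tracking features, can be sketched as follows with scikit-learn; the feature names and synthetic data are illustrative assumptions, not the study's feature set or results.

```python
# Illustrative sketch: regressing a subjective appraisal (e.g., comprehensibility)
# on per-article eye-tracking features. Features and data are made up; the paper's
# actual feature set and modeling details are not reproduced here.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 540                                   # e.g., 30 readers x 18 articles
# Hypothetical features: mean fixation duration, fixation count, regression rate
X = rng.normal(size=(n, 3))
y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("explained variance (R^2):", r2_score(y_te, model.predict(X_te)))
```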
4. Lallé S, Toker D, Conati C. Gaze-Driven Adaptive Interventions for Magazine-Style Narrative Visualizations. IEEE Trans Vis Comput Graph 2021; 27:2941-2952. PMID: 31831427. DOI: 10.1109/tvcg.2019.2958540.
Abstract
In this article, we investigate the value of gaze-driven adaptive interventions to support the processing of textual documents with embedded visualizations, i.e., Magazine-Style Narrative Visualizations (MSNVs). These interventions are provided dynamically by highlighting relevant data points in the visualization when the user reads related sentences in the MSNV text, as detected by an eye tracker. We conducted a user study in which participants read a set of MSNVs with our interventions, and compared their performance and experience with those of participants who received no interventions. Our work extends previous findings by showing that dynamic, gaze-driven interventions can be delivered based on reading behaviors in MSNVs, a widespread form of document that had not previously been considered for gaze-driven adaptation. Next, we found that the interventions significantly improved the performance of users with low levels of visualization literacy, i.e., the users who need help the most due to their lower ability to process and understand data visualizations. High-literacy users, however, were not affected by the interventions, providing initial evidence that gaze-driven interventions can be further improved by personalizing them to users' levels of visualization literacy.
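The intervention mechanism described above boils down to mapping the sentence currently being read (detected from fixations) to the data points it references and highlighting them. A schematic sketch of that dispatch logic is below; the AOI coordinates, sentence-to-mark mapping, and highlight call are hypothetical placeholders, not the authors' implementation.

```python
# Schematic sketch of gaze-driven highlighting: if a fixation lands inside a
# sentence's area of interest (AOI), highlight the visualization marks that the
# sentence references. AOIs, the mapping, and highlight() are hypothetical.
from dataclasses import dataclass

@dataclass
class AOI:
    sentence_id: int
    x0: float
    y0: float
    x1: float
    y1: float                                    # screen-space bounding box

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Which visualization marks each sentence refers to (hypothetical example).
SENTENCE_TO_MARKS = {0: ["bar_2010", "bar_2011"], 1: ["bar_2015"]}
AOIS = [AOI(0, 50, 100, 700, 130), AOI(1, 50, 140, 700, 170)]

def highlight(mark_ids):
    print("highlighting:", mark_ids)             # stand-in for the real renderer call

def on_fixation(x: float, y: float):
    """Called by the eye tracker for each detected fixation."""
    for aoi in AOIS:
        if aoi.contains(x, y):
            highlight(SENTENCE_TO_MARKS.get(aoi.sentence_id, []))
            return

on_fixation(200, 115)   # fixation on sentence 0 -> highlights its linked bars
```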
5. Conati C, Lallé S, Rahman MA, Toker D. Comparing and Combining Interaction Data and Eye-tracking Data for the Real-time Prediction of User Cognitive Abilities in Visualization Tasks. ACM Trans Interact Intell Syst 2020. DOI: 10.1145/3301400.
Abstract
Previous work has shown that some user cognitive abilities relevant for processing information visualizations can be predicted from eye-tracking data. Performing this type of user modeling is important for devising visualizations that can detect a user's abilities and adapt accordingly during the interaction. In this article, we extend previous user modeling work by investigating for the first time interaction data as an alternative source for predicting cognitive abilities during visualization processing when it is not feasible to collect eye-tracking data. We present an extensive comparison of user models based solely on eye-tracking data, solely on interaction data, and on a combination of the two. Although eye-tracking data generate the most accurate predictions, results show that interaction data can still outperform a majority-class baseline, meaning that adaptation for interactive visualizations could be enabled using interaction data alone when eye tracking is not feasible. Furthermore, we found that interaction data can predict several cognitive abilities with better accuracy at the very beginning of the task than eye-tracking data, which is valuable for delivering adaptation early in the task. We also extend previous work by examining the value of multimodal classifiers combining interaction data and eye-tracking data, with promising results for some of our target user cognitive abilities. Next, we contribute to previous work by extending the type of visualizations considered and the set of cognitive abilities that can be predicted from either eye-tracking data or interaction data. Finally, we evaluate how noise in gaze data impacts prediction accuracy and find that retaining rather noisy gaze datapoints can yield equal or even better predictions than discarding them, a novel and important contribution for devising adaptive visualizations in real settings where eye-tracking data are typically noisier than in laboratory settings.
Affiliation(s)
- Dereck Toker
- University of British Columbia, Vancouver, Canada
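The comparison described in the abstract, classifiers trained on gaze features, on interaction features, and on their combination, each checked against a majority-class baseline, can be sketched as follows; the feature dimensions, labels, and classifier choice are synthetic stand-ins rather than the paper's data or models.

```python
# Sketch of the unimodal-vs-multimodal comparison: classifiers trained on gaze
# features, interaction features, and both, versus a majority-class baseline.
# Feature dimensions and labels are synthetic; not the paper's data or models.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
gaze = rng.normal(size=(n, 20))            # hypothetical gaze summary features
inter = rng.normal(size=(n, 10))           # hypothetical interaction-log features
y = (gaze[:, 0] + 0.5 * inter[:, 0] > 0).astype(int)   # e.g., high vs low ability

feature_sets = {
    "gaze only": gaze,
    "interaction only": inter,
    "combined": np.hstack([gaze, inter]),
}
baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), gaze, y, cv=10)
print(f"majority-class baseline: {baseline.mean():.2f}")
for name, X in feature_sets.items():
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)
    print(f"{name}: {acc.mean():.2f}")
```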
6. Seo J, Laine TH, Sohn KA. An Exploration of Machine Learning Methods for Robust Boredom Classification Using EEG and GSR Data. Sensors (Basel) 2019; 19:E4561. PMID: 31635194. PMCID: PMC6832442. DOI: 10.3390/s19204561.
Abstract
In recent years, affective computing has been actively researched to provide a higher level of emotion-awareness. Numerous studies have been conducted to detect the user's emotions from physiological data. Among a myriad of target emotions, boredom, in particular, has been suggested to cause not only medical issues but also challenges in various facets of daily life. However, to the best of our knowledge, no previous studies have used electroencephalography (EEG) and galvanic skin response (GSR) together for boredom classification, although these data hold potential features for emotion classification. To investigate the combined effect of these features on boredom classification, we collected EEG and GSR data from 28 participants using off-the-shelf sensors. During data acquisition, we used a set of stimuli comprising a video clip designed to elicit boredom and two other video clips of entertaining content. The collected samples were labeled based on the participants' questionnaire-based testimonies on experienced boredom levels. Using the collected data, we initially trained 30 models with 19 machine learning algorithms and selected the top three candidate classifiers. After tuning the hyperparameters, we validated the final models through 1000 iterations of 10-fold cross-validation to increase the robustness of the test results. Our results indicated that a Multilayer Perceptron model performed best, with a mean accuracy of 79.98% (AUC: 0.781). The results also revealed a correlation between boredom and the combined EEG and GSR features. These results can be useful for building accurate affective computing systems and understanding the physiological properties of boredom.
Affiliation(s)
- Jungryul Seo
- Department of Computer Engineering, Ajou University, Suwon 16499, Korea.
- Teemu H Laine
- Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Skellefteå 93187, Sweden.
- Kyung-Ah Sohn
- Department of Computer Engineering, Ajou University, Suwon 16499, Korea.
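The evaluation protocol summarized above, an MLP classifier on concatenated EEG and GSR features scored with repeated 10-fold cross-validation, might be sketched as follows; the feature matrix, repetition count, and hyperparameters are placeholders, not the study's tuned setup.

```python
# Sketch of the evaluation protocol: an MLP classifier on concatenated EEG+GSR
# features, scored with repeated stratified 10-fold cross-validation.
# Data and hyperparameters are placeholders, not the study's tuned model.
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 280
eeg = rng.normal(size=(n, 32))                   # hypothetical EEG band-power features
gsr = rng.normal(size=(n, 4))                    # hypothetical GSR features
X = np.hstack([eeg, gsr])
y = rng.integers(0, 2, size=n)                   # bored (1) vs not bored (0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
# The study repeats 10-fold CV far more times; 5 repeats keeps the sketch fast.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over {len(scores)} folds: {scores.mean():.3f}")
```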
7. Hayashi Y, Seta K, Ikeda M. Development of a support system for measurement and analysis of thinking processes based on a metacognitive interpretation framework: a case study of dissolution of belief conflict thinking processes. Res Pract Technol Enhanc Learn 2018; 13:21. PMID: 30595749. PMCID: PMC6297165. DOI: 10.1186/s41039-018-0091-y.
Abstract
The ability for metacognitive thought, or "thinking about thinking," is recognized as an increasingly important skill for the future enrichment of social life. However, this skill is difficult to teach because it involves implicit cognitive activities that cannot be perceived by an outside observer. In this study, we propose an interpretation framework for metacognition as one approach to examining metacognitive thinking processes. This framework provides the design principles for a system that gives metacognitive interpretations of gaze behaviors and thought operation actions, and it offers a common basis for sharing and comparing knowledge derived from analysis results. As an example of framework-based system development, we construct a thinking externalization application and a thinking analysis support system built around the task of dissolving belief conflict. We also demonstrate an example analysis of thinking about belief conflict, derived from lower-level and higher-level thinking interpretation rules. The results obtained with the defined interpretation rules show that the desired behavior occurred, demonstrating the possibility of capturing the thought process. Realizing the series of phases in the proposed framework contributes to the feasibility of grasping the metacognition process and accumulating knowledge about it.
Affiliation(s)
- Yuki Hayashi
- Graduate School of Humanities and Sustainable System Sciences, Osaka Prefecture University, 1-1 Gakuen-cho Naka-ku Sakai-shi, Osaka, 599-8531 Japan
- Kazuhisa Seta
- Graduate School of Humanities and Sustainable System Sciences, Osaka Prefecture University, 1-1 Gakuen-cho Naka-ku Sakai-shi, Osaka, 599-8531 Japan
- Mitsuru Ikeda
- Graduate School of Advanced Science and Technology, Japan Advanced Institute of Science and Technology, Nomi, Japan
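The layered interpretation rules described above, mapping observed gaze behaviors and thought operations first to lower-level and then to higher-level metacognitive interpretations, can be illustrated with a toy rule pipeline; every event name and rule below is invented for illustration and does not come from the paper.

```python
# Toy sketch of layered interpretation rules: low-level observations (gaze events,
# thought operations) are mapped to lower-level interpretations, which are then
# combined into higher-level metacognitive interpretations. All labels are invented.
from typing import List

def lower_level(events: List[str]) -> List[str]:
    """Map raw observation events to lower-level interpretations."""
    out = []
    for e in events:
        if e == "gaze_on_own_claim":
            out.append("re-examining own belief")
        elif e == "gaze_on_partner_claim":
            out.append("attending to conflicting belief")
        elif e == "edit_claim_node":
            out.append("revising externalized thought")
    return out

def higher_level(interpretations: List[str]) -> List[str]:
    """Combine lower-level interpretations into higher-level ones."""
    out = []
    if ("attending to conflicting belief" in interpretations
            and "revising externalized thought" in interpretations):
        out.append("attempting to dissolve belief conflict")
    return out

events = ["gaze_on_partner_claim", "edit_claim_node"]
low = lower_level(events)
print(low, "->", higher_level(low))
```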
8. DeFalco JA, Rowe JP, Paquette L, Georgoulas-Sherry V, Brawner K, Mott BW, Baker RS, Lester JC. Detecting and Addressing Frustration in a Serious Game for Military Training. Int J Artif Intell Educ 2017. DOI: 10.1007/s40593-017-0152-1.
9.
10.
11. Conati C, Gutica M. Interaction with an Edu-Game: A Detailed Analysis of Student Emotions and Judges’ Perceptions. Int J Artif Intell Educ 2016; 26:975-1010. DOI: 10.1007/s40593-015-0081-9.
12. Andreu-Perez J, Solnais C, Sriskandarajah K. EALab (Eye Activity Lab): a MATLAB Toolbox for Variable Extraction, Multivariate Analysis and Classification of Eye-Movement Data. Neuroinformatics 2016; 14:51-67. DOI: 10.1007/s12021-015-9275-4.
13. Harley JM, Azevedo R. Toward a Feature-Driven Understanding of Students’ Emotions during Interactions with Agent-Based Learning Environments. International Journal of Gaming and Computer-Mediated Simulations 2014. DOI: 10.4018/ijgcms.2014070102.
Abstract
This selective review synthesizes and draws recommendations from the fields of affective computing, intelligent tutoring systems, and psychology to describe and discuss the emotions that learners report experiencing while interacting with agent-based learning environments (ABLEs). Theoretically driven explanations are provided that describe the relative effectiveness of different ABLE features in fostering adaptive emotions (e.g., engagement, curiosity) versus non-adaptive emotions (e.g., frustration, boredom) across six different environments. This review provides an analytical lens for evaluating and improving research with ABLEs by identifying specific system features and their relationship with learners' appraisals and emotions.
Affiliation(s)
- Jason M. Harley
- Computer Science and Operations, University of Montréal, Montréal, Canada & McGill University, Department of Educational and Counseling Psychology, Montréal, Canada
- Roger Azevedo
- Department of Psychology, North Carolina State University, Raleigh, NC, USA