International Conference on User Modeling, Adaptation, and Personalization

Comparing and Combining Eye Gaze and Interface Actions for Determining User Learning with an Interactive Simulation



Abstract

This paper presents an experimental evaluation of eye gaze data as a source for modeling users' learning in Interactive Simulations (ISs). We compare the performance of classifier user models trained on gaze data alone vs. models trained on interface actions alone vs. models trained on the combination of these two sources of user interaction data. Our long-term goal is to build user models that can trigger adaptive support for students who do not learn well with ISs because of the often unstructured and open-ended nature of these environments. The test-bed for our work is the CSP applet, an IS for Constraint Satisfaction Problems (CSPs). Our findings show that including gaze data as an additional source of information in the CSP applet's user model significantly improves model accuracy compared to using interface actions or gaze data alone.
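The comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the features, labels, and classifier here are synthetic placeholders (the CSP applet's real logged gaze and action features are not reproduced), and it only shows the pattern of evaluating a classifier on each feature source separately and on their concatenation.

```python
# Hypothetical sketch: compare classifiers trained on gaze features alone,
# interface-action features alone, and their combination. All data here is
# synthetic; feature meanings are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # synthetic "students"

# Placeholder per-student features: e.g., gaze could stand in for fixation
# statistics, actions for interface-event counts/rates.
gaze = rng.normal(size=(n, 4))
actions = rng.normal(size=(n, 3))

# Synthetic binary label (high vs. low learner), loosely tied to both sources.
y = ((gaze[:, 0] + actions[:, 0] + rng.normal(scale=0.5, size=n)) > 0).astype(int)

def mean_cv_accuracy(X, y):
    """5-fold cross-validated accuracy of a simple classifier on features X."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()

for name, X in [("gaze only", gaze),
                ("actions only", actions),
                ("combined", np.hstack([gaze, actions]))]:
    print(f"{name}: {mean_cv_accuracy(X, y):.2f}")
```

On this synthetic data the combined feature set typically scores highest, mirroring the paper's qualitative finding, but the numbers themselves carry no meaning beyond illustrating the evaluation pattern.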


