Conference: International Florida Artificial Intelligence Research Society Conference

Writing Quality, Knowledge, and Comprehension Correlates of Human and Automated Essay Scoring



Abstract

Automated essay scoring tools are often criticized on the basis of construct validity. Specifically, it has been argued that computational scoring algorithms may be unaligned to higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider how and whether the scoring algorithms within an intelligent writing tutor correlate with measures of writing proficiency and students' general knowledge, reading comprehension, and vocabulary skill. Results indicate that the computational algorithms, although less attuned to knowledge and comprehension factors than human raters, were marginally related to such variables. Implications for improving automated scoring and intelligent tutoring of writing are briefly discussed.
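As a rough illustration of the correlational analysis the abstract describes (relating automated scores to human ratings and to knowledge, comprehension, and vocabulary measures), a minimal sketch in Python might look like the following. The data file and column names are hypothetical placeholders, not the study's actual variables.

    import pandas as pd
    from scipy.stats import pearsonr

    # Hypothetical dataset: one row per student essay, with the automated score,
    # the human rating, and individual-difference measures as columns.
    df = pd.read_csv("essay_measures.csv")

    correlates = ["human_rating", "general_knowledge",
                  "reading_comprehension", "vocabulary"]

    # Pearson correlation of the automated score with each measure.
    for col in correlates:
        r, p = pearsonr(df["automated_score"], df[col])
        print(f"automated_score vs {col}: r = {r:.2f}, p = {p:.3f}")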
