Analyzing cognitive presence in online courses using an artificial neural network.

Abstract

This work outlines the theoretical underpinnings, method, results, and implications of constructing a discussion list analysis tool that categorizes online, educational discussion list messages into levels of cognitive effort.

Purpose. The purpose of such a tool is to provide evaluative feedback to instructors who facilitate online learning, to researchers studying computer-supported collaborative learning, and to administrators interested in correlating objective measures of students' cognitive effort with other measures of student success. This work connects computer-supported collaborative learning, content analysis, and artificial intelligence.

Method. Broadly, the method employed is a content analysis in which the data from the analysis are modeled using artificial neural network (ANN) software. A group of human coders categorized online discussion list messages, and inter-rater reliability was calculated among them. That reliability figure serves as a benchmark for determining how well the ANN categorizes the same messages that the human coders categorized. The reliability between the ANN model and the group of human coders is compared to the reliability among the human coders themselves to determine how well the ANN performs relative to humans.

Findings. Two experiments were conducted in which ANN models were constructed to model the decisions of the human coders, and the experiments revealed that, under noisy, real-life circumstances, the ANN codes messages with near-human accuracy. In the first experiment, the reliability between the ANN model and the group of human coders, measured with Cohen's kappa, was 0.519, while the human reliability values ranged from 0.494 to 0.742 (M = 0.6). Improvements were then made to the human content analysis with the goal of increasing the reliability among coders. After these improvements, the humans coded messages with kappa agreement ranging from 0.816 to 0.879 (M = 0.848), and the kappa agreement between the ANN model and the group of human coders was 0.70.
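As background for the reliability comparison described above, the following is a minimal, illustrative Python sketch (not the dissertation's actual tooling) of how Cohen's kappa can be computed and used to benchmark ANN-human agreement against human-human agreement; the coders, messages, and category labels are hypothetical.

```python
# Illustrative sketch only: Cohen's kappa as used to compare an ANN coder's
# labels against human coders' labels. Category names and data are hypothetical.
from collections import Counter
from itertools import combinations

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed proportion of messages on which the two coders agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten discussion-list messages into levels of cognitive effort.
human_1 = ["low", "low", "med", "high", "med", "low", "high", "med", "low", "high"]
human_2 = ["low", "med", "med", "high", "med", "low", "high", "low", "low", "high"]
ann     = ["low", "low", "med", "high", "high", "low", "high", "med", "low", "med"]

# Human-human reliability is the benchmark; ANN-human reliability is judged against it.
human_kappas = [cohen_kappa(a, b) for a, b in combinations([human_1, human_2], 2)]
ann_kappas = [cohen_kappa(ann, h) for h in (human_1, human_2)]
print(f"human-human kappa: {sum(human_kappas) / len(human_kappas):.3f}")
print(f"ANN-human kappa:   {sum(ann_kappas) / len(ann_kappas):.3f}")
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred over simple accuracy when comparing coders; in the study the same comparison is made over a larger group of coders and message set.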

Bibliographic record

  • Author

    McKlin, Thomas E.

  • Affiliation

    Georgia State University.

  • Degree-granting institution: Georgia State University.
  • Subjects: Education, Technology of; Artificial Intelligence.
  • Degree: Ph.D.
  • Year: 2004
  • Pages: 216 p.
  • Total pages: 216
  • Format: PDF
  • Language: eng
