Usability problem description and the evaluator effect in usability testing.

Abstract

Previous usability evaluation method (UEM) comparison studies have noted an evaluator effect on problem detection in heuristic evaluation, with evaluators differing in the problems found and in problem severity judgments. There have been few studies of the evaluator effect in usability testing (UT), task-based testing with end-users. UEM comparison studies focus on counting usability problems detected, but we also need to assess the content of usability problem descriptions (UPDs) to more fully measure evaluation effectiveness. The goals of this research were to develop UPD guidelines, explore the evaluator effect in UT, and evaluate the usefulness of the guidelines for grading UPD content.

Ten guidelines for writing UPDs were developed by consulting usability practitioners through two questionnaires and a card sort. These guidelines are (briefly): be clear and avoid jargon, describe problem severity, provide backing data, describe problem causes, describe user actions, provide a solution, consider politics and diplomacy, be professional and scientific, describe your methodology, and help the reader sympathize with the user. A fourth study compared usability reports collected from 44 evaluators, both practitioners and graduate students, who watched the same 10-minute recording of a UT session. Three judges measured problem detection for each evaluator and graded the reports on how well they followed 6 of the UPD guidelines.

There was support for the existence of an evaluator effect, even when evaluators watched prerecorded sessions, with low to moderate individual thoroughness of problem detection across all/severe problems (22%/34%), reliability of problem detection (37%/50%), and reliability of severity judgments (57% for severe ratings). Practitioners received higher grades, averaged across the 6 guidelines, than students did, suggesting that the guidelines may be useful for grading reports. The grades for the guidelines were not correlated with thoroughness, suggesting that the guideline grades complement measures of problem detection.

A simulation of evaluators working in groups found a 34% increase in severe problems found when a second evaluator was added. The simulation also found that the thoroughness of individual evaluators would have been overestimated if the study had included a small number of evaluators. The final recommendations are to use multiple evaluators in UT, and to assess both problem detection and description when measuring evaluation effectiveness.
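The group simulation summarized in the final paragraph rests on a simple aggregation model: an evaluator's thoroughness is the fraction of the master problem list they report, and a group of evaluators is credited with the union of its members' detections. The Python sketch below is a minimal illustration of that calculation only; the detection data and the N_PROBLEMS and P_DETECT parameters are made-up assumptions for demonstration, not the dissertation's dataset or results.

    import random

    random.seed(0)

    N_EVALUATORS = 44   # number of evaluators in the study
    N_PROBLEMS = 20     # hypothetical size of the master problem list
    P_DETECT = 0.22     # hypothetical per-problem detection probability

    # detections[e] is the set of problem indices evaluator e reported (simulated here)
    detections = [
        {p for p in range(N_PROBLEMS) if random.random() < P_DETECT}
        for _ in range(N_EVALUATORS)
    ]

    def group_thoroughness(group):
        # Fraction of the master problem list found by at least one member of the group.
        found = set().union(*(detections[e] for e in group))
        return len(found) / N_PROBLEMS

    def mean_thoroughness(group_size, n_samples=2000):
        # Average thoroughness over randomly sampled groups of the given size.
        values = [
            group_thoroughness(random.sample(range(N_EVALUATORS), group_size))
            for _ in range(n_samples)
        ]
        return sum(values) / len(values)

    for k in (1, 2, 3, 5):
        print(f"group size {k}: mean thoroughness ~ {mean_thoroughness(k):.2f}")

Comparing the output for group sizes 1 and 2 shows the kind of gain from adding a second evaluator that the abstract reports, and averaging over many sampled groups illustrates why estimates based on only a few evaluators can overstate individual thoroughness.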

Bibliographic Record

  • Author: Capra, Miranda G.
  • Author Affiliation: Virginia Polytechnic Institute and State University.
  • Degree Grantor: Virginia Polytechnic Institute and State University.
  • Subjects: Engineering, Industrial; Computer Science
  • Degree: Ph.D.
  • Year: 2006
  • Pages: 292 p.
  • Total Pages: 292
  • Original Format: PDF
  • Language: eng
  • CLC Classification: General industrial technology; Automation and computer technology
  • Keywords
