IEEE Transactions on Human-Machine Systems

Models of Trust in Human Control of Swarms With Varied Levels of Autonomy



Abstract

In this paper, we study human trust and its computational models in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We implement three LOAs: manual, mixed-initiative (MI), and fully autonomous. While the swarm in the MI LOA is controlled collaboratively by a human operator and an autonomous search algorithm, the swarms in the manual and autonomous LOAs are directed entirely by the human and by the search algorithm, respectively. From user studies, we find that humans tend to base their decisions on the physical characteristics of the swarm rather than on its performance, since the swarm's task performance is not clearly perceivable by humans. Based on this analysis, we formulate trust as a Markov decision process whose state space includes the factors affecting trust, and we develop variations of the trust model for the different LOAs. We employ an inverse reinforcement learning algorithm to learn the operator's behaviors from demonstrations, and the learned behaviors are then used to predict human trust. Compared to an existing model, our models reduce the prediction error by up to 39.6%, 36.5%, and 28.8% in the manual, MI, and autonomous LOAs, respectively.
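To make the pipeline described in the abstract concrete, below is a minimal Python sketch of the general recipe: a small trust MDP whose states are operator-visible swarm characteristics, plus maximum-entropy inverse reinforcement learning to recover a reward from operator demonstrations. Everything here (the connectivity/spread features, their discretization, the toy dynamics, the rely/intervene action set, and the trust readout) is an illustrative assumption, not the authors' implementation; the paper's LOA-specific model variants are not captured.

```python
import numpy as np

# A minimal sketch, assuming: trust-relevant state = discretized swarm
# connectivity and spread (two operator-visible physical characteristics),
# actions = {rely on autonomy, intervene}, and max-entropy IRL with
# Monte Carlo feature matching. All names and dynamics are stand-ins.
N_LEVELS = 4
STATES = [(c, s) for c in range(N_LEVELS) for s in range(N_LEVELS)]
N_S, N_A = len(STATES), 2
RELY, INTERVENE = 0, 1
rng = np.random.default_rng(0)

# phi(s): normalized connectivity, normalized spread, bias term.
PHI = np.array([[c / (N_LEVELS - 1), s / (N_LEVELS - 1), 1.0]
                for (c, s) in STATES])

def transition(s_idx, a):
    """Toy deterministic dynamics: intervening raises connectivity and
    reduces spread by one level; relying leaves the swarm as-is."""
    c, s = STATES[s_idx]
    if a == INTERVENE:
        c, s = min(c + 1, N_LEVELS - 1), max(s - 1, 0)
    return STATES.index((c, s))

def soft_policy(theta, gamma=0.9, iters=80):
    """Soft (max-entropy) value iteration; returns pi(a|s)."""
    r = PHI @ theta
    V = np.zeros(N_S)
    for _ in range(iters):
        Q = np.array([[r[s] + gamma * V[transition(s, a)]
                       for a in range(N_A)] for s in range(N_S)])
        m = Q.max(axis=1)
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))
    return np.exp(Q - V[:, None])

def rollout_features(pi, starts, horizon=8, n_roll=30):
    """Monte Carlo estimate of expected state features under pi."""
    total, count = np.zeros(PHI.shape[1]), 0
    for s0 in starts:
        for _ in range(n_roll):
            s = s0
            for _ in range(horizon):
                total, count = total + PHI[s], count + 1
                s = transition(s, rng.choice(N_A, p=pi[s]))
    return total / count

def maxent_irl(demos, lr=0.2, epochs=150):
    """Fit reward weights theta so the soft policy's expected features
    match the feature counts observed in the demonstrations."""
    emp = np.mean([PHI[s] for traj in demos for (s, _) in traj], axis=0)
    starts = [traj[0][0] for traj in demos]
    theta = np.zeros(PHI.shape[1])
    for _ in range(epochs):
        pi = soft_policy(theta)
        theta += lr * (emp - rollout_features(pi, starts))
    return theta, soft_policy(theta)

# Synthetic "operator demonstrations": the operator relies on the
# swarm only when its connectivity is high (an assumed pattern).
demos = [[(s, RELY if STATES[s][0] >= 2 else INTERVENE)]
         for s in range(N_S)]
theta, pi = maxent_irl(demos)

# Proxy for predicted trust in a state: probability that the learned
# operator policy chooses to rely on the swarm's autonomy there.
trust = pi[:, RELY]
print({STATES[s]: round(float(trust[s]), 2) for s in range(0, N_S, 5)})
```

The rely-probability readout mirrors the abstract's premise that behaviors learned by IRL serve as the trust predictor; the paper's actual state space, reward features, and prediction rule will differ from this sketch.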
