International Conference on Artificial General Intelligence

What People Say? Web-Based Casuistry for Artificial Morality Experiments

Abstract

It can be said that none of the methods proposed so far for achieving artificial ethical reasoning is realistic, i.e., able to work outside very limited environments and scenarios. Whichever method one chooses, it will fail in many real-world situations, because providing ethical knowledge for every possible situation would be highly cost-inefficient. We believe that an autonomous moral agent should utilize existing resources to make a decision, or else leave the decision to humans. Inverse reinforcement learning has gathered interest as a possible solution for acquiring knowledge of human values. However, there are two basic difficulties with using a human expert as the source of exemplary behavior. The first derives from the fact that it is rather questionable whether one person or a few people (even qualified ethicists) can be trusted as safe role models. We propose an approach that instead refers to the maximal number of (currently available) similar situations and applies a majority-decision-based "common sense" model. The second problem lies in human beings' difficulties with living up to their words: surrendering to primed urges and cognitive biases and, in consequence, breaking moral rules. Our proposed solution is to use not behaviors but humans' declared reactions to the acts of others, in order to help a machine determine what is positive and what is negative feedback. In this paper we discuss how third-person opinions can be utilized, by means of machine reading and affect recognition, to model a safe moral agent, and how universal values might be discovered. We also present a simple web-mining system that achieved 85% agreement in moral judgment with human subjects.
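The abstract describes the mechanism only in prose, so a minimal sketch may help. The following Python code illustrates the majority-decision idea as described above: collect snippets reporting third-person reactions to an act, score each snippet's polarity against a small affect lexicon, and take a majority vote, abstaining (deferring to humans) when no majority emerges. The lexicon entries, function names, and example snippets are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter
from typing import Iterable

# Toy affect lexicon mapping reaction words to polarity.
# A real system would use full affect recognition over mined web text;
# these entries are illustrative only.
AFFECT_LEXICON = {
    "praised": +1, "thanked": +1, "admired": +1, "approved": +1,
    "condemned": -1, "criticized": -1, "punished": -1, "despised": -1,
}

def snippet_polarity(snippet: str) -> int:
    """Score one third-person reaction snippet: +1, -1, or 0 (no cue found)."""
    score = sum(v for w, v in AFFECT_LEXICON.items() if w in snippet.lower())
    return (score > 0) - (score < 0)

def moral_judgment(snippets: Iterable[str]) -> str:
    """Majority decision over declared reactions mined from the web.

    Returns 'positive', 'negative', or 'abstain' (leave the decision
    to humans) when no clear majority emerges.
    """
    votes = Counter(snippet_polarity(s) for s in snippets)
    pos, neg = votes[+1], votes[-1]
    if pos == neg:  # no majority: defer to humans
        return "abstain"
    return "positive" if pos > neg else "negative"

# Hypothetical snippets a web search for reactions to an act might return.
reactions = [
    "Neighbors praised her for returning the lost wallet.",
    "Everyone thanked the boy who helped the old man.",
    "The crowd condemned the driver who fled the scene.",
]
print(moral_judgment(reactions))  # -> positive
```

The abstain branch mirrors the paper's stated position that an agent unable to reach a confident judgment from existing resources should leave the decision to humans.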