Machine Agency, Moral Relevance, and Moral Agency

Abstract

Machines, including robots, have always been agents and are becoming increasingly autonomous agents. But autonomy isn't sufficient for moral agency. Machines can be described in intentional terms, using what Dennett calls 'the intentional stance'. This doesn't yet make them fully intentional agents, for as yet we only apply certain aspects or parts of the intentional stance to machines. There's no reason to think that we are (yet) developing genuinely moral machine or robot agents. There's a specific feature of decisions about the most morally weighty issues which means that we won't rightly think of them as having done the moral thinking required. But moral patiency, as we might call it, matters, too. The extent to which artefacts will be credited with moral agency will also be affected by the extent to which we think of them as capable of genuine suffering. Until we are willing to credit machines and other robot agents with the capacity for thought, intention and suffering, we won't really think of them as moral agents. Two matters that have been discussed in the literature here, the moral relevance of machines, and the 'neutrality thesis', are red herrings. The real question is whether machines can be credited with moral responsibility. Moral agency is a precondition for moral responsibility. Machines are morally relevant, and they are so at least partly in virtue of their being agents. But this doesn't make them moral agents. And the fact that the neutrality thesis is unacceptable doesn't indicate otherwise. As our technology develops, we may begin to think of machines as genuinely doing some of the psychological things we now think of them as incapable of doing. But we won't think of them in moral terms until we come to think of them as thinking about moral issues, and as knowing what's right and what's wrong as a result of such thinking. We can't tell whether that day will come.
