Machines, including robots, have always been agents, and they are becoming increasingly autonomous agents. But autonomy isn't sufficient for moral agency. Machines can be described in intentional terms, using what Dennett calls 'the intentional stance'. That doesn't yet make them fully intentional agents, for as yet we apply only certain aspects of the intentional stance to machines. There's no reason to think that we are (yet) developing genuinely moral machine agents. Decisions about the most morally weighty issues have a specific feature which means that we won't rightly regard machines as having done the moral thinking such decisions require. But moral patiency, as we might call it, matters too: the extent to which artefacts are credited with moral agency will also be affected by the extent to which we think of them as capable of genuine suffering. Until we are willing to credit robots and other machines with the capacity for thought, intention and suffering, we won't really think of them as moral agents. Two matters that have been discussed in the literature, the moral relevance of machines and the 'neutrality thesis', are red herrings. The real question is whether machines can be credited with moral responsibility, and moral agency is a precondition for moral responsibility. Machines are morally relevant, at least partly in virtue of their being agents; but that doesn't make them moral agents, and the fact that the neutrality thesis is unacceptable doesn't indicate otherwise. As our technology develops, we may begin to think of machines as genuinely doing some of the psychological things we now take them to be incapable of doing. But we won't think of them in moral terms until we come to think of them as thinking about moral issues, and as knowing what's right and what's wrong as a result of such thinking. We can't tell whether that day will come.