I am preparing for a new course this semester on philosophy and cognitive science. In the third and final part of the class, we are going to read and discuss Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen (OUP 2009). Needless to say, I am looking forward to that stretch of the course. In the meantime, reading through some of the recent work in robot ethics got me thinking about compatibilist (and semi-compatibilist) accounts of moral desert that appeal to something along the lines of Fischer and Ravizza's reasons responsive mechanisms as a way of grounding moral responsibility even in the face of determinism (and even, perhaps, in the absence of free will).
Wallach and Allen talk about machines that have the capacity to run "ethical subroutines." These subroutines are sensitive to morally salient features of various situations and circumstances--e.g., people's emotions as expressed via their facial expressions. As such, they could enable a robot to navigate our moral landscape by making computational decisions that are informed by moral principles--e.g., never harm humans unless harm is unavoidable, in which case, minimize harm--and that are sensitive to morally relevant features of the world. These are so-called "moral machines" (see some nice popular press stuff on related issues in robot ethics here, here, and here), and Wallach and Allen plausibly claim that this kind of machine morality comes in degrees.
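Wallach and Allen don't commit to any particular implementation, but just to fix ideas, here is a purely illustrative sketch of what a simple, rule-based ethical subroutine might look like. Everything in it--the feature names, the harm estimates, the decision rule--is my own made-up example, not anything from the book:

```python
# Purely illustrative sketch of a rule-based "ethical subroutine."
# The feature names, harm estimates, and decision rule are hypothetical;
# Wallach and Allen do not commit to any particular implementation.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float   # estimated harm to humans (0 = none)
    goal_value: float      # how well the action serves the robot's task

def detect_distress(facial_expression: str) -> bool:
    """Stand-in for a perception module sensitive to a morally salient
    feature of the situation (here, a person's expressed emotion)."""
    return facial_expression in {"fear", "pain", "anger"}

def ethical_subroutine(actions: list[Action], facial_expression: str) -> Action:
    """Apply the principle: never harm humans unless harm is unavoidable;
    if it is unavoidable, minimize it."""
    harmless = [a for a in actions if a.expected_harm == 0]
    if harmless:
        if detect_distress(facial_expression):
            # A morally salient cue shifts the choice toward caution.
            return min(harmless, key=lambda a: a.goal_value)
        return max(harmless, key=lambda a: a.goal_value)
    # Harm is unavoidable, so minimize it.
    return min(actions, key=lambda a: a.expected_harm)

options = [Action("proceed", expected_harm=0.2, goal_value=1.0),
           Action("wait", expected_harm=0.0, goal_value=0.3)]
print(ethical_subroutine(options, facial_expression="fear").name)  # -> wait
```

The interesting question, of course, is whether a mechanism like this--scaled up however far you like--counts as a reasons-responsive mechanism or merely simulates one.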
I have two questions for the readers:
First, are these kinds of moral machines--with their ethical subroutines--morally responsible if we adopt a reasons responsive view of agency and desert? Or would we just say that the programmer is responsible for the subroutine? If the latter, how do we define "reasons responsive mechanism" in a way that can't be boiled down computationally into something like a mechanism that runs an ethical subroutine? Second, can people point me in the direction of any recent work that addresses these sorts of issues at the intersection of machine ethics and moral agency? See below the fold for some papers that will now be on the reading list!
For those who are curious, I searched PhilPapers for "robot" + "moral responsibility"--which yielded the following list:
- Thomas Hellström (2013). On the Moral Responsibility of Military Robots. Ethics and Information Technology 15 (2):99-107. This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce the concept autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing (...)
- John P. Sullins (2006). When is a Robot a Moral Agent. International Review of Information Ethics 6 (12):23-30. In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that, it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators (...)
- Duncan Purves, Ryan Jenkins & Bradley J. Strawser (2015). Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Ethical Theory and Moral Practice 18 (4):851-872. We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain (...)
- Hutan Ashrafian (2015). Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights. Science and Engineering Ethics 21 (2):317-326. The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies have been considered to merit rights, however these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernable next-step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives of moral responsibility for artificial intelligence and robotics. A contrast to the moral status of (...)
- Dante Marino & Guglielmo Tamburrini (2006). Learning Robots and Human Responsibility. International Review of Information Ethics 6:46-51. Epistemic limitations concerning prediction and explanation of the behaviour of robots that learn from experience are selectively examined by reference to machine learning methods and computational theories of supervised inductive learning. Moral responsibility and liability ascription problems concerning damages caused by learning robot actions are discussed in the light of these epistemic limitations. In shaping responsibility ascription policies one has to take into account the fact that robots and softbots - by combining learning with autonomy, pro-activity, (...)
- Angela Coventry & Joshua Fost (2013). Remaking Responsibility: Complexity and Scattered Causes in Human Agency. Proceedings of the 1st International Conference on Philosophy: Yesterday, Today, and Tomorrow 1. Contrary to intuitions that human beings are free to think and act with “buck-stopping” freedom, philosophers since Holbach and Hume have argued that universal causation makes free will nonsensical. Contemporary neuroscience has strengthened their case and begun to reveal subtle and counterintuitive mechanisms in the processes of conscious agency. Although some fear that determinism undermines moral responsibility, the opposite is true: free will, if it existed, would undermine coherent systems of justice. Moreover, deterministic views of human choice clarify (...)
- Peter M. Asaro (2006). What Should We Want From a Robot Ethic. International Review of Information Ethics 6 (12):9-16. There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that (...)
- John Danaher (forthcoming). Robots, Law and the Retribution Gap. Ethics and Information Technology. We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap (...)
- Adriano Fabris, Sergio Bartolommei & Edoardo Datteri (2007). Quale etica per la robotica? Teoria 27 (2):7-17. First of all, in this paper we provide some clarifications on the several meanings of the term ‘ethics’, above all in the light of contemporary discussions on this matter. Then we analyze an important ethical concept, i.e. the concept of moral responsibility, for the sake of clarifying some problems concerning the human-robot relationship. Finally, we try to develop a well defined pattern of “ethics of responsibility” in order to give a general background for resolving concrete dilemmas (...)
I also searched PhilPapers for "robot" + "free will"--which yielded the following list:
- Matej Hoffmann & Vincent C. Müller (2014). Trade-Offs in Exploiting Body Morphology for Control: From Simple Bodies and Model-Based Control to Complex Ones with Model-Free Distributed Control Schemes. In Helmut Hauser, Rudolf M. Füchslin & Rolf Pfeifer (eds.), Opinions and Outlooks on Morphological Computation. E-Book. pp. 185-194. Tailoring the design of robot bodies for control purposes is implicitly performed by engineers, however, a methodology or set of tools is largely absent and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or "soft" bodies. These carry substantial potential regarding their exploitation for control – sometimes referred to as "morphological computation" in the sense of (...)
- Angela Coventry & Joshua Fost (2013). Remaking Responsibility: Complexity and Scattered Causes in Human Agency. Proceedings of the 1st International Conference on Philosophy: Yesterday, Today, and Tomorrow 1. Contrary to intuitions that human beings are free to think and act with “buck-stopping” freedom, philosophers since Holbach and Hume have argued that universal causation makes free will nonsensical. Contemporary neuroscience has strengthened their case and begun to reveal subtle and counterintuitive mechanisms in the processes of conscious agency. Although some fear that determinism undermines moral responsibility, the opposite is true: free will, if it existed, would undermine coherent systems of justice. Moreover, deterministic views of (...)
- Russell Daw & Torin Alter (2001). Free Acts and Robot Cats. Philosophical Studies 102 (3):345-57. Heller proposes that ‘free action’ is subject to the causal theory of reference and thus that the essential nature of free actions can be discovered only by empirical investigation, not by conceptual analysis. Heller’s proposal, if true, would have significant philosophical implications. Consider the enduring issue we will call the Compatibility Issue: whether the thesis of determinism is logically compatible with the claim that (...)

But I assume more work has been done on this front! Thanks in advance for your thoughts and suggestions!
Thanks for this, Thomas. One thing that occurred to me is the issue of how a machine enters the moral sphere in the first place, and I have written about this in a pop culture anthology in the context of discussing Spielberg's movie AI:
https://www.amazon.com/dp/B002TOJHUO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
My thesis is pretty simple: entry into and acceptance by some social context is everything for an entity to be regarded as morally significant, and AI does a good job (as do some other movies) of doing this, at least for the audience. One thing AI focuses on that is different from other movies I'm aware of is a process of imprinting as the basis for such entry.
I'm nowhere close to claiming authority here--I'm not a phil/mind person--so take this FWIW and with caution. I will email you my chapter; my thought here is that a film might jog students' thinking.
Posted by: V. Alan White | 01/07/2017 at 08:50 PM
Alan,
Thanks, please do send along the chapter. I, too, was a fan of the movie AI. If we had more time in the class, there are tons of movies we could watch. That said, I should say a bit more about my admittedly vague questions in the post. My thoughts were as follows: If we want to push responsibility for ethical subroutines back to the people who wrote them rather than blaming the machines who run them, then I want to let genetics, evolution, and socialization stand in for the programmer in the case of humans. How are the programmers themselves responsible for their own ethical subroutines? After all, the ethical subroutines of the programmers dictate how they decide to write the ethical subroutines for the machines.
Posted by: Thomas Nadelhoffer | 01/08/2017 at 06:02 AM
Thomas--
Apologies and thanks--I read your post more broadly than that. I'll give the more restricted thesis a bit more thought and see if I can help. But again, I'm a bit out of my comfort zone here.
Posted by: V. Alan White | 01/08/2017 at 12:08 PM
Joshua Shepherd has the x-phi paper showing that people think that robots can have free will (and, I think, be responsible) if they behave just like us, but only if they are described as being (phenomenally) conscious. There are also some other interesting results (e.g., about whether people say such robots are possible).
This post is really timely for me, since in my intro class I always do a trial of a robot (who kills his creator) to discuss theories of mind and free will/responsibility. I usually just use standard stuff (Searle vs. Dennett, Galen Strawson, Wolf, etc.), but if anyone knows of more useful stuff from the readings Thomas found or others *that would be accessible for intro students*, let me know!
Posted by: Eddy Nahmias | 01/08/2017 at 06:05 PM
Thomas,
I suggest putting Marino & Tamburrini at the top of your reading list. From their abstract, it sounds like their issues are the ones I would urge. Modern AI systems are all heavy with machine learning, which makes the programmer less and less relevant as the machine is trained up. Focusing on the programmer will take your eye away from where most of the action is. A good starting point might be https://en.wikipedia.org/wiki/Supervised_learning
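To put the point in miniature: with supervised learning the programmer writes only the learning rule; what the trained system actually does depends on the examples it was trained on. Here's a toy sketch (the features, labels, and training data are entirely made up for illustration):

```python
# Toy illustration of the point above: the programmer writes the learning
# rule, but the learned behaviour is fixed by the training data.
# The features, labels, and data below are made up for illustration only.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            pred = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
            error = label - pred
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def classify(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Hypothetical features: [bystander_present, target_identified]
# Label 1 = "engage", 0 = "hold fire".
training_data = [([0, 1], 1), ([1, 1], 0), ([0, 0], 0), ([1, 0], 0)]
w, b = train_perceptron(training_data)
print(classify(w, b, [1, 1]))  # -> 0 ("hold fire"); fixed by the data, not hand-written rules
```

Change the training set and you change the machine's "ethical subroutine" without any programmer rewriting a line of code.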
Danaher raises an interesting point (maybe this is not what he meant...). I expect robots to be full-blooded agents in a century or so, but unlike humans, mammals, or birds, they might not care much about any "punishment" you can throw at them. That could create a conundrum.
Posted by: Paul Torek | 01/08/2017 at 07:54 PM
Thomas, thanks for your post on this. Your questions are the right ones to be asking and I appreciate your citations. I'm beginning to work in this area and I don't know all these papers, so your post is super helpful already!
To your first question: I don't think it's enough for the machines to be reasons-responsive to morally salient features of their environment. Additionally, I would argue, the machine must (at least) be responsive to the practices of moral responsibility (praise, blame, etc.) such that it is capable of learning from those practices in a way that affects its ethical subroutines in the future. I say that b/c I think the point of our MR practices is to incentivize learning different, or modified, ethical subroutines. So if a creature, or machine, can't learn new routines from the practices of MR, it doesn't make sense to hold that creature or machine morally responsible. That's the high-level account of my view, but the general idea is just that the machines must have the capacities that our MR practices aim at impacting. If the machine doesn't have those capacities--for example, it can't learn new subroutines from the application of MR practices to it--then I would say that the programmer is responsible, not the machine.
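To make that requirement concrete, here is a toy illustration (my own, and nothing in the view turns on these details): contrast a machine whose ethical subroutine blame and praise can actually update with one whose routine is frozen. On my view, only the former is a candidate for being held morally responsible:

```python
# Toy illustration of the requirement above; the parameters and update
# rules are made up, and nothing in the view depends on these details.

class ResponsiveAgent:
    """An ethical subroutine with a parameter that moral responsibility
    practices (praise and blame) can actually update."""
    def __init__(self, harm_tolerance: float = 0.5):
        self.harm_tolerance = harm_tolerance  # how much expected harm it will accept

    def choose(self, expected_harm: float) -> str:
        return "refrain" if expected_harm > self.harm_tolerance else "act"

    def receive_blame(self):
        # Blame modifies the future subroutine: the agent becomes more cautious.
        self.harm_tolerance = max(0.0, self.harm_tolerance - 0.2)

    def receive_praise(self):
        # Praise nudges the agent back toward acting (an arbitrary, made-up rule).
        self.harm_tolerance = min(1.0, self.harm_tolerance + 0.05)

class FrozenAgent(ResponsiveAgent):
    """Same subroutine, but praise and blame change nothing; on the view
    above, holding it morally responsible would be pointless."""
    def receive_blame(self):
        pass
    def receive_praise(self):
        pass

agent = ResponsiveAgent()
print(agent.choose(0.4))  # -> act
agent.receive_blame()     # our MR practices reshape its future behaviour
print(agent.choose(0.4))  # -> refrain
```

If blaming the machine can't change anything like harm_tolerance, the blame has nowhere to land, and responsibility falls back on whoever set the routine.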
To your second question: one paper you absolutely have to have on your list is Robert Sparrow's "Killer Robots." Sparrow's paper is approaching 250 citations and has had a big impact on the literature (the literature I know of, at least). Sparrow's (super interesting!) thesis is that no one can legitimately be held responsible for war crimes committed by an autonomous weapon system, so it would be unethical to deploy such systems in wartime. The rest of the citation: Journal of Applied Philosophy 24, no. 1 (February 1, 2007): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
As it happens, I'm working on a paper in response to Purves, Jenkins, and Strawser, where I argue that recent results in machine learning via deep neural networks suggest that future AI will be able to act for moral reasons and make moral judgments. The paper has to be ready for a couple of deadlines by Jan 15. If you're interested, shoot me an email and I'll send it to you when that version has taken shape.
Posted by: Zac Cogley | 01/08/2017 at 09:04 PM
Via the magic of Google Scholar:
http://link.springer.com/chapter/10.1007/978-3-319-17873-8_16
https://www.pdcnet.org/pdc/bvdb.nsf/purchase?openform&fp=techne&id=techne_2014_0999_2_4_9
http://www.ingentaconnect.com/content/mcb/jices/2015/00000013/00000002/art00002
Posted by: David Duffy | 01/08/2017 at 11:57 PM
I don't have any well worked out thoughts on this yet, but thanks for compiling this great reading list! My first inclination is to say that the right kind of machine could rise to the level of moral agency, but this raises a lot of interesting issues.
Posted by: Ryan Lake | 01/11/2017 at 10:00 AM
Eddy: "...people think that robots can have free will (and I think, be responsible) if they behave just like us, but only if they are described as being (phenomenally) conscious."
It would seem that the capacity for sentience (having conscious experience) is implicated in supposing an entity is an appropriate target for our *moral* responsibility practices. Punishments are ordinarily intended to produce at least some experienced dysphoria, which on retributive views is deserved payback, and on consequentialist views an incentive to do better next time (of course some consequentialists don't believe an agent need be morally responsible to be a legitimate target of (undeserved) punishment, only reason-responsive). A robot might have the capacity to do better in response to rewards and sanctions without any capacity for consciousness, so we could hold it responsible practically speaking, but perhaps not morally responsible.
Given our retributive natures, for now when robots break bad we'll be inclined to hold morally responsible the robot's human creators, since we know they're sentient. When we're convinced that smart, behaviorally flexible AIs are sentient, we'll be happy to punish them since we'll suppose they deserve it. Sad!
Here's what I think is a pretty accessible piece published in the Lahey Clinic Medical Ethics Journal on holding mechanisms (like and unlike ourselves) responsible, http://www.naturalism.org/philosophy/morality/holding-mechanisms-responsible
Posted by: Tom Clark | 01/11/2017 at 10:26 AM
You've probably seen the draft report (Jun 2016 version on-line) that was just accepted this week by the European Parliament Committee on Legal Affairs: “The most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause.”
Posted by: David Duffy | 01/14/2017 at 12:26 AM