01/07/2017

Comments

Thanks for this, Thomas. One thing that occurred to me is the issue of how a machine enters the moral sphere in the first place. I have written about this in a pop culture anthology, in the context of discussing Spielberg's movie AI:

https://www.amazon.com/dp/B002TOJHUO/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1

My thesis is pretty simple: entry into and acceptance by some social context is everything for an entity to be regarded as morally significant, and AI does a good job (as do some other movies) of doing this, at least for the audience. One thing AI focuses on that is different from other movies I'm aware of is a process of imprinting as the basis for such entry.

I'm not even a stone's throw from claiming authority here--I'm not a phil/mind person, so take this FWIW and with caution. I will email you my chapter; my thought is that a film might jog students' thinking.

Alan,

Thanks, please do send along the chapter. I, too, was a fan of the movie AI. If we had more time in the class, there are tons of movies we could watch. That said, I should say a bit more about my admittedly vague questions in the post. My thoughts were as follows: if we want to push responsibility for ethical subroutines back to the people who wrote them rather than blaming the machines that run them, then I want to let genetics, evolution, and socialization stand in for the programmer in the case of humans. How are the programmers themselves responsible for their own ethical subroutines? After all, the programmers' own ethical subroutines dictate how they decide to write the ethical subroutines for the machines.

Thomas--

Apologies and thanks--I read your post more broadly than that. I'll give the more restricted thesis a bit more thought and see if I can help. But again, I'm a bit out of my comfort zone here.

Joshua Shepherd has the x-phi paper showing that people think that robots can have free will (and I think, be responsible) if they behave just like us, but only if they are described as being (phenomenally) conscious. There are also some other interesting results (e.g., about whether people say such robots are possible).

This post is really timely for me, since in my intro class I always do a trial of a robot (who kills his creator) to discuss theories of mind and free will/responsibility. I usually just use standard stuff (Searle vs. Dennett, Galen Strawson, Wolf, etc.), but if anyone knows of more useful material from the readings Thomas found, or from elsewhere, *that would be accessible for intro students*, let me know!

Thomas,

I suggest putting Marino & Tamburrini at the top of your reading list. From their abstract, it sounds like their issues are the ones I would urge. Modern AI systems lean heavily on machine learning, which makes the programmer less and less relevant as the machine is trained up. Focusing on the programmer will take your eye away from where most of the action is. A good starting point might be https://en.wikipedia.org/wiki/Supervised_learning
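To make that point concrete, here is a minimal, hypothetical sketch of supervised learning (assuming scikit-learn is available; the feature names and labels are invented for illustration and are not drawn from any of the cited papers). The decision rule the system ends up with is induced from labeled examples rather than written out rule-by-rule by the programmer:

# Toy sketch (hypothetical): the decision rule is learned from labeled
# training cases, so the programmer specifies the learning procedure and
# the data, not the behavior itself.
from sklearn.tree import DecisionTreeClassifier

# Invented features per scenario: [risk_to_bystanders, benefit_to_user, breaks_explicit_rule]
X_train = [
    [0.0, 0.8, 0],
    [0.9, 0.1, 1],
    [0.2, 0.6, 0],
    [0.7, 0.3, 1],
]
y_train = [1, 0, 1, 0]  # human-supplied labels: 1 = "act", 0 = "refrain"

model = DecisionTreeClassifier(max_depth=2)
model.fit(X_train, y_train)           # the learned "subroutine" emerges here

# The programmer never wrote this particular decision rule by hand.
print(model.predict([[0.5, 0.5, 0]]))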

Danaher raises an interesting point (maybe this is not what he meant...). I expect robots to be full-blooded agents in a century or so, but unlike humans, mammals, or birds, they might not care much about any "punishment" you can throw at them. That could create a conundrum.

Thomas, thanks for your post on this. Your questions are the right ones to be asking and I appreciate your citations. I'm beginning to work in this area and I don't know all these papers, so your post is super helpful already!

To your first question: I don't think it's enough for the machines to be reasons-responsive to morally salient features of their environment. Additionally, I would argue, the machine must (at least) be responsive to the practices of moral responsibility (praise, blame, etc.) such that it is capable of learning from those practices in a way that affects its ethical subroutines in the future. I say that b/c I think the point of our MR practices is to incentivize learning different, or modified, ethical subroutines. So if a creature, or machine, can't learn new routines from the practices of MR, it doesn't make sense to hold that creature or machine morally responsible. That's the high-level account of my view, but the general idea is just that the machines must have the capacities that our MR practices aim at impacting. If the machine doesn't have those capacities--for example, if it can't learn new subroutines from the application of MR practices to it--then I would say that the programmer is responsible, not the machine.
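A toy sketch of the capacity described above (entirely hypothetical, and not the commenter's own model): an agent whose "ethical subroutine" is just a table of action weights that praise and blame can actually modify. The suggestion is that a machine with no such updatable capacity would not be an apt target of MR practices:

import random

weights = {"help": 1.0, "ignore": 1.0}  # the agent's current "ethical subroutine"

def choose_action():
    # Sample an action in proportion to its current weight.
    actions, ws = zip(*weights.items())
    return random.choices(actions, weights=ws, k=1)[0]

def receive_feedback(action, praised, lr=0.2):
    # Praise strengthens the chosen action; blame weakens it (floored at 0.1).
    weights[action] = max(0.1, weights[action] + (lr if praised else -lr))

for _ in range(50):
    a = choose_action()
    receive_feedback(a, praised=(a == "help"))  # the community praises helping

print(weights)  # "help" ends up weighted well above "ignore"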

To your second question: one paper you absolutely have to have on your list is Robert Sparrow's "Killer Robots." Sparrow's paper is approaching 250 citations and has had a big impact on the literature (the literature I know of, at least). Sparrow's (super interesting!) thesis is that no one can legitimately be held responsible for war crimes committed by an autonomous weapon system, so it would be unethical to deploy such systems in wartime. The rest of the citation: Journal of Applied Philosophy 24, no. 1 (February 1, 2007): 62–77. doi:10.1111/j.1468-5930.2007.00346.x

As it happens, I'm working on a paper in response to Purves, Jenkins, and Strawser, where I argue that recent results using machine learning via deep neural networks suggest that future AI will be able to act for moral reasons and make moral judgments. The paper has to be ready for a couple of deadlines by Jan 15. If you're interested, shoot me an email and I'll send it to you once that version has taken shape.

Via the magic of Google Scholar:

http://link.springer.com/chapter/10.1007/978-3-319-17873-8_16
https://www.pdcnet.org/pdc/bvdb.nsf/purchase?openform&fp=techne&id=techne_2014_0999_2_4_9
http://www.ingentaconnect.com/content/mcb/jices/2015/00000013/00000002/art00002

I don't have any well worked out thoughts on this yet, but thanks for compiling this great reading list! My first inclination is to say that the right kind of machine could rise to the level of moral agency, but this raises a lot of interesting issues.

Eddy: "...people think that robots can have free will (and I think, be responsible) if they behave just like us, but only if they are described as being (phenomenally) conscious."

It would seem that the capacity for sentience (having conscious experience) is implicated in supposing an entity is an appropriate target for our *moral* responsibility practices. Punishments are ordinarily intended to produce at least some experienced dysphoria, which on retributive views is deserved payback, and on consequentialist views an incentive to do better next time (of course some consequentialists don't believe an agent need be morally responsible to be a legitimate target of (undeserved) punishment, only reason-responsive). A robot might have the capacity to do better in response to rewards and sanctions without any capacity for consciousness, so we could hold it responsible practically speaking, but perhaps not morally responsible.

Given our retributive natures, for now when robots break bad we'll be inclined to hold the robot's human creators morally responsible, since we know they're sentient. When we're convinced that smart, behaviorally flexible AIs are sentient, we'll be happy to punish them, since we'll suppose they deserve it. Sad!

Here's what I think is a pretty accessible piece, published in the Lahey Clinic Medical Ethics Journal, on holding mechanisms (like and unlike ourselves) responsible: http://www.naturalism.org/philosophy/morality/holding-mechanisms-responsible

You've probably seen the draft report (June 2016 version online) that was just accepted this week by the European Parliament Committee on Legal Affairs: “The most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause.”
