This past semester I was working with Fiery Cushman and an RA (Becca Ramos) on some studies on manipulation and moral responsibility and we ended up with some findings that I'm completely puzzled by, so I thought I'd see if you all have any ideas about them. (In case you want to read more on the background x-phi work on manipulation, you can find some of that: here, here, here, and here).
One of the things I wanted to test was the intuitive idea that a critical aspect of being manipulated is not knowing that you are being manipulated. Intuitively, it seems like you are more responsible for doing an immoral action when you are aware that the environment has been set up so that you'll do that action. To test this basic idea, we designed a relatively simple study.
In one condition, the manipulated agent didn't know they were being manipulated:
In the 1950s, the government of a small Eastern European country plotted to secretly start a war, using industrial workers, and get revenge on a neighboring country. For the first part of their plan, the government intentionally destroyed farm machinery and set fire to several food stores on purpose. As a result, there was a serious lack of food in the country. Soon the people living in the city couldn't get enough food to feed themselves. The whole city shut down, crime skyrocketed and a small but violent uprising broke out.
The government knew their plan was working perfectly. Right at that time, a group of industrial workers heard on the government news channel that a neighboring village had a surplus of food. After hearing the news, the group of industrial workers raided the small village on the country's border, stealing food from the farmers and killing innocent people. The government had known this would happen all along and felt great about their successful plan.
In the other condition, the agent did know that they were being manipulated (differences in italics):
In the 1950s, the government of a small Eastern European country plotted to secretly start a war, using industrial workers, and get revenge on a neighboring village. For the first part of their plan, the government intentionally destroyed farm machinery and set fire to several food stores on purpose. As a result, there was a serious lack of food in the country. Soon the people living in the city couldn't get enough food to feed themselves. The whole city shut down, crime skyrocketed and a small but violent uprising broke out. The government knew their plan was working perfectly.
Not long after, a couple of the industrial workers were informed by some government employees about the government’s plan to use the workers to attack the neighboring village.
Right at that time, a group of industrial workers heard on the government news channel that a neighboring village had a surplus of food. After hearing the news, the group of industrial workers raided the small village on the country's border, stealing food from the farmers and killing innocent people. The government had known this would happen all along and felt great about their successful plan.
We also included a condition in which the agent was not being manipulated (the government had no intention for the workers to attack the village). We used five different scenarios in total. Each participant read a single vignette and then was asked whether they agreed or disagreed with a statement of the form:
- The workers should be blamed for the damage to the neighboring village.
Here's the pattern of results we got:
We also conducted a replication of this study that included two new questions. Participants were additionally asked whether they agreed or disagreed with statements of the following form:
- The workers had to attack the neighboring village.
- The government should be blamed for the damage to the neighboring village.
Here's the pattern of results we got for all three measures:
In sum, we continue to find that being manipulated affects judgments of blame for the manipulated agent, blame for the manipulator, and also whether the agent 'had to' do the action, but we find no effect of knowledge on any of these measures! I'm a little surprised by these findings, so I'd love to hear any ideas you all might have about what's going on.
(Also happy to share stimuli, data, and experimental protocols if anyone is interested in this stuff.)
First, the prompts seem to waffle between the words "country," "city," and "village," which might have confused the participants a bit.
More importantly, I'm not sure the Knowledge condition does what you think it does. You say a couple industrial workers learn about the government's plan, but it's not made clear if those same workers were the ones doing the raiding or if the ones doing the raiding also knew about the plan.
Also, if government employees tell the industrial workers about the plan, is it manipulation at that point? It seems more like enlisting the workers as soldiers than like using them as pawns.
Finally, when I first read, "For the first part of their plan, the government intentionally destroyed farm machinery and set fire to several food stores on purpose. As a result..." I thought the government destroyed the infrastructure of the *neighboring* country. After all, that's who the government wanted revenge on.
Posted by: Nathan Nguyen | 01/11/2016 at 08:34 PM
Hi Jonathan:
I'd like to suggest the following very tentative hypotheses:
1. Manipulation per se is not the factor driving the judgments, but a difference in the assessment of the situation that is associated with it.
More precisely, in order to assess whether the workers should be blamed, participants make intuitive - and in many or most cases, unconscious - probabilistic assessments about a number of variables not specified in the scenario, e.g. were they going to starve to death if they didn't raid the other village? Could they save their children? Did they attempt to ask the government for help first? If so, were they turned down?
The percentage of participants who intuitively reckoned the situation probably was so dire that the workers who attacked were not at fault is greater than the percentage who made a similar assessment in the case in which there was no ill intent on the part of the government. The percentage in question isn't affected by considerations such as whether the workers knew about the manipulation.
2. Nearly all participants reckoned there probably was no time for the two workers to let the others know, or if there was, they still didn't say anything, maybe due to fear - i.e., they reckoned that the vast majority probably didn't know they were being manipulated.
Also, perhaps some participants were willing to blame those two but not the rest, and they didn't have that option, so they chose not to blame the workers. Some other participants may have reckoned that probably those two justifiably feared for their lives if they talked, so they weren't to blame for their silence, either.
3. A combination of 1. and the first part of 2.
Posted by: Angra Mainyu | 01/11/2016 at 08:39 PM
Nathan, thanks for pointing this stuff out! I think you're totally right that these things might be confusing about that particular scenario. One way we tried to address this sort of worry was by using 5 different scenarios which differed from each other in tons of ways (e.g., one was about a mother-in-law who gets her daughter-in-law to break into a pharmacy, one was about a fisherman who steals another person's boat to escape an oncoming flood, and so on). The idea behind doing this is that even if each individual scenario has a few problems, it won't be the case that all of the scenarios have the same problems, so if we find an effect across all of the scenarios, we can be pretty sure it was not due to any one particular small flaw.
The pattern we found was pretty robust across these different scenarios: for each of the five different scenarios we used, we found no effect of knowledge, but an effect of whether or not the agent was manipulated. In case it's helpful, I've put up all 15 of the vignettes we used (5 scenarios, each with 3 conditions) here: https://www.dropbox.com/s/2ddij6269ziwq22/Scenarios.pdf?dl=0
These comments are really helpful though, and I'll definitely keep them in mind when we use this same scenario in future studies!
Posted by: Jonathan Phillips | 01/12/2016 at 10:55 AM
I don't find that surprising really - for me there is no intuition that carrying out an act that you were manipulated into is more immoral if you are aware of the manipulation UNLESS there was a way to escape the circumstances of the manipulation and you chose not to. In this example, the knowledge that they were being manipulated didn't change the circumstances of not having food. The situation they were in was exactly the same, and the knowledge of how they got there was irrelevant. It would change the situation if they learned that they had been manipulated with lies - that the village had no surplus or that there were supplies nearby that they could acquire without bloodshed.
(On a completely tangential note, I wonder how these results would tie in with other data on nudging and manipulative effects which show an increase in effectiveness once they are known about)
Posted by: Ambivalent PhD | 01/12/2016 at 04:07 PM
Angra, this is a really nice set of hypotheses, and I think you're right that, taken together, they would explain why knowledge is not relevant here.
I actually had a similar question about whether the basic effect of manipulation could be explained by differences in the perceived situational constraint faced by the agent. To test this possibility, we asked participants two questions: (1) whether they agreed that the 'manipulator' (e.g., the government) *made* the agent do the immoral action and (2) whether they agreed that the *situation caused* the agent to do the immoral action. What we ended up finding was kind of surprising. Participants agreed more that the manipulator made the agent do the immoral action when the manipulator acted with the intention of getting the agent to do the action. However, there was no corresponding difference in judgments about situational constraint. That is, they didn't agree more that the situation caused the agent to do the immoral action when the manipulator acted intentionally. The interaction was actually pretty large too (if you want to see the details of that study or the graph of the results, you can find it here: http://people.fas.harvard.edu/~phillips01/papers/Phillips_Manipulating_Morality.pdf#page=31 ).
So basically, I agree that we shouldn't expect knowledge to play much of a role if the original effect were just due to straightforward situational constraint. However, I was also thinking that since the original effect doesn't seem to be about this, but instead about whether the manipulator has the intention of getting the agent to do the immoral action, the agent's knowledge might matter for her moral responsibility. I guess I was wrong though!
Anyway, I'd love to hear what you think of this response, any further thoughts you have on the study I just mentioned, or even other ways of trying to make sense of all of this!
Posted by: Jonathan Phillips | 01/12/2016 at 05:23 PM
Cool idea Ambivalent! If I'm understanding your suggestion correctly, your basic thought is that knowledge really might affect the moral responsibility of manipulated agents, but it would only do so when the agent has some way of escaping the situation that the manipulator has put them in. I think you could really be onto something here, and it'd be cool to figure out a way to test it. Can you think of a way of changing any of the scenarios we've used so far to implement this basic idea? I'd be willing to run a study on this to see if it's right -- I think it could be really helpful in understanding the psychological processes behind manipulation/moral responsibility.
Here's a link to the scenarios again: https://www.dropbox.com/s/2ddij6269ziwq22/Scenarios.pdf?dl=0
Also, I really haven't thought much about how these studies might tie in with the nudge literature, but I agree it's worth thinking about more. Does anyone know if there is already work on whether people see nudges as reducing people's moral responsibility? It seems like the sort of thing that has probably been done, no?
Posted by: Jonathan Phillips | 01/12/2016 at 05:52 PM
Jonathan, thanks for the thorough reply. I think that raises a lot of issues, and I'll need to think more about a number of them. For now, what I'm thinking is:
First, hypothesis 2. in my previous reply doesn't seem affected - though I don't consider it a probable explanation.
Second, I'm not sure that asking participants whether the situation caused the agent to do the immoral action is an effective way of assessing whether those participants think the situation was more dire and, for that reason, justified the behavior. At least in my assessment, the situation was always one of multiple causes, but I don't see why this should affect moral judgments (though there is the issue of mistaken moral assessments; I say more below).
Btw, personally, I think a person is to blame for some action A iff A was an immoral action. But I don't think this plays a role in the study, given that the question was whether the situation caused them to raid the village (without stating that the action was immoral).
Third, there is the question of moral disagreement among participants, and moral errors.
More precisely, some participants consider the manipulated agents blameworthy, but others don't. Also, assessments of the level of blameworthiness seem to vary (if I got that right) among those who attribute blame.
One might wonder what causes the disagreements. Given that all participants (one should think) understood the scenario and agreed about the actions of the government, the workers, etc., that were stated in it, it seems to me that either:
a. Different people are assessing the probability of some of the other non-moral facts of the situation differently, or
b. They disagree on whether manipulation reduces or blocks blame, but not because of non-moral facts related to the situation.
If it's a, then hypothesis 1. in my first post still seems likely to me (I'm not sure what the alternative would be, if a. holds), even if this isn't related to the participants' assessment of whether the situation caused them to act like that.
If it's b, then it seems some of the participants are making erroneous moral assessments about the influence of manipulation (this isn't to say such assessments would remain erroneous after further consideration, discussion, etc.).
In that context, in my view manipulation doesn't matter (on a proper assessment), but a certain percentage of people are inclined to mistakenly believe it does, and those who are so inclined are usually similarly inclined to think knowledge of the manipulation doesn't matter; the manipulated people are still not to blame (or deserve less blame).
That said, I take it you're trying to assess people's moral judgments without assessing whether they're correct (or so it seems to me), so the third point may not apply.
Fourth, with regard to Ambivalent PhD's reply, if I understand it correctly (Ambivalent, please let me know if I got it wrong), it seems Ambivalent also has the intuition that manipulation per se does not matter. Rather, what matters is what options are available to the agent (as far as the agent knows, that is). That seems plausible to me.
However, if we hold that - in light of the causal attributions to the situation - the moral assessments of a certain percentage of the participants take into account the intentions of the government (and not just other probable effects of those intentions, like changes in the situation of the workers), then perhaps even a scenario involving knowledge of the manipulation that gives the workers a way out would not entirely cancel the effect of the manipulation in terms of attributions of blame.
So, for example, knowledge of the manipulation that gives the workers a way out would probably result in increased blame relative to the workers who weren't told about the manipulation, but they would still get less blame than workers who weren't manipulated and had a similar way out. I suppose that can be tested by explaining the way out in some of the scenarios.
Posted by: Angra Mainyu | 01/13/2016 at 12:05 AM
I think that the results are only surprising if we take the presence of a manipulator to undermine responsibility because it would undermine some basic requirement for free will and moral responsibility in general (e.g. being the source of one's actions, having the ability to do otherwise). In this framework, because there is an epistemic side to moral responsibility and freedom of action, we would expect knowledge to have an effect.
However, there is an alternate interpretation of Phillips's findings (the ones already published): that the presence of a manipulator decreases blame only because "blame" is something that is shared. Manipulated persons are perceived as less blameworthy only because the manipulator takes part of the blame (and not because the manipulator's presence undermines some responsibility-underwriting capacity). Under this interpretation, it makes sense for knowledge to have no effect.
Posted by: Florian Cova | 01/14/2016 at 09:04 AM
Florian, nice point - you're obviously totally right that the effects here are only surprising if we take the reduction in moral responsibility between the manipulation and no-manipulation cases to be due to an undermining of some variable relevant to the agent's free will.
Your alternative suggestion for what may be explaining the reduction of moral responsibility is a good one. I was actually really worried about that alternative explanation as well, and so I ran a series of studies to test it. You can see the results of those here (Studies 4a and 4b): http://people.fas.harvard.edu/~phillips01/papers/Phillips_Manipulating_Morality.pdf#page=21
In brief, what we end up finding is that the difference in participants' judgments doesn't seem to arise because they are attributing more blame to the manipulator. Rather, the reduction seems to occur because they are more inclined to think the agent was controlled by the manipulator (which suggests that the reduction in moral responsibility is due to undermining something relevant to free will). The studies in the paper above should make this much clearer.
I probably should have said this at the very beginning, but the reason I used these particular cases was that they had previously been shown to be pretty clear cases of reduced moral responsibility from free-will-undermining manipulation. The thought was that, now that we have such cases, giving the agent knowledge should really reduce the effect of manipulation on moral responsibility, but that's exactly what I didn't find.
Anyway, I'd love to hear your thoughts about this response. I am definitely a little stumped, and the pattern here is definitely making me question whether there might be some other potential explanation of the original effect of manipulation (even if it's not a redistribution of blame).
Posted by: Jonathan Phillips | 01/14/2016 at 01:58 PM
Weird (I think, though I like a number of the points that have been made). There are also results out there in light of which your findings seem surprising – e.g., Cameron, Payne, and Knobe (http://www.unc.edu/~dcameron/cameronpayneknobe.pdf) find that implicit biases are seen as more responsibility-undermining when they operate unconsciously than when they operate automatically (out of one's control), suggesting that (some) knowledge condition might be even more central than a control condition. There's also "Which Nudges do People Like?" (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2619899) and some other survey work by Sunstein that speaks to Ambivalent's question. There must be other stuff out there, too.
In terms of figuring out what’s going on, I like the idea of trying to tease out just why we would’ve thought that knowledge of the manipulation would matter in the first place – and then making sure the scenarios include it. I find it antecedently plausible that it’s something in the vicinity of awareness of open alternatives, as Ambivalent suggested. Plausibly, knowing of the manipulation amounts to knowing of a way in which one might do otherwise – namely, if one were not manipulated. So if whatever the agents gained knowledge of in your scenarios didn’t include those epistemic possibilities for some reason, then I wouldn’t find the result surprising. After looking at the scenarios, though, it seems like most of them pretty clearly do (in some scenarios perhaps not at the time of the eventual action, but at some point in the past leading up to it).
So I don’t know: Maybe it’d still be worth trying to make this aspect of the scenario even more explicit, and seeing if there’s still no effect – maybe including more on how the manipulees must be deliberating about things. Something like: “Not long after, a couple of the industrial workers were informed by some government employees about the government’s plan to use the workers to attack the neighboring village (who then told all of the other workers). The workers realized this meant that if the government’s plan wasn’t followed, things could go differently.
Shortly after, a group of industrial workers heard on the government news channel that a neighboring village had a surplus of food. Even though they knew this was part of the government’s plan to use them, the group of industrial workers raided the small village on the country's border anyway, stealing food from the farmers and killing innocent people.”
Posted by: Dylan Murray | 01/14/2016 at 07:58 PM
Dylan, yeah I totally agree that the right way to try to figure this out is to think carefully about why we would have thought knowledge would matter in the first place and then begin testing whether knowledge is affecting the things we think it should be.
I was thinking along the lines of it mattering for the extent to which we represent alternative courses of actions as being open to the manipulated agent. (This was why I included the question about 'had to' in the second study. Of course, it turned out not to influence answers on this question either, and so the remaining question is *why* that would be the case.)
Anyway, I liked your idea for a follow-up study using deliberation to make the alternatives clearer. My one worry about that approach though is that we would want to make sure it's the agent's knowledge (not the deliberation itself) that is driving any results. To do this, I think we'd have to include the agent deliberating in all three cases, and then see if knowledge+deliberation would be enough to change the effect of manipulation. Does that seem right to you?
Posted by: Jonathan Phillips | 01/18/2016 at 05:03 PM
Jonathan, great idea to test the question of whether knowledge of manipulation 'frees' the manipulee. My intuition is that it does to the extent it opens up otherwise precluded options for the manipulee to resist the manipulation (and I predict the folk would share my intuition). I need to look more closely at your study and the comments above, but I suspect that your cases may be presenting the manipulation as too weak even in the 'no knowledge' cases, such that you aren't driving down the 'had to' responses or the 'blame' responses very much, leaving the knowledge factor less 'work' to do. (Maybe your other cases are stronger.) The workers are manipulated because their conditions are created by the gov't, but they still freely decide, based on their own desires, etc. to attack given these conditions of hunger, etc. I suspect if you used cases that more clearly 'bypass' the manipulees' mental states, you'd get a bigger effect of knowledge, since it would seem more plausible that the manipulees would know that their mental states were being manipulated and might be able to (at least try to) get them 'back online'. Just some initial thoughts.
Posted by: Eddy Nahmias | 01/21/2016 at 11:55 AM
Hey Jonathan, I think you’re right about the deliberation idea. It’s at least a further, more specific question. The issue you raise about holding things fixed across conditions might raise more general difficulties, though. If we take mentioning their mental states (or even the information they’re informed of) in the knowledge condition to require mentioning its absence in the original condition, will that always be weird? “The agent doesn’t know that if she weren’t manipulated, then she might…” Of course. She has no clue she’s being manipulated! Maybe that sort of thing – just what exactly they (don’t) know – can’t really be held completely fixed due to pragmatic constraints in any event?
Anyway, it seems like you could talk about the agents’ epistemic states in the knowledge condition as a way of ramping up the IV without mentioning deliberation – e.g., reminding people that the agent(s) currently know this, at the time of action (or did during some previous decision), but not say anything about how they use that information. Maybe a little “they knew that they could do something else – namely, if they weren’t influenced by the manipulators in the way they were – and they did it anyway.” Describing the agents’ mental states (rather than merely the information they’re given) seems like it’d make it easier to directly test the hypothesis that what matters is specifically knowledge of *open alternatives*, too, which I’d definitely be interested to see results on (though I guess you could have the people who inform the agents that they’re being manipulated also tell them about what specific alternatives this involves).
(One more: I find it tricky to know what language to use here. Anything factive might imply too much about the metaphysics – that new possibilities really are open that wouldn’t otherwise be, or at least make participants focus on that. On the other hand, ‘thinks’ and ‘believes’ might imply that the agent is mistaken, and presumably it’s only genuinely open alternatives that matter… Maybe: “The agent discovered/knew she was manipulated. As a result, she understood that if she wasn’t influenced in this way, she might do something else - namely, etc. etc. But she did it anyway.” Saying she knows that she’s manipulated is fine. And maybe ‘understanding’ a mere conditional makes the second sentence neutral enough.)
Posted by: Dylan Murray | 01/28/2016 at 01:35 AM
Eddy,
Interesting thought! I had exactly the opposite intuition -- that these cases were so constraining that it didn't matter whether or not the agent had knowledge because they couldn't have done otherwise regardless. I'm intrigued by your idea though.
I definitely agree with your thought that the cases I used aren't cases of bypassing -- the agents' decisions are always made through their normal decision making processes.
Do you have a set of cases we could try to use that tend to involve bypassing (or better yet, encourage some participants to perceive the agent as bypassed, while leaving others seeing the agent as not being bypassed)? It seems like if we could find a set of cases like this, and we introduced knowledge of manipulation, then we could use the difference in perception of bypassing as a pretty clean test of the hypothesis you had. I'm thinking you and Dylan might have had some cases which did this -- is that right? What do you think about this approach?
Posted by: Jonathan Phillips | 02/05/2016 at 04:24 PM
Dylan - okay cool, I see the basic idea now. This definitely seems like it's possible to do in a study, and I agree that as long as the knowledge of the alternative possibility is of the conditional form (if I had not been manipulated by B, then I could have not done p), then it's not problematic to include it.
It seems like the minimal pair we'd want is to include that conditional statement in both conditions, but only in one of them to have the agent know that conditional to be true. This should help us avoid any changes in participants' perceptions about the metaphysics of the scenario. Does that seem right to you?
I was also wondering what you thought of Eddy's idea and my response to it. Between his idea and yours, it seems like we have a few further things we could try testing out. What do you think? If you guys are game, we could all put our heads together on this.
Posted by: Jonathan Phillips | 02/05/2016 at 04:41 PM
Awesome. Yeah, I think that is the way to do the knowledge of the conditional, Jonathan. State it in both conditions and just include that the person knows it in the new ones.
I might have both your and Eddy’s intuition (hey Eddy!). In the paper with Tania, we end up distinguishing external manipulation of the manipulee’s environment from internal manipulation of her psyche, but the latter types of case we use are just bypassing. And of course, external manipulation is typically mediated through psychology. But I suspect the type and salience of just how the manipulees’ mental states might be constrained has an independent effect. The scenarios you’ve used might be high on external constraint but still leave internal constraint low.
Tania’s and my studies were designed to build in the bypassing, so probably aren’t any help in getting decent numbers of subjects to go either way. There are the near-midpoint responses for the NMNT abstract condition in Eddy’s and my paper, but those don’t involve manipulation… I’ll try thinking about scenarios that’d incorporate both of those variables, though. I do think it’s basically that type of manipulation that’s most threatening, and so most dissipate-able by knowledge (though looking for any interaction could also be a follow-up).
Posted by: Dylan Murray | 02/06/2016 at 03:07 PM
I'll think about these issues more too. I'll look at some unpublished stuff Dylan and I have on manipulation cases to see if they help. I think they are brain manipulation cases. And Shepard, Reuter, and I used brain manipulation cases in our Cognition paper, and pushed down responses well below the midpoint on questions about free will, responsibility, choice, causation, and control.
Posted by: Eddy Nahmias | 02/07/2016 at 10:57 AM