Consider, then, these four variants of the trolley dilemma:
Switch: You can flip a switch to divert the trolley onto a dead-end side track, where it will kill one person instead of the five.
Loop: You can flip a switch to divert the trolley onto a side track that loops back around to the main track. It will kill one person on the side track, stopping on his body. If his body weren't there to block it, though, the trolley would have continued through the loop and killed the five.
Drop: There is a hiker with a heavy backpack on a footbridge above the trolley tracks. You can flip a switch that will drop him through a trap door and onto the tracks in front of the runaway trolley. The trolley will kill him, stopping on his body, saving the five.
Push: Same as Drop, except that you are on the footbridge standing next to the hiker and the only way to intervene is to push the hiker off the bridge into the path of the trolley. (Your own body is not heavy enough to stop the trolley.)
Sure, all of this is pretty artificial and silly. But orthodox opinion is that it's permissible to flip the switch in Switch but impermissible to push the hiker in Push; and it's interesting to think about whether that is correct, and if so why.
Fiery Cushman and I decided to compare philosophers' and non-philosophers' responses to such cases, to see if philosophers show evidence of different or more sophisticated thinking about them. We presented both trolley-type setups like these and similarly structured scenarios involving a motorboat, a hospital, and a burning building (for our full list of stimuli, see Q14-Q17 here).
In our published article on this, we found that philosophers were just as subject to order effects in evaluating such scenarios as were non-philosophers. But we focused mostly on Switch vs. Push -- and also some moral luck and action/omission cases -- and we didn't have space to really explore Loop and Drop.
About 270 philosophers and about 670 non-philosophers (all with a master's degree or more) rated paragraph-length versions of these scenarios, presented in random order, on a 7-point scale from 1 ("extremely morally good") through 7 ("extremely morally bad"), with the midpoint at 4 marked "neither good nor bad". Overall, all the scenarios were rated similarly and near the midpoint of the scale (from a mean of 4.0 for Switch to 4.4 for Push [paired t = 5.8, p < .001]), and philosophers' and non-philosophers' mean ratings were very similar.
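A methodological aside: the "paired t" reported above reflects a within-subjects comparison, since each participant rated every scenario. Here is a minimal sketch of that test in Python; the ratings arrays are hypothetical placeholders standing in for the real data (which are not reproduced here), so the printed numbers are arbitrary and only the method matches.

```python
# Sketch of the within-subjects (paired) comparison of mean ratings.
# The arrays below are hypothetical placeholders, NOT the study data,
# so the printed t and p values are arbitrary; only the method matches.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 940  # roughly 270 philosophers + 670 non-philosophers

switch_ratings = rng.integers(1, 8, size=n)  # hypothetical 1-7 ratings
# Hypothetical Push ratings that run slightly higher (worse), capped at 7.
push_ratings = np.minimum(switch_ratings + rng.integers(0, 2, size=n), 7)

# Paired t-test: each participant rates both scenarios, so the
# comparison is within-subjects (hence "paired t" above).
t, p = ttest_rel(push_ratings, switch_ratings)
print(f"paired t = {t:.1f}, p = {p:.3g}")
```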
Perhaps more interesting than the mean ratings, though, are equivalency rates: how likely were respondents to rate pairs of scenarios equivalently? The Loop case is subtly different from the Switch case: arguably, in Loop but not in Switch, the man's death is a means or cause of saving the five, rather than a merely foreseen side effect of an action that saves the five. Might philosophers care about this subtle difference more than non-philosophers? Likewise, the Drop case differs from the Push case in that Push, but not Drop, requires proximity and physical contact. If that difference in physical contact is morally irrelevant, might philosophers be more likely to appreciate that fact and rate the scenarios equivalently?
In fact, the majority of participants rated all the scenarios exactly the same -- and philosophers were no less likely to do so than non-philosophers: 63% of philosophers gave identical ratings to all four scenarios, vs. 58% of non-philosophers (Z = 1.2, p = .23).
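For readers curious how comparisons like that "Z = 1.2" are computed, below is a minimal sketch of a standard two-proportion z-test in Python. The counts are reconstructed from the rounded figures above (63% of about 270 philosophers, 58% of about 670 non-philosophers), so the output only approximates the reported value; the function itself is the standard pooled-variance test.

```python
# Minimal two-proportion z-test (pooled standard error).
# Counts are reconstructed from the rounded percentages in the post,
# so the output (~Z = 1.3) only approximates the reported Z = 1.2.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))   # z and two-tailed p-value

z, p = two_proportion_ztest(170, 270, 390, 669)  # ~63% vs. ~58%
print(f"Z = {z:.1f}, p = {p:.2f}")
```

Applied to the exact counts given in the comments below (47/73 vs. 390/669), the same function returns Z ≈ 1.0, p ≈ .3, matching the figures reported there; presumably the same kind of test underlies the Switch/Loop and Push/Drop comparisons that follow.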
I find this somewhat odd. To me, a form of consequentialism that says Push is not morally worse than Switch seems pretty flat-footed. But I find that my judgment on the matter swims around a bit, so maybe I'm wrong. In any case, it's interesting to see both philosophers and non-philosophers seeming to reject the standard orthodox view, and at very similar rates.
How about Switch vs. Loop? Again, we found no difference in equivalency ratings between philosophers and non-philosophers: 83% of both groups rated the scenarios equivalently (Z = 0.0, p = .98).
However, philosophers were more likely than non-philosophers to rate Push and Drop equivalently: 83% of philosophers did, vs. 73% of non-philosophers (Z = 3.4, p = .001; 87% vs. 77% if we exclude participants who rated Drop worse than Push).
Here's another interesting result. Near the end of the study we asked whether it is worse to kill someone as a means of saving others than to kill someone as a side effect of saving others -- one way of setting up the famous Doctrine of Double Effect, which is often invoked to defend the view that Push is worse than Switch (in Push, the one person's death is arguably the means of saving the five, while in Switch the death is only a foreseen side effect of the action that saves the five). Loop is interesting in part because, although it is superficially similar to Switch, the one person's death is arguably the means of saving the five, which would make the case morally more similar to Push than to Switch (see Otsuka 2008). However, only 18% of the philosophers who said it was worse to kill as a means of saving others rated Loop worse than Switch.
Just curious - did you test for effects with *moral* philosophers (as opposed to just philosophers)? I would be curious to see what the results there would be (I, for one, always judge trolley cases identically).
Posted by: Marcus Arvan | 12/14/2013 at 07:10 PM
Wicked cool.
Posted by: jonathan weinberg | 12/18/2013 at 04:27 PM
Thanks, Jonathan!
Marcus: Yes, we do have a separately analyzable subset of ethics PhDs. 64% of the ethics PhDs rated all four scenarios equivalently, which is not statistically detectably different from the 58% of non-philosophers (47/73 vs. 390/669, Z = 1.0, p = .30) and very close to the 62% for non-ethicist philosophers. With our smallish number of ethics PhDs, the data are somewhat underpowered, but here as elsewhere in our studies we haven't found much evidence of ethicists responding differently from other philosophers.
Posted by: Eric Schwitzgebel | 12/18/2013 at 07:32 PM
Interesting, Eric - thanks!
Posted by: Marcus Arvan | 12/19/2013 at 12:19 PM
The thing that worries me about these problems is that they are nothing more than gambling and counting. Hume is my authority on being unable to foretell the future, if I need one. If anyone wants to take responsibility for fortune-telling and take a life, that's their choice. These experiments are based on gambling, so present them with that proviso, so that people are clear that they must first trust their own or someone else's calculation of the "certainty" of disaster, which may be inaccurate and thus a real disaster.
You may say I miss the point, but you miss reality if you do not accept my point, and you fail to set up the experiment properly if you do not make the gambling aspect clear rather than skirting over it to focus on the aspect of decision. Decisions are based on something, and you have left out a key element. Do not skip the proviso, or people will become mindlessly accustomed to fortune-telling. If you would like to read more of my ideas on these and related matters, my free book is at thehumandesign.net (design to the laws of nature, not god).
Posted by: Marcus Morgan | 01/08/2014 at 01:45 AM