
09/05/2016

Comments


Aaron A

I'm not convinced that the additional options are totally irrelevant. For example: If one accepts the doctrine of double effect, then option 3 is more permissible than options 1 or 2. This is because options 1 and 2 both use people as a means to stop the trolley, but in option 3 the three people die as a side effect of using the second trolley as a means. So, if the authors counted option 3 as "pushing" in their analysis, that could partly explain why more philosophers chose "push" when there were six options rather than just 2: Adding option 3 adds a permissible "pushing" option that was not present when there were only two options.

I don't think this totally undermines the results--it doesn't even touch the ordering effects--but it does suggest the need for cleaner cases.

Alex Wiegmann

Hi Aaron,

Thanks for your comment. I agree that the 'intermediate' options are not per se irrelevant. For instance, Francis Kamm argues that the right thing to do is to redirect the speed train towards the five people. In our analysis, however, we only considered the two options that are available in both scenarios (push the one man vs do nothing; the other, intermediate options were chosen by only a few subjects). And with regards to these two options, the other four options are irrelevant, I think.
Cheers
Alex

hjup

With regard to the two original options, the additional four options are irrelevant - you think.
Well, apparently many philosophers thought they weren't. Why should I trust your intuition but not theirs?

Alex Wiegmann

Hi hjup,

Thanks for commenting! I do not believe that many philosophers thought that the additional options were relevant with regard to the two original options. The data only show that they were affected by this factor. Similarly, Schwitzgebel and Cushman showed that philosophers were affected by the order of presentation of moral dilemmas, but I do not know of any philosopher who claims that the order of presentation ought to affect moral judgments (we obtained an order effect in our study, too).
Imagine you can choose between two ice creams, chocolate and vanilla, and you prefer and choose vanilla. It would seem strange to me if you then chose chocolate once strawberry is added as an option.
There are cases where it is rational to change your preference between two options when other options are added (hands of poker, betting on horses, etc.), but I cannot see how this could be true in our study. If you or anyone else can explain why the additional options in our six-option scenario ought to affect the choice between the two options that are available in both scenarios, please tell us!

Cheers
Alex

Bryan Frances

I know nothing about this stuff. So feel free to ignore. But won't contrastivism have something to say here, suitably generalized?

Joachim Horvath

Hi Bryan, to some extent it might - but if we only focus on the two "extreme" options of pushing the heavy worker and doing nothing, it still seems to be the same contrast in both the Two Options and the Six Options case. But maybe I've just misunderstood the way in which you thought that contrastivism might be of help here...

Pascale W

Hey Joachim and Alex,

great job, really cool results! Maybe it is because I'm a philosopher, but I don't think your results are all that bad for philosophers. So maybe I can challenge you and try to interpret the results in a more cheerful way.

My idea is this:
I don't believe that the additional 4 options are irrelevant. While options 1 and 6 each clearly favour one ethical position (utilitarianism and deontology, respectively), the options in between should be sub-optimal for both positions. So maybe this is what you mean when you classify them as irrelevant.
However, I could imagine two ways in which people might learn something from those additional four options:
1.) It might be that when reading the four options, people become more and more aware of the outcome, namely people dying. The fact that people die seems to me the only constant across all options. So what these options basically do is make you more and more aware of the terrible thing that’s going to happen: people will die. Reading about more and more cases in which people die might trigger a higher willingness to save as many people as you can. You could interpret this either as an emotional reaction (subjects are more horrified and motivated to avoid the bad outcome) or as happening at a higher level of cognition (subjects become aware of what’s at stake and are better able to focus on this). Would be interesting to test…
2.) Alternatively, I could imagine that, reading through all the options, people become more and more aware that an increase in the aversiveness of an action is accompanied by a much greater increase in its usefulness. So, in the sense of a slippery slope: if you are willing to redirect the train and kill 5 people as a side effect, why not go a step further and use 4 people as a means of saving the nine others? One fewer person dead...

Why do these two possibilities show something positive about philosophers? If at least one of them is correct, the data show that philosophers are much more sensitive to such additional information. Once philosophers are provided with additional information, it gets integrated into their intuitions, leading to a more nuanced judgment about what to do.
This interpretation would explain both why philosophers' intuitions differ much more strongly between the two- and six-option conditions and why they show stronger order effects.

So what can this show about the expertise view? Maybe being an expert intuiter is not a matter of having robust intuitions, but rather the opposite: making use of new information and adapting your intuitions accordingly. Unlike laypeople, philosophers use all the relevant information available to them, and thus we can trust that their intuitions are much better informed than those of laypeople.

I look forward to your opinion on that!

Steve F

To show an irrelevant options effect, you need to show a difference in preference between the same two options for the 2 and 6 option case. But it's unclear here whether "push" in your chart represents just option 1, or options 1-5. I assume it must be the latter, 1-5 (100% of respondents chose either option 1 or 6? Implausible). In that case, you haven't shown an irrelevant options effect, just that a number of philosophers thought that one of the additional options was relevant (as defended by hjup and Aaron).

Steve F

...plausibly I might go for option 3 myself.

Alex Wiegmann

Hi Steve,

Thanks for pointing out some ambiguities! (I changed the post accordingly). You write "To show an irrelevant options effect, you need to show a difference in preference between the same two options for the 2 and 6 option case.". That is exactly what we found in the data. Furthermore, "But it's unclear here whether "push" in your chart represents just option 1, or options 1-5. I assume it must be the latter, 1-5 (100% of respondents chose either option 1 or 6? Implausible)." Sorry for the misunderstanding. Just option 1 counts as "push". For Six Options, only the two “extreme” options that are also available in Two Options (doing nothing and push) are considered in the figure (the intermediate options were chosen by ~20% of experts and 10% of lay people; the post is changed accordingly). And I agree with you: if you choose to do nothing in Two Options and choose option 3 in Six Options, then there is no inconsistency.

Cheers

Alex

Steve F

Thanks for the clarification. These results are interesting.

Steve F

Alex, another thought: mightn't it just be that more of the people who prefer not doing anything to pushing also prefer one of options 2-5 to either pushing or not doing anything? That makes sense: if you favor pushing then you're probably a straightforward consequentialist, in which case the other options with nonoptimal consequences won't be appealing, whereas if you favor not doing anything then you probably accept some kind of deontic side constraints, and may well also accept the principle of double effect (and may therefore prefer option 3 to not doing anything). To put it differently, more "do nothing" people will defect when given the other options than "push" people will.

If so, this would result in a higher percentage of "push" (out of "push" and "do nothing" answers) for the 6 option case than for the 2 option case, without any irrelevant option effect. What we would need to know, to diagnose such an effect, is whether the percentage of "push" answers out of ALL answers increases when we go from 2 to 6 options. Your chart doesn't tell us whether this is the case.
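
A minimal sketch of this possibility, with purely hypothetical counts (not the study's data): if more "do nothing" people than "push" people defect to the intermediate options, the push share among the remaining extreme answers rises even though no one actually switches to "push".

```python
# Hypothetical counts chosen only to illustrate the point above; not the study's data.
two_options = {"push": 32, "do_nothing": 68}                       # 100 respondents
six_options = {"push": 30, "do_nothing": 25, "intermediate": 45}   # 100 respondents

def push_share_of_extremes(counts):
    """Percentage choosing 'push' among those who chose one of the two extreme options."""
    extremes = counts["push"] + counts["do_nothing"]
    return 100.0 * counts["push"] / extremes

print(push_share_of_extremes(two_options))   # 32.0
print(push_share_of_extremes(six_options))   # ~54.5, with no increase in 'push' answers
```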

Steve F

...sorry, that was a bit hasty. If I'm not mistaken, we can infer that 53% (66% x 80%) of all respondents chose "push" in the 6 option case, which is significantly more than the 32% who chose "push" in the 2 option case. But still, a less dramatic effect than your chart suggests. (So perhaps 21% of expert respondents have egg on their faces...)
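
Making that back-of-the-envelope inference explicit (assuming, per Alex's clarification above, that roughly 20% of experts chose an intermediate option):

```python
# Rough check of the figures quoted above (approximate percentages, not raw data).
push_among_extremes = 0.66    # 66% chose "push" among the two extreme options (SixOptions)
extreme_fraction = 1 - 0.20   # ~20% of experts chose an intermediate option
push_among_all = push_among_extremes * extreme_fraction
print(round(100 * push_among_all))   # ~53%, vs. the 32% who chose "push" in TwoOptions
```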

Clayton

Hi All,

Tried to post earlier, but it didn't seem to work. I might have been one of the participants (don't know if it was this poll or a similar one) but I know that when I answered a similar question, my answer was based on a kind of double effect reasoning. I guess I have concerns about this that are similar to those raised upthread. It would be interesting to see answers to follow up questions that probed for reasons (e.g., the role of intention or treating others as mere means).

Angra Mainyu

Hi Alex,

Leaving aside other issues (some of which I will address later), I think that order effects on difficult moral issues are not, on their own, a problem for the claim of expertise. Our intuitions sometimes change over time, after we think about the matters in question more carefully, consider analogies, talk to other people, etc., but that is plausibly because our preliminary intuitions were wrong and, after further consideration, we arrived at better intuitions on the matter.
It might be that, after considering the six-option case, philosophers had more scenarios and more experience thinking about matters related to the two-option scenario (such as the principles behind the choices), and then made a better assessment (however, in this case that seems less likely, given that experts might be very familiar with this sort of scenario; do you know how familiar they were with similar trolley/train problems?).
Ultimately, it seems to me the way to decide would be to first ascertain what the correct answer is, and then see whether philosophers improved or got worse. Of course, there will be disagreement on that issue, so that can be problematic, but isn't there disagreement on many other cases? Would it be a greater problem in this case?
In my assessment (with a caveat that I'll explain below), to do nothing is the morally better option of the two, by far. But it seems that when the 6-options alternative was presented first, the number of philosophers who picked the wrong option went up from 32% to 55% (if I'm reading the chart correctly; please let me know if I got that wrong), which is a serious problem.
The caveat is that I think the stipulations of the scenario may be unrealistic in a wrong kind of way.
More precisely, it's easy to imagine the trains, the people on the tracks, etc., and make a moral assessment on the basis of that, despite the fact that in real life we would not face a situation of that sort. So, that's unrealistic, but I don't think in a wrong way.
On the other hand, it might not be easy (or even psychologically doable) to rule out plenty of alternative courses of action that are not intuitively ruled out by the specific features of the scenario that we can imagine.
For example, I find myself immediately assessing that it is (very probably) epistemically irrational on Carl's part to conclude that the heavy worker would stop the train if pushed onto the tracks. I reckon it is even more irrational to add to that assessment that Carl's ramming the train head-on, running at full speed, would not also have a good chance of stopping it. Maybe that's so; maybe the worker somehow sticks to the ground so well that it makes a big difference, etc. But Carl very probably wouldn't be able to tell in so little time.
Moreover, assuming that there is a solution (e.g., Carl cannot get to the tracks in one piece and run), if the heavy worker actually has a reasonably high probability (on an epistemic probabilistic assessment correctly made by Carl on the basis of the info available to him) of stopping the train, that probability will very likely rise significantly if both the heavy worker and Carl are in the train's path, and Carl surely can choose to push the heavy worker and then jump (which I think would still be immoral, but less so than just pushing the other guy, at least under some "all other things equal" condition that the scenario does not seem to exclude).
There are a number of other issues.
Granted, I can choose to consciously ignore all of that and assume, as the scenario stipulates, that the worker will stop the train, etc., but I'm not at all sure that, at some unconscious level, I'm not still assessing that Carl is very probably being epistemically irrational. Maybe experts are better at avoiding that kind of unconscious assessment than I am, though I'm not convinced that that is plausible.

All that aside, it may well be that professional philosophers are not experts concerning the intuitive evaluation of thought experiment cases, when it comes to some (many) cases. After all, there seem to be deep disagreements among professional philosophers on such cases (e.g., abortion, meat eating, gun control, war, immigration, race, effective altruism, etc.). It seems we can establish that at least non-negligible percentages of professional philosophers are very wrong on a good number of moral matters, even if we don't know who's right (with regard to the data from your experiment, maybe most of the philosophers who disagree on whether it's morally better to push still agree that it's only slightly better or worse, so the disagreement is not so deep. Or maybe it is very deep, but we can't tell on the basis of the results, I think).

Joachim Horvath

Hi Pascale, thanks for your rich comments! I will only focus on one issue, which strikes me as the most important here: it just seems implausible that expert ethicists should learn something new from the additional options that would justify the reversal of choice patterns that we found. Remember that these are all people who mostly hold a PhD in philosophy and specialize in ethics. If you are an experienced ethicist, you've already seen many trolley cases and other thought experiments of this kind. So which relevant new information can you actually get from our additional options? That these are cases about people dying, and that this is a terrible thing, should be old news to the expert ethicist. Also, an expert ethicist should be able to handle the different degrees of aversiveness and weigh them, in moral terms, against the number of lives that is at stake in each option. You may be right that some of your suggestions can provide a psychological explanation for lay people's choices. However, I am skeptical that such an explanation would equally apply to expert ethicists (for the reasons given above) - but if it does, then the explanation would also seem to undermine their moral expertise in judging cases of this kind.

RH

Reading through the comments, I am still not sure I understand. Here is what I think I should understand:
The percentage figures in the chart express the following: /from those people who chose either "Push" or "Do Nothing"/, x percent chose the former, 100-x percent the latter?!
Now, /if/ that is on the right track (…), I do not think that the differences in results in the "Two Options" choice versus the "Six Options" choice show anything irrational/disturbing/etc. In sum, I do not think that the results show anything problematic, because - if the interpretation above is correct - introducing new options will make the group of people choosing either "Push" or "Do Nothing" smaller, such that the /percentage/ of those choosing "Push" might go up, even though their absolute number does not increase. Further, that people might change their minds if new options are introduced is not problematic, since the options are not irrelevant. Or so it seems to me.
To illustrate, let us assume there were 6 people in the study. In the Two Options comparison, 5 of them choose "Do Nothing" and 1 chooses "Push". In terms of percentages this will (roughly) be 83% and 17%, respectively. Then the further options are introduced: 1 person chooses "Do Nothing", 1 "Push", and the other 4 participants now choose different options (for instance, the "classic" trolley option, here Option 5). In terms of percentages: 50% versus 50%. Finally, let us assume that the 4 participants switched from "Do Nothing" to Option 5 because they are "deontologists": in the Two Options case they chose "Do Nothing" because the only available means to the good of saving lives would have been unacceptable, whereas in the Six Options scenario a means was now available that they consider morally acceptable.
As mentioned, I do not see anything problematic about this. Picking up the ice-cream comparison in Alex's comment: choosing chocolate instead of vanilla when strawberry is introduced as a further option is odd; but choosing strawberry instead of vanilla in the same scenario seems perfectly fine. And that's just what seems to have happened in the study.

Joachim Horvath

Hi Clayton, thanks for your interesting suggestion! One thing that I do not yet understand, however, is why and how it would matter for the issue of intuitive expertise to probe for reasons for making the relevant judgments. Do you have anything particular in mind? Apart from that, if these really are intuitive judgments, asking for reasons might be problematic in various ways, e.g.: Do we typically have consciously accessible reasons for our intuitive judgments? How reliable are we in citing the reasons we actually had vs. some kind of post-hoc rationalizations? etc. So there might also be methodological reasons against asking for reasons here (maybe Alex can say more here!).

Alex Wiegmann

Hello,
Reading through the comments, I think there might have been a misunderstanding. It is not the case that a lot of subjects changed their mind when they moved from TwoOptions to SixOptions (or vice versa). On the contrary, most subjects (philosophers in particular) chose the same option in both scenarios (one could say that this “consistency” caused the order effect).
So if we look at the two left bars, where 32% chose the push option in TwoOptions and 66% in SixOptions, there is not a single subject who is included in both bars. Let’s apply this to the ice-cream example. One group of philosophers, group A, goes to an ice-cream shop A* where you can only choose between vanilla and chocolate, and most people prefer and choose vanilla. Another group, group B, goes to another shop B* where you have three options – vanilla, chocolate, and strawberry – and here most people choose chocolate (and only about 20% choose strawberry). If, after having eaten the first ice cream, group A now goes to shop B* and group B goes to shop A*, most people will make the same choice as before (neglecting the fact that with regard to ice cream most people like variety).

@Steve You make a very good point. One might complain that the intermediate options (2-5) should fall into the same category as “Do nothing” (because, for example, they can be justified by the principle of double effect – though I doubt that with regard to at least option 2). We did not count the intermediate options at all, neither as “Do nothing” nor as “Push”. If you want to add them to either category, you need something like a cut-off criterion that defines which options “stand in contrast” to each other. If you think, for example, that action vs. omission is the crucial criterion, you would add all the intermediate options to the “Push” category, making the difference between TwoOptions and SixOptions even bigger; if you believe that options 2-6 fall into one category and all of them stand in contrast to option 1 (the “push” option), then you should add all intermediate options to the “Do Nothing” category, making the effect smaller. The “good thing” is that the intermediate options were only chosen by a minority, and no matter whether or how you count them, the effect remains significant.

@Clayton and Angra, I will reply later (have to leave now)
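
To make the cut-off point in the reply to Steve concrete, here is a rough sketch; the counts are assumed from the percentages quoted in this thread (100 expert respondents, ~20% intermediate choices, 66% "push" among the extremes), not taken from the raw data.

```python
# Illustrative recomputation of the SixOptions "push" share under different ways of
# classifying the intermediate options 2-5. Counts are assumed, not the study's raw data.
n = 100                                      # hypothetical number of expert respondents
intermediate = 20                            # ~20% chose one of options 2-5
push = round(0.66 * (n - intermediate))      # 66% of the extreme answers -> 53
do_nothing = n - intermediate - push         # 27

excluded      = 100 * push / (push + do_nothing)  # intermediates dropped (analysis used): ~66%
as_push       = 100 * (push + intermediate) / n   # action vs. omission criterion: ~73%
as_do_nothing = 100 * push / n                    # option 1 vs. everything else: ~53%

# Each figure remains well above the 32% who chose "push" in TwoOptions.
print(round(excluded), round(as_push), round(as_do_nothing))
```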

Alex Wiegmann

@ Angra. I agree that we should not expect philosophers to be immune to order effects or other biases, and I also think that such results should not make us too skeptical. However, according to the expertise defense we should expect philosophers to be less susceptible than lay people – and this was not the case.
Here is a study that might suggest that philosophical training indeed has positive effects: its authors found that philosophy students, in contrast to other students, did not accept pseudo-explanations (circular ones, etc.). Hopkins, E. J., Weisberg, D. S., & Taylor, J. C. (2016). The seductive allure is a reductive allure: People prefer scientific explanations that contain logically irrelevant reductive information. Cognition, 155, 67-76.
@Clayton. Interesting suggestion. We did not do it this time, but I think Schwitzgebel and Cushman did something in this vein.
