Comments


Josh May

Thanks for bringing up this topic in a great post, Wesley! I'm perhaps not as concerned about this issue, for several reasons I'll quickly sketch below. That said, I am a *bit* worried, and I of course welcome ideas for how to improve.

(1) Some researchers have taken steps to control for such issues, and not just in the last year or two. Josh Greene and his collaborators, for example, test in their "Pushing Moral Buttons" (2009) paper for what they call "unconscious realism": "a tendency to unconsciously replace a moral dilemma’s unrealistic assumptions with more realistic ones" (p. 365).

https://dash.harvard.edu/bitstream/handle/1/4264763/Greene_MoralButtons.pdf?sequence=2

Here's an example: "Subjects estimated the likelihood (0–100%) that the consequences of Joe’s action would be (a) as described in the dilemma (five lives saved at the cost of one), (b) worse than this, or (c) better than this." (p. 366).

(2) As you say, another way to improve is to come up with more realistic cases in the first place. Some researchers have already been doing this and finding similar results; I agree that Miller et al. (2014) is an excellent example. Results like those make me less worried that more realistic cases in other areas will yield substantially different findings. Of course, they might. But the worry about ecological validity is, after all, ultimately an empirical one, so it doesn't make much sense to me to treat it as a major problem in a particular area without data suggesting that it is.

Wesley Buckwalter

Hey Josh,

I think you make a really good point about the improvements Greene and collaborators have made in that particular set of experiments. At the same time, it’s hard not to still have a lot of questions about what is going on in trolley processing more generally if, as has recently been suggested, answers don’t end up measuring commitment to consequentialism. The study by Miller et al. (2014) is one of my favorites, but it isn't enough for me to infer whether realistic cases in other areas will or will not yield substantially different results. And of course, recent data do suggest this issue could be a big problem for other areas of research, such as free will. I take these two examples to show that how and when this is an issue will vary with specific thought experiments and judgments -- which worries me, because we have so many in philosophy and there are basically no procedures or limits on constructing them to try to get the answer you want.

Eddy Nahmias

Hi Wesley, great post. This is a problem I've worried about a lot and dealt with from both sides of the issue, as it were. I have some old studies on trolley problems, done with Bradley Thomas and Dylan Murray, that we never got around to publishing. We tried to create more realistic pushing cases (using automatic braking systems) and less realistic switch cases (using weird loops), hoping to show that people (a) would make believability judgments and, more importantly I think, judgments about the agent's being *justified* in believing the action would work to save a net four lives, and (b) would make moral permissibility judgments that tracked their epistemic judgments (and hence would go way up for the pushing case and way down for the switch case).

Instead, what we found was that people's epistemic judgments seemed to track their moral judgments. That is, they still found the pushing case equally impermissible and also judged it unbelievable (and vice versa for switch), controlling for order of questions. It might seem, then, that our cases simply failed to be more (or less) believable. But when we took out the moral features (by using crash-test dummies and bags of luggage instead), the believability ratings did move in the right directions.

There are several interpretations of these results, but one is that people are rationalizing their moral judgments with their epistemic judgments (higher stakes raising epistemic standards might also be involved).

You also discuss my collaborators' and my determinism and neuro-prediction cases. I share the worry that some participants may be rejecting the stipulations of the case because of implicit or explicit commitments to indeterministic, libertarian, or dualist beliefs, and I hope we can find ways to better test for this possibility (as you know, I don't think the ways you, Rose, and Nichols tested it effectively show that most people are 'intruding' libertarian commitments into the scenarios). It's important to note that when we explicitly asked people whether it is possible for the neuro-prediction technology (allowing perfect prediction of decisions and actions based on prior brain activity) to exist in the future, 80% said yes. And when we asked why or why not, we found almost no one saying that it is impossible because humans have free will (or because we have a non-physical soul, or because indeterminism is true), as it seems more people would if they were committed to such beliefs. Instead, the 20% who say it's impossible talk about breakdowns in the technology, moral or political constraints on developing it, or the complexity of the human brain.

Finally, I've long thought the thought experiments used in phil mind to challenge physicalism and/or functionalism are really problematic for the reasons you suggest. My former student Toni Adleberg wrote a great thesis on this issue, suggesting that the best explanation of these thought experiments involves conflicts between our agency-detection mechanisms (or maybe theory of mind) and our physical/mechanistic explanation systems. The work by Tony Jack and collaborators suggests something similar (as does some work by Brian Talbot). I think something similar accounts for some of the results in free will x-phi.

John Turri

Great to see this being discussed! Just some general observations, in response to the closing questions of the OP:

Unnecessarily long, complicated, or unrealistic materials threaten to cause trouble in many ways. At the very least, they raise questions about reliability and external validity. Assuming that you're measuring what you want to, do you want to decrease random error in the measurement? Shorter and simpler materials decrease random error by minimizing opportunity for distraction, fatigue, and confusion, among other things. Do the findings generalize to other contexts, especially everyday situations where the concepts have their home? More realistic materials increase the extent to which this is true.

Manipulation checks are extremely helpful and can usually be included in a way that doesn't influence people's responses on the variable of interest. You spend time crafting short, simple, realistic, and tightly matched stimuli that, on the face of it, effectively manipulate the independent variable. But however plausible this seems to you, and however silly it would be for people not to get it, it's wise to check that people understood things in the intended way. If someone (a reviewer, say) asks whether your manipulation was effective, you can plead plausibility. Or you can just let the data do the talking. The latter is definitely preferable, but it's possible only if you have the data.

Just my $.02!
