
01/21/2014

Comments


Josh May

Allow me to kick off the discussion by saying that this is a really great paper! That's all I really have to say.

Gunnar Björnsson

I have found your work on the relation between attributions of knowledge and belief really interesting. This looks interesting too, but I do have a few worries; let me mention two.

First, and perhaps most importantly, I don’t yet quite see how this is testing for internalist intuitions. Most people in the internalist debate – internalists and externalists alike – think that people in general have strong default expectations that belief that one (morally, rationally) ought to do something will come with some motivation. Suppose that this is right. Then if A is told that B lacks all motivation to do a certain thing, we should expect A not to attribute to B the belief that B ought to do it unless A has strong positive evidence that B has the belief in question. This, I think, is common ground between internalists and externalists. But now I am a little bit unsure why anything more than this is needed to explain subjects’ reluctance to attribute ‘thick’ moral belief to agents without motivation.

It seems to me that for subjects’ reluctance to attribute moral belief to reveal internalist inclinations, the cases in question would have to be ones where motivation is clearly absent but all other evidence would have suggested that the agent in question has the belief. Such cases should thus strongly indicate that the agent has the relevant purely cognitive (non-motivational) dispositions associated with moral belief (in particular the disposition to make an explicit judgment naturally expressed as “I ought to do such-and-such”). But I don’t see anything in your vignettes that does this. Am I missing something? (My colleagues John Eriksson, Caj Strandberg, Ragnar Francén Olinder, Fredrik Björklund and I try one way of avoiding this problem in a paper forthcoming in Philosophical Psychology, “Motivational Internalism and Folk Intuitions,” https://www.academia.edu/5823340/Motivational_Internalism_and_Folk_Intuitions.)

If subjects withhold attributions of thick belief because they operate with a default expectation that moral belief comes with motivation, then this withholding provides no evidence that they are intuitive INTERNALISTS about THICK moral belief.

A second worry concerns attributions of thin belief. Here, I am not so sure that subjects’ willingness to say that the agents in question “think at least on some level that they ought to this-or-that” provides evidence that subjects are intuitive EXTERNALISTS about THIN moral belief. I worry that this sort of question isn’t tracking attributions of thin belief in the sense talked about by Dretske, i.e., as a mere holding true. For example, subjects might be attributing “thoughts that this-or-that on at least some level” because agents in these scenarios used to have the beliefs in question and so might still be able to take their old point of view and think their old thoughts (on a nostalgic level, as it were). But agents can do this without therefore actually and presently holding their old thoughts to be true. So why should we think that the question here actually tracks thin belief (in the Dretske sense)?

Wesley Buckwalter

Hey Josh,

Thanks for the positive review!

Hey Gunnar,

Thanks for sharing these worries about the way the paper goes about testing support for internalism and externalism.

For those who haven't read it, the basic idea of the paper is that if we present people with some cases that have been central to the debate about moral motivation, like Michael Stocker’s “Jaded Politician”, but change slightly the way we question them about those cases, we see big differences in their intuitions about the characters' beliefs. So for instance, one adaptation involved a case concerning lying. Specifically, it featured an employee of a sandwich shop who was an extremely conscientious and honest worker but slowly became jaded after many years of service. At the end of the story, it’s made clear that he has no motivation to tell the truth, and in fact doesn't tell the truth, even though other evidence suggests he should (for instance, “He remembers his past and all the reasons why he should be honest.”).

In this case participants agree that he has no motivation to tell the truth. But then they differ with respect to his moral beliefs. When they are asked in terms of thin belief (or whether he “thinks” he should tell the truth), they say “Yes.” Then when they are asked in terms of thick belief (or whether he “believes” he should tell the truth), they say “No.” We take the former as an intuition against motivational internalism (since it's exactly the situation that's supposed to be impossible if many versions of motivational internalism are correct). We take the latter as an intuition supporting motivational internalism (since switching to the thick belief probe completely reverses views from the thin belief case, with the majority now withholding belief).

In other words, we argue, it looks like you can flip the intuitions predicted by either view by cuing one of these two different conceptions of belief.

I should also mention that while we saw this effect in a number of different contexts, we also saw some variation in belief judgments depending on the type of belief featured. For instance, people were more likely to lean externalist in cases involving charity than lying. We speculate about why this might be in the paper. But overall I think it really emphasizes the need to use a variety of judgments when testing intuitive support for these views. Our point is that over a variety of contexts, the question of intuitive support might really come down to a question about conceptions of belief.
