
01/27/2014

Comments


John Turri

Hi Gunnar,

I was unable to understand your "worries" about how my and Wesley's results supported our conclusions. Could you maybe try to re-state the worries, so that I might benefit from your perspective?

Thanks,
John

Gunnar Björnsson

Hi John,

Apologies for being unclear. I'll see if I can do better this time.

I mentioned two worries.

The first concerns what sort of connection between moral belief and motivation you test for. Motivational internalism (in its simplest forms) postulates that, by conceptual necessity, whenever there is a moral belief of the relevant kind, there is corresponding motivation. Externalists deny this, but they agree that we can *generally expect* people to be motivated to do what they think that they ought to do (though they insist, of course, that there are or can be exceptions to this general tendency). If we want to check for intuitions resulting from an internalist concept of moral belief, we thus need to be sure that we are not just testing for intuitions resulting from this general expectation. My worry is that your vignettes plus questions fail to distinguish between these two kinds of intuitions.

You present a case with an agent who lacks motivation to φ and ask subjects whether that agent believes that he ought to φ (or whether, on some level, he thinks that he ought to). If I understand you correctly, you want to take reluctance to answer these questions in the positive to indicate internalist commitments on the part of subjects, and your suggestion is that subjects are (more) internalist about thick belief than about thin belief. But how should a subject answer that question given that she has the general expectation that moral judgments come with motivation? Well, given this expectation, she should take absence of motivation to provide (non-conclusive) evidence of absent belief. Whether people have internalist commitments or not, this expectation seems to be enough to generate reluctance to attribute belief in the cases you present. If so, such reluctance doesn't tell us that the subject has internalist commitments. That's the first worry.

The second worry (which might be less serious) concerned what the prompt for thin belief was actually tracking. The way you describe thin belief, with reference to Dretske, suggests that it involves holding true the relevant proposition. My worry is that talk about what someone thinks "at least on some level" need not be tracking what the person holds true, and so need not be tracking attributions of thin belief. In the cases you presented, the agents had at some point been motivated to do the right thing and presumably at that time thought that the action in question was the right thing to do. When I am thinking about whether, "at least on some level," such an agent thinks that he ought to do this, what springs to mind (insofar as my introspection is reliable here) is the agent's historical (thick) commitments and his memories of these, rather than what the agent presently holds true in the thin sense. (A related worry about belief reports comes from our study, where people were quite willing to attribute moral belief in explicit inverted commas cases. But for me at least, the existence of a pre-history of moral belief introduces a specific worry about attributions of thin belief, a worry that doesn't figure in the same way with respect to your knowledge-belief study.)

Please let me know if this makes my worries any clearer, or let me know where I lose you.

John Turri

Hello Gunnar,

Thanks very much. That does help. I do believe that both can be addressed by the data on hand.

Regarding the first point, it's worth emphasizing that we do more than present a case where the agent is described as lacking the relevant motivation. In addition to that, we're reporting results from people who pass the comprehension checks. So these are people who answer that this specific agent "no longer has any motivation" to perform the relevant action. And we observe a significant difference in attribution for thick/thin probes. This is not plausibly due to some generic expectation that, for some reason, swamps the specific information that participants themselves not only have but are actively using to answer questions. To come at it from a slightly different angle, consider that just about everyone passed the comprehension checks. If the generic expectation were driving participant performance, lots of people would have failed this comprehension check. But that's not what happened.

Regarding the second point, I can see how there might be something to that concern in relation to these specific scenarios, but this is actually not really a problem for us, for two reasons. First, we used stimuli that had a historical dimension precisely because this is where much of the critical discussion in the literature has focused. We ourselves note that the historical dimension seems to be relevant, not only in the abstract but in light of interestingly different patterns across some of our studies. We even propose a hypothesis to explain this difference, in terms of perfect versus imperfect duties, as they relate to past performance. Second, and more importantly, we know from other work that the historical dimension isn't really what drives thin-belief attribution. Besides, even granting that there is a bit of that going on, it can't explain the results. For instance, it could not explain the extremely large difference in attribution between Thin Liar and Thick Liar in Experiment 3.

Gunnar Björnsson

Thanks John, that helps me see where I am not making myself understood.

So, to clarify: My first worry doesn't concern the difference you get between thin/thick probes. Clearly those probes probe for different things – thus far I am on board. Instead, the worry concerns whether the probes are probes for *internalist* intuitions (i.e. intuitions explained by the existence of a *necessary* link between moral belief and motivation, as opposed to a reliable but non-necessary tendency for moral beliefs to come with motivation). Much of the metaethical debate about internalism has been concerned with understanding which of these two sorts of connections obtain between moral judgment and motivation, so from the point of view of that debate, this is a crucial distinction.

Regarding the second worry, it again doesn't concern the difference you see between thick and thin probes. Instead, it concerns whether your probes are tracking attributions of thin belief in the Dretske sense, or perhaps tracking something else. If I understand the way you draw the distinction between thin and thick belief, it concerns two different kinds of attitudes one can have to a content. Thin belief is a mere holding true, whereas thick belief involves more, in particular dispositions to act on the belief. Generally, my worry is that the use of the weakening qualifier "on some level" opens up ways of thinking that do not involve straightforwardly holding true the content in question.

In the cases you use in these experiments, it seemed to me that it might be tracking a historical, perhaps nostalgic, way for the agent to think about his previous (thick) commitments. Since you bring up the case of thin and thick liar, I should say that my worry there is more that the content believed changes between probes for thin and thick belief: that in the thick case, Michael is understood as not thinking that he ought to tell the truth *all things considered* (because he has been treated badly); in the thin case, he is understood as thinking that as an employee, he has a *prima facie* obligation to tell his employer the truth about how much overtime he works (but that because of the lack of respect on the part of the employer, this is not an obligation all things considered).

For all these cases, though, the worry is that thin/thick probes fail to track attributions of thin/thick beliefs with the same contents: the thin probe might not track a holding true, or might track a holding true of something other than what the thick probe is tracking.

(By the way, it's getting late in this time zone, so apologies in advance if I will be slow in moderating comments for the next few hours.)

John Turri

Hi Gunnar,

Please bear in mind that we're operating under the general assumption that, as Hume proposed, a long-enduring, purportedly conceptual dispute between highly intelligent parties is probably best explained by ambiguity. That is, the two sides are presumably talking past one another.

If people attribute moral belief despite attributing absolutely zero motivation, then they're doing something that internalists reject. At that point, internalists will presumably either say that they're talking about belief in some other sense, or accuse people of fumbling this mindreading task. We offer internalists a way of taking the first, less drastic alternative.

By contrast, if people refrain from attributing moral belief when attributing zero motivation, then they're doing something that internalists can well explain, in terms of a basic conceptual connection between moral belief and motivation.

The "on some level" phrase is intended primarily to allow for belief that is not conscious or occurrent. One purely speculative explanation –– not consistent with numerous other studies across several other papers on thin/thick belief –– is that people are attributing thin belief in some nonliteral or purely aspirational way. Another explanation is, as we propose, that this probe cues a category of belief that externalists have had in mind all along (i.e., thin belief). The latter explanation has the advantage of not only cohering with a considerable body of prior findings, it also allows for a charitable resolution of the internalist/externalist debate.

Gunnar Björnsson

Right; I get the proposal that the debate is explained by ambiguity, and various internalists (including high-profile non-cognitivists) have similarly thought that externalism is true about various (secondary or derived, they would say) kinds of moral judgment or belief. And I sympathise with the idea that enduring purportedly conceptual disputes between highly intelligent parties might be best explained by ambiguity, though the fact that intelligent parties take it to be a dispute might also suggest that it is indeed a dispute. (My former student Ragnar Francén Olinder http://goo.gl/vTwz7i has done highly interesting work on the ambiguity line, much of my work on issues of disagreement has been geared towards showing how it might make sense even if parties operate with different concepts, and my own preferred way http://goo.gl/N9ohVb of accounting for the connection between moral judgment and motivation acknowledges that some judgments unaccompanied by motivation might sensibly be understood as moral wrongness-judgments.) Indeed, one of my questions in this post was concerned with similarities in our results: we saw a difference between attributions of moral understanding and moral belief that seemed to match the difference you saw between answers to probes for thin and thick belief.

I also get that the ”on some level” locution was intended to allow for belief that is not conscious or occurrent. My second worry concerned whether this was all it would do when people are asked about whether agents who lack motivation to do something think that they ought to do it. I suggested that it might lead them to think about what the agent had historically believed, or about what the agent believes about his prima facie duties. You say that this is purely speculative and inconsistent with other studies of thin/thick belief. I agree that this is speculative: it is not based on specific studies, but primarily on my own interpretation of the questions. Still, I wonder what reason we have to think that subjects understand the questions in the way you intended. That’s what I have been asking for.

Now I understand you as answering that this interpretation, unlike the interpretations I have proposed as possibilities, coheres with or is consistent with prior studies. But I wonder about that. In fact, it seems to me that my proposals are as consistent with earlier results as the ”holding true” proposal.

Here’s why I think this. Suppose that I’m asked whether, *at least on some level*, Agent believes or thinks that P. The extra locution clearly allows for positive answers in more cases than the plain question of whether Agent believes or thinks that P. But what sort of alternative interpretations are likely to come to mind? Talk about ”levels” doesn’t have any obvious content here, so we can expect context to do quite some work in pointing us to what might not adequately be described, without qualification, as believing or thinking that P, but is suitably closely related. To see whether it points us to thin belief in the Dretske sense, we need to look at cases.

Start with an example from one of the prior studies that you take to support your interpretation of ascriptions of thin belief. Here, the context is one where Agent holds, on her parents’ authority, that the earth is at the center of the universe but has been a good student and writes on the physics exam that the earth revolves around the sun. A large majority of subjects were willing to say that *on some level*, Agent thinks that the earth revolves around the sun, and this might seem like a plausible thing to say in light of the fact that Agent knows that this is what physics says. But I don’t see that this is a clear case where Agent thinks that the earth revolves around the sun *in the Dretske sense of merely holding true*. Perhaps she thinks this ”on some level” in the sense that she suspects that it might be true, or in the sense that she accepts that it is supported by scientific evidence, or in the sense that she has received information to this effect and is capable of acting on it in the present (exam) context, or in the inverted commas sense that she thinks that this is what science says. Just as in the case of moral belief, I don’t see why we should assume that subjects who answer the thin belief probe in the positive attribute thin belief in the Dretske sense (assuming that I have understood what this sense is).

Likewise for the Dog case from ”Belief through thick and thin”, where a dog can respond to basic arithmetic questions by barking the right number of times: I don’t see why we should think that subjects who attribute thin belief that 2 + 2 = 4 are attributing thin belief in the Dretske sense in particular, rather than, say, a disposition to reliably act, under constrained circumstances, as if believing that 2 + 2 = 4. (Perhaps one would want to say that having a reliable disposition to act, under certain very constrained circumstances, as if believing that P *is* a kind of holding true that P. But then externalism about thin moral belief would fall far short of what externalists want and what internalists are eager to deny, and it is unclear why empirical investigations would be needed: everyone agrees that people might be disposed to behave *in some ways* as if having moral beliefs without having the corresponding motivation.)

Notice that I’m not denying that there might be cases where the Dretske interpretation is exactly right, nor am I completely ruling out that it is the right interpretation of the moral belief cases. But I don’t yet see any reason to think that it is: the interpretation you propose still seems to me as speculative as the ones I have proposed, and not better supported by earlier studies.

Though my worries about the interpretation of responses to the ”thin belief” probe remain, I think that our remarks have now begun to connect. But I feel that we are still talking past each other in relation to the first worry, i.e. my worry that your probes fail to measure *internalist* intuitions. Perhaps it might be helpful for me to distinguish between two things to test for:

INTERNALIST INTUITIONS: Intuitions expressive of an understanding of moral belief on which it conceptually or metaphysically requires the presence of motivation.

EXTERNALIST INTUITIONS: Intuitions expressive of an understanding of moral belief on which it is conceptually and metaphysically compatible with the complete absence of motivation.

If people seem to attribute moral belief in a case where they clearly do not attribute motivation, this is prima facie evidence that they have externalist intuitions. Now, it might be that the belief in question isn’t the sort of belief that metaethicists have been concerned with – perhaps it is merely a form of inverted commas belief (a worry intensified by one of our studies) – and perhaps subjects do not really think that all motivation is absent in the sense of ”motivation” that internalists have had in mind (we go to some lengths in our study to rule out this worry).

These two worries, I think, need to be taken seriously, but the worry that I have focused on doesn’t concern tests for externalist intuitions, but rather tests for *internalist* intuitions. The point I have been trying to make is that the mere fact that people withhold attributions of moral belief in a case where motivation is missing is not in itself evidence that people have internalist intuitions. Everyone, externalist and internalist, accepts that absence of motivation can be strong evidence that moral belief is absent. On an internalist view, the absence of motivation provides conclusive evidence of absent moral belief, but on the externalist view, it can be very strong evidence that the holding-true or judgment-making part of moral belief is absent (as Svavarsdottir and other externalists have insisted in explaining away seemingly internalist intuitions). One way of trying to avoid this problem – the way we try in our studies – is to make it as explicit as possible in the vignette that there is some holding true or making judgment going on of the sort characteristic of moral belief and judgment, without prejudging whether it constitutes a moral belief. If this is indeed made clear and people still do not attribute moral belief in absence of motivation, it would appear that people do operate with an independent indefeasible requirement that moral belief be accompanied by motivation.

My worry, then, has been that your studies fail to avoid this problem. Of course, whether it is a problem for you depends on whether you take your results to be independent evidence of internalism about thick belief. Your last reply makes me think that maybe you are satisfied to show that some clear cases of externalist intuitions are tied to thin belief.

Derek Leben

Hello to Wesley, John, and Gunnar, I hope it’s ok if I jump into this debate.

To Wesley and John:

Your paper presents a really awesome result. I think that you’re both right about the thick/thin difference in our folk attributions of 'belief.' Whenever I teach an intro class on anything having to do with epistemology, I ask students to name some of their beliefs, and the answers are always religious or political beliefs, but never 'my car is green.' In fact, when I list 'today is Tuesday' as one of my beliefs, non-philosophers are always puzzled! I assume this is why it sounds strange to say that 'some/most of Hitler's beliefs were true,' since of course he did believe that the sky is blue and that the sun is the center of the solar system, but I'd wager that most lay-people use 'belief' to describe pro-attitudes that (as you both describe) we like, feel strongly about, and actively promote. I also think that you've done a good job in demonstrating that thick/thin versions of 'belief' are playing an important role in the internalist/externalist divide.

However, if the disagreement between internalists and externalists is ‘merely verbal,’ then you would expect the differences in internalist or externalist judgments to come from a difference in wording (as you used in your experiment with ‘believe’ vs. ‘think at some level’) or a difference in the information presented in the story. Yet it seems like even when the wording and information are constant, there can be massive disagreement! For example, Strandberg and Bjorklund (2012) use a prompt that might suggest 'thin' belief (they ask whether an agent 'thinks' she has an obligation), but people give an almost even mix of internalist and externalist responses.

I think that identifying the thick/thin distinction alone does not fully explain the internalism/externalism divide; it just pushes it back to the thick/thin divide, and we’re left asking why some people (when the question and information are constant) give a ‘thin belief response’ and others give a ‘thick belief response’. Of course, I think it is still a really important discovery that the internalism/externalism debate is more general than just moral belief attribution, but it doesn’t show that the dispute is merely verbal.

To Gunnar:

Thanks for your comments; I’d like to address some of these criticisms of my paper (with Kristine Wilckens) as well as Wesley and John’s paper. The first criticism you had for Wesley and John’s paper (which also applies to our paper) is that it does not make explicit enough the fact that the agent has no motivation at all. However, I think that John’s response is correct: this was made explicit in the competence questions (which we also presented in our paper), which explicitly force the reader to acknowledge a complete lack of any motivation.

Second, you mention that we ask about the ‘possibility’ of belief, which might confuse the matter, and I think this is a fair point. We have run alternate (unpublished) versions without the ‘possibility’ wording and gotten similar results. In any case, if (as you say) you think that people are just attributing what beliefs they think the agent likely has, then the ‘possibility’ wording doesn’t matter at all, because whether an agent is likely to have a belief is exactly what we're trying to ask about. As for the use of Likert responses, these were used to run statistical tests like correlations and mediation analyses between evaluation judgments and belief attributions, which would have been impossible with a binary response design. We did run a yes/no version for our first experiment (and got the same effect), but not for the second one, which you and your colleagues are interested in, so I don’t know if that would change the results or not.
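As a purely illustrative aside, here is a minimal sketch of the kind of mediation analysis a graded scale makes possible, run on synthetic 7-point data. The variable names, sample size, and effect sizes are all hypothetical stand-ins, not the measures, code, or results from either paper.

```python
# Minimal sketch, assuming synthetic data: a Baron-Kenny-style mediation check
# on 7-point Likert responses. All names and numbers are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical 7-point ratings: an evaluation judgment (predictor), a belief
# attribution treated as the mediator, and an overall attribution rating (outcome).
evaluation = rng.integers(1, 8, n).astype(float)
belief = np.clip(np.round(0.6 * evaluation + rng.normal(0, 1, n)), 1, 7)
outcome = np.clip(np.round(0.5 * belief + 0.2 * evaluation + rng.normal(0, 1, n)), 1, 7)

def ols(y, x):
    # Ordinary least squares regression with an intercept term.
    return sm.OLS(y, sm.add_constant(x)).fit()

total = ols(outcome, evaluation)                              # path c
a_path = ols(belief, evaluation)                              # path a
direct = ols(outcome, np.column_stack([evaluation, belief]))  # paths c' and b

print("total effect (c):    ", round(total.params[1], 3))
print("direct effect (c'):  ", round(direct.params[1], 3))
print("indirect effect (ab):", round(a_path.params[1] * direct.params[2], 3))
```

The regression paths in the sketch rely on graded variation in the responses, which is the design consideration mentioned above for preferring Likert responses to a yes/no item for these particular tests.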

As for why we got different results for the Nichols-style experiment, I think that there are a lot of differences between our versions of the experiment besides just the Likert scale and the use of ‘possible,’ so it’s unsurprising that they turned out so differently! The most important difference is that in your experiments, Anna is a psychopath (though not called one in the story), and it’s described in great detail to the reader how her beliefs never influence her motivation. This is far greater detail than in Nichols’ experiment, and I assume that when people read ‘psychopath’ in his experiment they might have just interpreted this to mean ‘evil guy,’ which I would think is what most lay-people associate with that term. In Leben and Wilckens, both the characters were presumably normal people and no history is given for either of them. I think your results do point to the interesting possibility that the agent’s mental states and the attributor’s evaluation may both play a role, and I’d actually like to continue this thought in another post where I describe some recent follow-up work I’ve done on our last experiment.

John Turri

Hi Gunnar,

I think you're really over-stating the interpretative worry here. "On some level" is a perfectly ordinary phrase. It's used in ordinary language and philosophers use it in their published work. I've never had anyone ask me, "Now, what does that mean, 'On some level'?" Grant for the sake of argument that, in some contexts, it might end up "pointing" to something that, in the course of some ongoing debates, some philosophers won't want to call a kind of "belief" but other philosophers will. That would be a merely verbal dispute. But, then again, that's exactly our point.

As for what you say is involved in probing for "internalist" versus "externalist" intuitions, I would merely reiterate that the people in our studies themselves agreed that the agent has no motivation to act. Basically everyone answered this way, so we obviously made it quite clear that the person lacks moral motivation. There is nothing else to do.

In cases where everyone agrees that the person lacks motivation, if people are still happy to attribute the moral attitude nonetheless, then either they're incompetently applying an internalist conception of moral motivation, or they're competently applying an externalist conception. The latter is obviously far more charitable and, thus, the presumptive explanation.

Wesley Buckwalter

Hey Gunnar,

Thanks for sharing so many thoughts about the paper and its relation to prior findings! Let me just quickly respond to what seems to be the most pressing worry about testing intuitive support for internalism. If I understand you correctly, I take your point that not just any old pattern of moral motivational/belief denial is going to be all that suggestive of ordinary support for internalism. Perhaps there’s some basic or non-revealing level of mind reading where such patterns wouldn’t be too illuminating. Avoiding this in tests, as you say, should include evidence of “holding true or making judgment going on of the sort characteristic of moral belief and judgment, without prejudging whether it constitutes a moral belief.”

Despite having modeled our cases on central ones in the literature, we were worried about this too. In addition to John’s comments above, I also think that this is partially addressed by one of our experimental controls (the elitist politician). Specifically, the huge differences between the elitist and very jaded politician cases seem to show that the information in the vignettes about the politician’s prior support and youthful activism does supply the crucial “holding true” evidence needed for an illuminating test of internalist intuitions, and that participants are not applying the terms in a merely aspirational or projectivist way. (Incidentally, this comparison also really effectively demonstrates the thick/thin belief distinction, and rules out some worries about the nature of thin belief.)

Generally though, I will note that there is bound to be some trade-off here. In your studies, you go to tremendous efforts of both length and complexity to avoid your general worry. Of course, with length and complexity comes a much greater risk of bias and interpretive problems. We opted for the somewhat simpler approach, above. Perhaps the strengths and weaknesses of both approaches offer some great convergent evidence for internalist intuitions. The only place we differ is that John and I think internalist intuitions don’t tell the full story of intuitive support.

Wesley

Gunnar Björnsson

Hi Derek, thanks for chiming in. We all seem to share the sense that everyday talk of belief is rich in some way, that it involves some sort of endorsement or commitment. Talk about someone’s belief or beliefs using the noun ”belief” seems especially prone to trigger the sorts of reactions you mention, and to be typically restricted to matters political, moral, or religious. At the same time, attributions of belief using ”believes that” seem less confined. At least it is commonly used in cases where we recognize uncertainty or controversy, as we might naturally say that someone believes that the Euro will survive the crisis, that the GOP will eventually accept gay marriage, or that Xabi Alonso has already decided to leave Real Madrid. My sense is that all these cases involve a personal commitment or endorsement, going beyond what is straightforwardly given by the evidence. Becoming clearer about what goes on here would be very helpful.

Also, thanks for addressing some of my worries. You are right that the control questions go some way to address worries about motivation, and I didn’t spell out the sort of worry I had in mind more specifically. The problem I am sometimes worrying about here is that the everyday notion of ”motivation” might not capture what at least some internalists have had in mind. Internalists have typically had in mind something other than actually being or feeling moved to do something: the relevant state is one that will move one under normal circumstances (when one is thinking clearly, not afflicted by general listlessness, etc). But in everyday talk, saying that someone is motivated to do something typically has more implications, and I might even say, colloquially, that I have no motivation to do what I am currently doing. Of course, we did go to some lengths to make salient the complete absence of motivation even in weaker philosophical senses of motivation, and much of the disagreement seen in other studies remained, suggesting that simpler formulations might suffice for certain purposes at least.

It is interesting to hear that attributions of belief and questions about the possibility of belief yielded very similar results. Worries about this – worries that we would miss out on the relevant modal element of internalism – were what led us to a design where we could hope to capture the modal element without actually using modal locutions. But there are some drawbacks with that design too, naturally: to make plausible that there is a non-defeasible requirement of motivation, we needed to make sure not only that motivation is absent but also that people get that the non-motivational features commonly associated with moral belief are in place. One might worry that this makes the vignette too complicated for people, or introduces other problems. We tried to control for some such problems, but there are no doubt more.

Gunnar Björnsson

Hi John,

I don’t deny that ”on some level” is a perfectly ordinary phrase, or claim that it causes difficult interpretive problems. I think that when we use it, it will typically be *clear enough* in context what we have in mind. What I wonder is why I should think that, in the context where you use it, it carries the precise content needed for your thin belief probes to probe for thin belief in the specific sense that I took you to be after: a mere holding true. The reason that I wondered was that, intuitively, in those contexts, the locution seemed to me to carry contents other than that. Since I found your claim really interesting – it would complement our findings in a really nice way and perhaps explain the difference in attributions of belief and understanding that we had come across – I hoped to hear more about why your proposed interpretation would be the one that most subjects went for.

But your latest answer suggests that you don’t care whether this is how subjects understand the locution. Then I have misunderstood the exact nature of your claim. Perhaps your claim is merely that when people are asked whether, at least on some level, an agent thinks that he ought to do something, the state of mind they will consider tends to be one that they take to be compatible with the absence of motivation: they are externalists about that state, whatever it is. Then that’s fine, though it seems to me that the implications of your results will be less clear in the absence of a clear account of what that state is.

Regarding the worry about missing motivation that I mentioned in passing in my previous reply to you, I tried to spell it out a little more in my comment to Derek (second paragraph). But I should add here that I think that your Very Jaded Politician case goes quite some way to address this worry.

Gunnar Björnsson

Hey Wesley, thanks for contributing! I also think that our studies are complementary – which is why I’m fretting over just how much I can take away from your study and how much it might leave open.

I think that responses to the Elitist Politician case effectively deal with the sort of worry that you mention in the paper, i.e. the worry that on some level everyone thinks that we ought to help the poor. But it is less clear to me that the responses rule out that subjects understand thin belief attributions in the other cases in line with the interpretations I had found most salient:

(a) (in the politician cases) the agent’s retrospectively accessible prior thick beliefs

(b) (in the liar case) the belief that the agent has a mere prima facie obligation not to lie to the employer (as opposed to an all-things-considered obligation, as the agent thinks that the employer has mistreated him).

Judging from the story of the Elitist Politician, EP (a) has no prior thick belief to access, and (b) accepts no general principles of helping that fail to apply on the occasion. Consequently, subjects would have no reason to ascribe thin belief to the EP on either of these interpretations. So it is unclear how these proposed readings would be ruled out by responses here.

By the way, you mention worries about bias in connection to our complex vignettes. If you have specific worries, I’d be very interested in hearing what you have in mind.

(edited for clarity)

john.turri@gmail.com

Thanks, Derek!

If there is one category of belief that does require motivation and one category that doesn't, then there are two ways that this could bear on the internalist/externalist debate. On the one hand, we could interpret each side as reporting intuitions keyed to one or the other category of belief. On the other hand, we could interpret them as talking about the very same category, with one side flat-out misunderstanding it. I know which of these seems the presumptive choice to me!

To that extent, I do think it shows the debate to be pretty much verbal. (There is actually one aspect of this debate that I'm not convinced is merely verbal, but it's a completely separate issue.)

PS: You suggest that some caution is warranted in light of the ambivalent responses that Strandberg and Bjorklund (2012) observed. Would you mind just copy-and-pasting or linking to the stimuli real quickly?

john.turri@gmail.com

Hi Gunnar,

For some reason, we're just not clicking here. In no way did I suggest that I don't care how people understand the locution. Please allow me to try once more.

The verb "think" picks out an assertive representational propositional attitude in sentences like "he thinks that it's 4 o'clock" or in "at least on some level, he thinks that he ought to help her out." This is the plain meaning of the term. Ordinary people use it this way. Philosophers use it this way. Even developmental psychologists interpret very young children's usage in this way.

Now, you seem to be asking for an argument in favor of that interpretation. But I don't see why we should be expected to argue that these words are being interpreted in the ordinary way, absent positive evidence that they're being used otherwise. You claim to detect the possibility of various other implications or content, and you ask why you should think that people aren't responding in terms of these other implications or content. My answer is two-fold: (1) My linguistic intuitions tell me that this is a very remote possibility, and, more importantly, (2) there is now a pretty robust experimental track-record, all of which is very well explained by people interpreting the relevant words in the ordinary way.

In your response to Wesley up-thread, you offered two separate alternative interpretations of the question in order to maintain a skeptical stance toward the presumptive explanation of words being used and understood in the ordinary way. And that was just for two experiments in this paper. In other papers, we've used a wide range of different examples too. It seems to me that the "alternative interpretation" strategy is unsustainable.

Gunnar Björnsson

Hi John,

Yeah, this has been more difficult than I expected and I suspect that we have long since exhausted the patience of any would-be readers. Thanks though for giving it another try. Also, apologies for getting your last answer wrong; I thought you were saying that you would get the upshot you wanted even if ”on some level X thinks that” picks out the sort of things I suspect it picks out. Back, then, to that issue.

Symptomatically, my response too begins with a clarification of an apparent misunderstanding. My worries do not concern the locution ”thinks that”; as I’ve said, they concern the job of ”on some level”. The locution can operate on a variety of expressions: we might ask whether someone we disagree with is nevertheless right on some level, and ask whether, on some level, someone enjoyed an ordeal. We might also say that someone was relieved on some level, disappointed on another, or appreciates his parents’ sacrifices on some level, but is angry with them for not caring on another. If I’m getting you right, you think that when this locution operates on the ”thinks that” locution, it has a definite effect: it leaves the content indicated by the that-clause intact (compared to the corresponding expression without ”on some level”) and leaves the attitude towards that content a mere holding true, stripping away further commitments. To me, it seems that the locution can do and does other things too, when attached to ”thinks that” and in other contexts: it can change the content (”I guess that’s right on some level” ≈ ”I guess that’s right if understood in a certain way, [a way that’s different from what first comes to mind, or different from what’s most centrally relevant in the context]”) or indicate that the attitude in question is in some sense partial rather than all-told (”he enjoyed it on some level” ≈ ”there was an element of enjoyment [not implying that it was enjoyable all told]”). These were also the sorts of interpretations that came to mind when I read the cases and questions you work with in this paper and the paper with Rose on belief and knowledge.

Now, you have two replies. The first is that your linguistic intuitions tell you that my readings are very remote possibilities. I think that your linguistic intuitions carry some weight, and this gives me some reason to bracket my own intuitions a little. Of course, it is easy to have one’s interpretation coloured by one’s theory, but I have theories too (though perhaps not ones with very clear implications in this case) and have the epistemological disadvantage of not being a native speaker of English. Still, I think that the general pattern of use of the locution suggests that the possibilities are not so remote.

The second reply is that there is a pretty robust experimental track-record ”very well explained by people interpreting the relevant words in the ordinary way”. Of course, I’m inclined to agree with this, though we have different views about what the *ordinary way* of understanding these words is. We probably agree sufficiently about the ordinary interpretation of ”thinks that”, but we disagree about ”on some level”. Now, as you also say, I’m providing different interpretations for different cases, and you think that my ”alternative interpretation strategy” is unsustainable, presumably because it is less general than yours and ad hoc.

Here I should first say that I don’t have a *strategy*, strictly speaking, because I don’t have a goal: I’m not trying to save some alternative theory, and your hypothesis is compatible with my other commitments. In earlier writing I have myself proposed that the conceptual question itself cuts little ice, exactly because there might be different conceptual commitments, and that more fruitful questions concern the nature of actual moral thinking. Rather, what has been driving my questions is that I have an intuitive sense of how I understand the thin belief probes in your studies, and these understandings seem to diverge from the understanding that you operate with. Moreover, looking at the wider use of ”on some level” locutions, it seems to me that analogues of my way of understanding the thin belief probes are well represented. So, apart from your intuition that what strikes me as the most natural interpretations are generally farfetched (which, I admit, carries some weight), I don’t yet see why subjects wouldn’t read thin belief probes my way, or why they would read them in the Dretske sense of merely holding true.

Of course, the fact that a variety of uses of ”on some level” figure in ordinary language doesn’t show that the Dretske sense of thin belief isn’t the one operative in subjects when they respond to your probes. Perhaps there is a highly plausible general account of how ”on some level” works that predicts the intended reading when applied to the cases you are working with. I've tried to see what such an account might be but haven't found one yet.

Wesley Buckwalter

Hey Gunnar,

Thanks again for restating your worries. With respect to the “holding true” worry, I guess I am still genuinely puzzled. Our point is that people are both willing and unwilling to attribute belief absent motivation depending on the conception of belief applied. So I don’t see how the concerns you raise for the internalist test regarding a reluctance of participants to attribute belief get off the ground. They are willing to attribute belief. As for your indiscriminate thin belief worry, while it’s an interesting possibility, I just think it doesn’t cohere with prior findings in several papers now on this distinction---this paper on Moral Motivation, our Nous paper on entailment you mention above http://onlinelibrary.wiley.com/doi/10.1111/nous.12048/abstract or another paper we’ve done on delusional attitudes http://philpapers.org/rec/ROSWWS. I’m not sure what else to say on this front that will be too helpful, other than that people who are curious should definitely check these papers out and decide for themselves if there’s any evidence to support your worry. Thanks again,

Wesley

Gunnar Björnsson

Hey Wesley,

I agree with your point that people are both willing and unwilling to attribute belief (or belief-related states, depending on how ”belief” is understood) absent motivation. Indeed, this is closely related to what we found in our studies, where people were more willing to attribute ”understanding” than to attribute ”belief”, which was one of the hunches leading us to ask for attributions of related states. I also agree that *willingness* to attribute belief in the absence of motivation is evidence that people have *externalist* intuitions. My question concerned whether *unwillingness* to attribute belief would be evidence that people have *internalist* intuitions. The worry here was that people might be unwilling to attribute belief not because they accept an internalist, conceptual, requirement that moral belief motivates, but because they take the absence of motivation to provide strong prima facie evidence for the absence of sincerely held moral belief, i.e. because of something that externalists are happy to acknowledge. Now, I think that *there is* evidence that people have internalist intuitions about the sort of state(s) tracked by questions about whether someone believes that such-and-such: I think that the studies we present in our paper provide such evidence, as we try to make sure that non-motivational aspects of moral judgment are represented in the scenario. My concern was with whether the results in your study provided additional evidence to that effect, thus corroborating our findings.

With respect to my concerns about what your probes for thin belief reveal, I think that you are right that we have reached the end of the road here. I look at the probes used in your various studies and my impression is that what is picked out varies and doesn’t track some unified kind of state of holding true, but rather sometimes relations to earlier holdings-true, sometimes holdings-true of a different content than what is picked out by the thick belief probe, sometimes one sort of restricted behavioral competence associated with paradigmatic cases of believing, and sometimes another. I also think that much of this variation is displayed by uses of ”on some level” in relation to locutions other than ”thinks that”. You have a different impression of how your thin belief probes will be interpreted. I think that's fair enough. Even if you guys don't think that my worries are any cause for concern or need to be followed up, I do find it helpful to know the source of our differences.

And as I said before, I am curious to hear more about your worries about biasing subjects in relation to the complex vignettes used in our studies. Bias is of course always a worry (and a problem that we were anxious to avoid), but I got the impression that you might have had something more definite in mind. If so, I’m all ears, as that would be something I'd want to follow up.

Wesley Buckwalter

Hey Gunnar,

Thanks again for clarifying. Switching gears a little, I didn't mean to imply earlier that there was some specific bias in the longer stimuli that you used in your experiments. I was only suggesting that, when it comes to specifying protagonists' prior states, there seems to be a trade-off where too little or too much specification might impede the test for internalist support. But now that you ask, I do have a few questions about your paper:

You observed in several studies that participants answered belief questions at rates that would be expected simply by chance. Could you say a little bit more about how that particular finding supports internalism in your view over externalism?

Could you explain the purpose of the inner struggle study? You say it was conducted to rule out a worry that subjects of previous studies weren’t paying attention when taking those studies. But how could a completely different study accomplish this, or rule out other explanations of chance rates?

What do you make of the listless case, one of the strongest results of the paper, in which 70% of participants attribute moral belief without any motivation?

For the second strongest result, the no reason case, you get 64% denying belief. Stacking that up against the listless case, and the other results at chance, it seems like your evidence is pretty split over which view has intuitive support, is that correct?

Almost all of the studies included some variant of a very long set up that described protagonists as highly abnormal, unfeeling psychopaths who ‘classifies actions using expressions like ‘morally right’ and ‘morally wrong’’ but that this doesn’t “in any way influence her choices”. Derek points out above that maybe the specific psychopath details are playing a role in people’s judgments. I was wondering, were you worried that saying the protagonists make moral classifications and then asking about their moral beliefs or their moral understanding, while a staple of the philosophical debate, might genuinely confuse people?

Thanks again, looking forward to you setting me straight on some of these questions!

Wesley

Gunnar Björnsson

Thanks Wesley, those are good questions – much appreciated! I've been travelling and have a tight schedule until tomorrow night, but hope to be back with answers then. Cheers,

Gunnar

Gunnar Björnsson

Hey Wesley,

Sorry for the delay, and thanks for your thoughtful questions. Let's see if I can assuage some of your worries.

You asked: ”You observed in several studies that participants answered belief questions at rates that would be expected simply by chance. Could you say a little bit more about how that particular finding supports internalism in your view over externalism?”

First, I should say that we do not take the results to show that people in general have a single uniform conception of moral beliefs (or more precisely beliefs about moral wrongness, which is our specific focus) that is internalist in nature. At least some of us authors think that people might have different conceptions of various closely related kinds of states, and that different conceptions might be more at the forefront for some people than for others.

With that said, there are three steps to the argument. First, we find it plausible that people understand the vignettes in the intended way, i.e. as involving the characteristic cognitive processes of moral judgment but no hint of moral motivation. (More about that assumption in connection to another question of yours.)

Second, in the scenarios where motivation was completely absent (as opposed to temporarily suppressed or disengaged), between 54 and 64% of subjects withheld attributions of wrongness belief. Given the first assumption, it is unclear why anyone would withhold belief attribution unless they took motivation to be a necessary requisite of moral belief. If people merely took the absence of motivation to provide prima facie evidence of absent moral judgment (as externalists claim), the explicit mention of the cognitive processes associated with moral belief should have counteracted this. Based on this, then, we seem to have pretty good reason to think that a majority operates with an easily triggered internalist conception of moral belief.

Third, we also think that we have some evidence that many attributions of wrongness belief in cases of completely absent motivation are attributions of something other than the states that internalists have been theorising about. Faced with the explicit ”inverted commas” scenario, 45% of subjects were willing to attribute wrongness belief, even though only 20% would say that the agent ”herself thinks” that what she did was wrong. (It is striking that the percentage of attributions of moral belief in the inverted commas scenario wasn’t significantly different from what we saw in the standard scenarios where motivation was completely absent.) This suggests that a large group of subjects are attributing moral belief in a wider sense than the one that has concerned internalists, at least. And the latter sort of attributions do not obviously speak against internalism (as no internalist has denied that we can make *that* sort of judgment without being motivated).
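As a purely illustrative aside on what being at or different from chance amounts to here: a withholding rate in the 54 to 64% range can be compared against the 50% rate expected by chance with a simple binomial test. The counts below are hypothetical, not the actual cell sizes from the paper; this is only a sketch of the comparison, not the analysis actually used.

```python
# Minimal sketch, assuming hypothetical counts: checking whether a withholding
# rate such as 64% differs from the 50% rate expected by chance.
from scipy.stats import binomtest

n_participants = 50   # assumed sample size, for illustration only
n_withholding = 32    # assumed number withholding belief attribution (64%)

result = binomtest(n_withholding, n_participants, p=0.5)
print(f"observed withholding rate: {n_withholding / n_participants:.2f}")
print(f"two-sided p-value against a 50% chance rate: {result.pvalue:.3f}")
```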


***

You asked: ”Could you explain the purpose of the inner struggle study? You say it was conducted to rule out a worry that subjects of previous studies weren’t paying attention when taking those studies. But how could a completely different study accomplish this, or rule out other explanations of chance rates?”

Now, we actually had several reasons not to think that subjects answered the belief questions randomly. First, in our pilots, we had asked subjects to provide free-form motivations for their answers, and the answers all seemed to make sense in light of the scenario given either internalist or externalist conceptions of moral belief. Second, we asked subjects how confident they were about their answers to the attribution questions (”very low” to ”very high” on a 7-point Likert question), and got overall very high scores (I don't have the complete data at hand, but a quick look at the studies in SurveyMonkey suggests that across a number of studies the mean confidence was around 5.4), in particular among those who gave negative answers to attribution questions. Third, the fact that subjects attributed different states (”understands”, ”believes”, ”herself thinks”) suggests that the locutions matter, and that there is something about attributions of moral belief in particular that makes for the distribution of answers rather than general confusion. Still, we wanted to make sure that the vignette did not confuse subjects and decided to further test this by providing a scenario with most elements of the other scenarios (thought processes, action), but with motivation present in such a way that most philosophers would be willing to attribute moral belief. Thus the inner struggle study. The scenario is of course different in various ways, but similar to the scenarios without any hint of even latent motivation in the respect that we worried might confuse subjects.


***

You asked: ”What do you make of the listless case, one of the strongest results of the paper, in which 70% of participants attribute moral belief without any motivation? For the second strongest result, the no reason case, you get 64% denying belief. Stacking that up against the listless case, and the other results at chance, it seems like your evidence is pretty split over which view has intuitive support, is that correct?”

A quick look now suggests that we could have been much clearer on this point. (Perhaps we can add some clarification here when sending the final version.) Most internalists these days reject strong forms of internalism demanding that one is motivated to act on one’s moral beliefs whatever one’s psychological state. So most who think that an individual’s moral beliefs have a necessary connection to her motivation accept some form of conditional internalism: moral judgments motivate under conditions of practical rationality or suitable psychological normality (some go for an even weaker, communal, form of internalism, where it is enough that enough people in their linguistic community are motivated in the right way, but we were concerned with individual forms in our studies). On one sort of view, for example, moral beliefs are states that dispose one to be motivated (in the colloquial sense), or that have as their function to produce motivation, but the expression of these dispositions can be blocked in various ways, when the normal routes by which these states perform their function are not properly operating. A general motivational disorder like listlessness would be a prime example of what can block their expression, but doesn’t show that the state itself is absent. If subjects understand moral beliefs along these lines, we should expect more attributions of belief in cases of listlessness than when there is nothing that explains why the moral beliefs don’t motivate. And this is what we see. By contrast, subjects should be maximally reluctant to attribute belief when motivation is clearly missing and there is no plausible explanation of how a disposition is blocked. This is what the No Reason case was supposed to represent, and indeed here subjects were most reluctant to attribute wrongness beliefs.


***

You asked: ”Almost all of the studies included some variant of a very long set up that described protagonists as highly abnormal, unfeeling psychopaths who ‘classifies actions using expressions like ‘morally right’ and ‘morally wrong’’ but that this doesn’t “in any way influence her choices”. Derek points out above that maybe the specific psychopath details are playing a role in people’s judgments. I was wondering, were you worried that saying the protagonists make moral classifications and then asking about their moral beliefs or their moral understanding, while a staple of the philosophical debate, might genuinely confuse people?”

We were a little worried before we ran the studies, which is why we asked for free-form motivations of attribution answers in our pilots, and looked at the other aspects mentioned above in my answer to your question about the inner struggle case. It is of course always possible that people were confused by details in the study, and the descriptions of psychopathic traits are no exception. For the moment, though, it is not clear to me in what way or why one should expect confusion here. One such way might have been that people would be eager to blame the psychopath, and that this would affect attributions. But we did test the hypothesis that blame was doing work and found no evidence of that. Of course, we also ran the No Reason case, which doesn’t contain the details, and that made for significantly different answers. Still, I am a little unsure why we should attribute this to confusion. I’m more inclined to think that some subjects might have taken the egoism and lack of empathy to signify features blocking existing dispositions to be moved by moral beliefs, as the hypothesis that attributions are affected by perception of such features gets some support from the distribution of answers across the Listlessness, Psychopath and No Reason scenarios. But it is admittedly speculative.

Wesley Buckwalter

Hey Gunnar,

I wanted to write in to say thank you again for such thoughtful responses! This definitely clears up a few points of confusion, especially concerning how you were taking the results to bear on the qualified or restricted versions of internalism.

Wesley
