A little intellectual autobiography
My interest in moral psychology – particularly, in the idea that the study of moral judgment might be usefully pursued by borrowing some ideas from linguistics – was an opportunistic development. As a grad student at MIT, one could not help but notice the parallel between being asked for native-speaker acceptability judgments by Linguistics grad students and being asked for permissibility judgments about trolley problem cases by Judy Thomson. My dissertation was about tacit knowledge of language, but my first job had me teaching quite a bit of moral philosophy – using, by the way, John Fischer and Mark Ravizza’s wonderful 1992 text. So, quite naturally, I was led to think more about what moral judgment is, what makes it possible, whether linguistic diversity might be a model for moral diversity, and so on.
Coming across Elliot Turiel’s work in the early nineties prompted me to consider whether the typical child’s (moral) environment is impoverished relative to the moral abilities she acquires – in other words, whether a poverty-of-the-moral-stimulus argument was at all motivated. Mark Baker's work got me thinking that it might be worth investigating the idea that moral differences might be explicable in terms of something akin to linguistic parameters. And, more recently, John Collins’ superb papers in the philosophy of linguistics proved immensely helpful in thinking about the relation between native judgers’ moral judgments and moral theory. (See esp. “Faculty Disputes,” Mind & Language 17 (2004): 300-333, and “Linguistic Competence Without Knowledge,” Philosophy Compass 2/6 (2007): 880-895.)
I always believed that empirical findings could be brought to bear on some of the claims about moral judgment I was entertaining: for example, that moral judgment is made possible by the operation of a domain-specific faculty of the mind; that the intractability of some moral disagreements might be understood in terms of constraints (parameters) that emerge early in ontogenetic development as interaction effects of the child’s innate endowment and certain features of her environment. But I was not trained as an experimentalist, and at least my version of the so-called linguistic analogy model (cf. Mikhail’s 2011 version) remained insufficiently articulated to hand off to friends in social psychology or anthropology to design the relevant experiments and field studies.
The explosion in the empirical investigation of moral cognition or moral judgment was not triggered by proponents of the linguistic analogy, but rather (it seems to me) received its impetus from two papers published in 2001: Josh Greene et al.’s fMRI study of subjects considering trolley problem cases and Jon Haidt’s paper attempting to rattle the Kohlbergian hegemony in developmental moral psychology. (Greene, J.D. et al. An fMRI investigation of emotional engagement in moral judgment. Science 293 (2001): 2105-2108; Haidt, Jonathan. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108.4 (2001): 814-834.)
In my first post, I advertised that I’d try to explain my skepticism about this experimental work on moral judgment. That skepticism has two sources: (1) the models of moral judgment that are tested or challenged in a good deal of extant work are, like my own view, under-articulated; and (2) the majority of experimental moral psychology either assumes an empiricist conception of the human mind or falls prey to the “seduction of reduction” (or both). (1) and (2) are not completely independent, but I’ll treat them as such for ease of exposition.
Today, then, a start on the under-articulation problem.
As I see it, there are three dimensions to this problem. (Since I intend only to raise questions and not to attack particular individuals, I am not going to cite many specific studies directly, unless they happen to be usefully illustrative.)
1. Target explananda are not precisely identified; in particular, across and sometimes even within studies, there is equivocation about a central expression, viz., “moral judgment”.
2. Models pay insufficient or the wrong kind of attention to the developmental etiology of the target explananda.
3. Models are anchored in considerations that were not designed for thinking about the psychology or cognitive science of judgment.
I’ll start with #1.
To say that one studies moral judgment, period, is to leave it unclear whether one is interested in (A) a particular kind of product or (B) a particular kind of process. I’ll use the lower-case moral judgment to refer to products, and the upper-case Moral Judgment to refer to processes.
Construed as products, moral judgments might be any or all of the following:
a. utterances
b. beliefs
c. attitudes
d. speech acts
e. intuitions
f. propositions
If your concern is meta-ethical, and you’re worried about whether moral judgments are truth evaluable or intrinsically motivating, then you’ll be careful to disambiguate. But that kind of care is absent in the vast majority of empirical work in moral psychology – even when investigators manage not to equivocate between “judgment”, “reasoning”, and “decision-making”.
But what comes out of people’s mouths may bear only a tenuous relation to their attitudes or beliefs. And I know of no clear account of how, empirically, to distinguish intuitions from beliefs.
Construed as a process, Moral Judgment is variously described as something that
a. generates moral judgments
b. drives moral judgment
c. plays a role in forming moral judgments
d. produces moral judgments,
or, as
e. the process of reasoning to moral judgments
or, in some sense,
f. responsible for an individual coming to have or finding herself with a moral judgment.
These lists are no doubt only partial, but the potential for question-begging and for confusion of other kinds should be obvious.
Of course, I get it that if you’re running a study about “moral judgment”, you’ll need to operationalize the notion. But even here, investigators have employed quite different methods: subjects have been asked to enter a rating on a Likert scale about whether an imagined action is permissible or forbidden; they've been asked how wrong an action is; they’ve been asked how severely an agent ought to be punished for performing an action; they’ve been asked whether an action is ‘okay’. We should be cautious in drawing conclusions about the nature of moral judgment from ‘converging’ bodies of evidence when that ‘evidence’ has been arrived at on the basis of quite different assumptions. Indeed, it is challenging even to compare the various studies.
Still, it is not as if philosophers don’t get into the same sort of pickles. “Moral judgments require backing by reasons. . . . One must have reasons or else one is not making a moral judgment at all” (Rachels, 1993; see also Kennett & Fine, 2009); moral judgments are intrinsically motivating (Smith, 1994); moral judgments have a distinctive phenomenology (Horgan & Timmons, 2007; Glasgow, 2013); moral judgments are performatives (Gibbard, 1990).
This is stipulation of a quite different kind. And I understand why people engaged in the empirical investigation of moral ‘judgment’ – heavens, I've made it difficult to use the term myself – would be impatient with the (sometimes too quick) dismissal of their work by philosophers who are sure about the necessary conditions for any judgment being a genuine moral judgment.
In any event, notice two other matters that require some kind of resolution before we can articulate a model of moral judgment (and of Moral Judgment) that could be used to test how Moral Judgment works in human beings. First, there is the question of what it is to make a judgment or engage in judgment. And second, there is the question of what (if anything) makes a judgment or an event of judging moral (as opposed to non-moral).
On the former, I encourage readers to take a look at the essays in Mental Actions, edited by Lucy O'Brien and Matthew Soteriou (Oxford: OUP, 2009). I may have some more to say about this anon.
On the latter, I don't pretend that it's news that there is no definitive account of how to individuate a distinctly moral domain, and that might have serious implications for how to think seriously about and empirically investigate moral judgment and moral cognition. See Josh Greene's recent notes on this (The Rise of Moral Cognition. Cognition 135 (2015): 39-42). Something else on which I might comment anon.
But the long and short of it is that there is very little agreement about what, precisely, is being studied under the general rubric of 'moral judgment'.
How to address this:
People are at liberty, of course, to stipulate what they shall mean by the expression “moral judgment”. And it would be a step towards clarity if they would do just that.
But there is a wide-open invitation here for philosophers to revisit the question of what a judgment is, and whether judgment is a mental action. All of this is, for now, fully consistent with the idea that there is no such domain-specific thing as moral cognition.
We should never lose sight of the fact that adult abilities bear the marks of their development. So children are important, as are other populations. But what we want to know is not just how this phenomenon could exist, but why it exists rather than something else. More on that next week, when I take on assumptions about the mind and the seduction of reduction.
What are we studying? Getting clear about the object of inquiry and about its development has always seemed to me one of the virtues of the linguistic analogy. And we still need a way to talk about the conflation of psychological and philosophical questions about moral judgment.