If a result fails to replicate, then it is not a finding but rather an unlucky aberration. This post isn’t about replication failure (or replication-failure failure). It’s about something I call “unfinding.”
The following combination is not uncommon:
(1) a result (“R”) replicates and is a finding, but
(2) different stimuli or procedures produce a different result (“R*”).
In many cases, the combination of (1) and (2) provides evidence for a more detailed or precise interpretation of R. Of course, in any particular case, people might reasonably disagree about whether the added detail constitutes a significant advance in our understanding of the underlying issues. Still, I think that there should be a presumption in favor of welcoming and encouraging work aiming to add detail.
Unfinding goes beyond merely reinterpreting R. Instead, unfinding occurs when (1) and (2) support a more radical conclusion, namely:
(3) R is uninformative (relative to the primary research question).
Uninformative findings can be set aside. They do not constrain future research on the topic, except insofar as it should not repeat the mistake.
Here is a hypothetical example illustrating an unfinding. Someone hypothesizes that the concept of physical beauty is partly constituted by visual symmetry. In a series of studies, manipulating visual symmetry strongly affects beauty judgments (R). This is taken to support the hypothesis. Subsequent research reveals that the symmetry manipulation was systematically confounded with changes in hue or saturation, and once those factors are controlled for, symmetry does not affect beauty judgments (R*). In this case, I think it’s fair to say that the original result is not genuinely informative relative to the primary research question. As an indication of this, if subsequent research on beauty judgments failed to find an effect of visual symmetry, the authors would not be obliged to explain the apparent inconsistency with the original result.
Unfinding is a possibility for just about any result, and we should be open to it in principle. Nevertheless, I do not think that there should be a presumption in favor of welcoming or encouraging work that aims at unfinding.
I have two reasons for this, both based on introspective and social observation from experience as a referee, author, and editor. On the one hand, encouraging researchers to prove that prior work is ultimately worthless corrodes sentiments essential to a healthy community of inquiry, including the desire to cooperate. On the other hand, people do not seem to be very good at unfinding what they claim to unfind. In particular, they seem prone to move too quickly from (1) and (2) to (3), or something in the ballpark of (3). People who do not like a result seem especially prone to this error (and, incidentally, I think that people often underestimate how clearly their dislike for a finding can shine through).
To illustrate an erroneous inference of this sort, consider a variation of the hypothetical example about the concept of physical beauty. Again, a series of studies reveals that people’s beauty judgments are strongly affected by manipulating visual symmetry (R). Subsequent research reveals that the symmetry manipulation works only when people use binocular vision: it goes away for monocular vision (R*). Surely this should affect our understanding of the original result. But it would be hasty to conclude that the original result is uninformative relative to the primary research question.
In light of all that, I think that we should operate with a defeasible presumption against unfinding.
Interesting issue, John. I'm sympathetic, but I wonder: Is the problem with welcoming or encouraging WORK aiming at unfinding, or is the problem simply encouraging such AIMS? In other words, I worry about discouraging this work just because of the aims, since it can be quite important work when the right conclusion is drawn. In the spirit of keeping science self-corrective, shouldn't we worry about discouraging work that's critical of previous results?
Posted by: Josh May | 03/14/2016 at 12:12 AM
Thanks, Josh. Good question! As I see it, the problem pertains to people aiming for that outcome while being prone to hastily concluding that they have achieved it. I don't have any problem with work that actually demonstrates the outcome. I don't even have a problem with the aim itself. (My thick fallibilist skin might make me peculiar on this point — others I've spoken to disagree.) But my sense is that when coupled with the tendency toward motivated inference, the aim causes problems. These problems include eroding beneficial sentiment and distorting the record. As a counteractive incentive, I suggest operating with a defeasible presumption against work that aims at unfinding. There might be other solutions.
I agree that critical work is very important and I definitely don't want to discourage anything merely because it is critical.
Posted by: John Turri | 03/14/2016 at 11:19 AM
Hi John, I think I have worries similar to Josh's. I was hoping you could say more. If you are willing to, I am hoping we can engage with a real example, such as the following:
Rose, Buckwalter, and Nichols (RBN; 2016) "Neuroscientific prediction and the intrusion of intuitive metaphysics" (http://tinyurl.com/RBN-neuroprediction), if I am understanding you correctly, should count as an attempt at unfinding. In the paper, RBN provide evidence that results reported in Nahmias, Shepard, and Reuter (NSR; 2014) "It's OK if my brain made me do it: People's intuitions about neuroscientific prediction and free will" (http://tinyurl.com/NSR-Neuroprediction1) were uninformative--not just better explained by some alternative hypothesis but genuinely uninformative--because we did not take into account an error people were making when they processed our scenarios.
While I think their unfinding claim was a bit hasty, the community would be worse off had their paper not been published, for at least two reasons: (1) They drew the community's attention to a potential methodological issue that, if right, may be a problem for many x-phi and philosophy-inspired-psychology experiments. (2) Their work has forced my colleagues and me--and I would guess others--to think more carefully about what people believe about free will and how we should be measuring these beliefs. As we all work through the issues RBN raised, I am sure we will make progress toward a better understanding of people's beliefs about free will.
Why would the community not want such insightful and challenging work published?
Posted by: Jason Shepard | 03/14/2016 at 11:44 AM
PS I should note that your answer to Josh appeared while I was composing my post. Your response to Josh helped answer my worry. But let me ask a related question: As a reviewer or editor, what do you look for when you are trying to determine if the attempt at unfinding should be published? (Or, if you prefer, what do you think we should be looking for as a reviewer or editor?)
Posted by: Jason Shepard | 03/14/2016 at 11:55 AM
Hi, Jason. Thanks for pressing on this! I definitely think that the community should want insightful and challenging work published. At the same time, I think that there are other considerations in play here. I proposed a defeasible presumption as a hedge against some non-ideal factors. Do you think there is a better solution? Or maybe we're stuck with those factors because anything we try would make things worse overall?
Posted by: John Turri | 03/14/2016 at 12:12 PM
Hi John. Thanks for pressing me! After giving your proposal more thought, I've come to appreciate your position more and more. There is a certain way of doing research that seems to undermine a sense of community and cooperation, namely the sorts of research projects that go beyond being critical and actively seek to undermine previous research. I, too, believe a strong sense of community and cooperation is something we should strive for. But I think there is a way to frame "undoing" research that comes across as less adversarial. Unfortunately, we as a community are not always very good at doing this. I also believe that there are competing interests that, perhaps, trump the desire to maintain a strong sense of community and cooperation, namely the interest in publishing research that advances the field. Perhaps the approach I would feel most comfortable with would be not to hold a presumption against recommending this sort of work for publication but to encourage the author(s) to adjust the tone of their work in the revise-and-resubmit phase. But ultimately I think the recommendation to publish comes down (primarily) to two factors: (1) Does the work advance the field? (2) Are the methods sound, and are the arguments compelling? If the answer is "yes" to both of these questions, then the research should be published, regardless of tone. If the answer is "no" to either of these questions, then the research should not be published, regardless of tone.
(Apologies for framing my earlier question in such a loaded way. That was unnecessarily adversarial of me!)
Posted by: Jason Shepard | 03/15/2016 at 10:54 AM
Hi, Jason. I actually wasn't thinking of this primarily as something to be litigated as part of the formal review process, but rather as a value that we as people bring to our work and seek to encourage in others. Still, as you point out, the review process provides opportunity for us to encourage a more human touch.
Factors 1 and 2 do seem to make a good case for publishability. Of course, it can be notoriously difficult to assess "how important" the advancement is. Other important qualities I'd add are that the project is appropriately contextualized and that the results are responsibly interpreted. (Perhaps you were thinking of these as falling under 1 and 2.) The interpretation stage is where the hastiness I described occurs, especially in virtue of overlooking potential limitations or alternative explanations.
Of course, I also strongly encourage people to try their best to not take research developments personally. That is very important! And, at the end of the day, I think we should prefer advancing the field to sparing people's feelings. But, human nature being what it is, I think that the field will advance more in the long run if we don't routinely force a choice between those two things.
PS: And sorry for missing your earlier PS. I initially overlooked it in the approval queue.
Posted by: John Turri | 03/15/2016 at 11:59 AM
Thanks, John! As is often the case, what may at first appear to be a disagreement, however slight, turns out not to be one after a bit of dialog. I think your proposal is right. We can all be better members of our community without sacrificing the publication of good research, if we try to be a little more self-aware about tone and (possible) hasty conclusions, especially hasty conclusions of the kind you note.
Posted by: Jason Shepard | 03/15/2016 at 01:17 PM
Hey Jason,
I just wanted to chime in to say that I definitely didn’t think your paper was uninformative. I thought the findings presented by NSR were definitely suggestive of some very powerful factors at work in people’s judgments about futuristic imaginative scenarios. I’m also glad to hear that you found the methodological issues raised by RBN instrumental in moving forward. I never thought of RBN as making an “unfinding” claim before, though maybe on some definitions it basically amounts to that. In my view of the exchange, I am just not yet confident that the original conclusion has been sufficiently demonstrated by data, independently of the particular theoretical dispute in question.
Wesley
Posted by: Wesley Buckwalter | 03/15/2016 at 02:11 PM
Well said, Jason, and thanks for helping me get clearer in my own thinking about all of this!
Posted by: John Turri | 03/15/2016 at 03:12 PM