There is a problem in science today. I've written a lot about how to cure it, but in this post I want to outline the nature of the disease as I see it. The problem goes by many names:
researcher degrees of freedom
undisclosed analytic flexibility
the file drawer
"Why most published research findings are false."
So I'm going to call it the f problem for short. I like to visualize f as a forking path. Given any particular set of raw data, a researcher faces a series of choices about how to turn it into a 'result': which statistical tests to run, on which variables, after excluding which outliers, applying which preprocessing... and so on.
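To give a rough sense of how quickly those forks multiply, here is a toy sketch (my own illustration, not from the post; the specific choice sets are made up) that simply counts the combinations of a few hypothetical analytic decisions:

```python
from itertools import product

# Hypothetical analytic choices a researcher might face
tests = ["t-test", "Mann-Whitney U", "linear regression"]
outcome_variables = ["total_score", "subscale_A", "subscale_B"]
outlier_rules = ["keep all", "drop beyond 2 SD", "drop beyond 3 SD"]
preprocessing = ["raw", "log transform", "z-score"]

# Every combination of choices is a distinct path from raw data to a 'result'
paths = list(product(tests, outcome_variables, outlier_rules, preprocessing))
print(f"{len(paths)} distinct analysis paths")  # 3 * 3 * 3 * 3 = 81
```

Four small decisions with three options each already give 81 paths; real analyses typically involve far more.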
The f problem is that researchers can try multiple approaches in private and select the most desirable ones for publication. Most often, what's desired are statistically significant effects that match prior hypotheses. Even if there are no real effects of interest in the data, some comparisons will come out 'positive' just by chance - a short simulation below makes this concrete. Researchers today face pressure to publish 'good results' and are rewarded for doing so - this is what turns f from a theoretical concern into a real one.

Yet f is not a problem for all of science. Broadly, f only affects research in which the results take the form of p-values. Fields like mathematics, where results are proofs, are immune. But even some p-value sciences manage to escape f, because the nature of the enterprise means that 'everyone knows' - in advance - what experiments are being carried out and how the data ought to be analyzed. In particle physics, for example, it was public knowledge that CERN was looking for the Higgs boson, and it was also known exactly how they planned to look for it in the data. Openness meant there was no room for f, because f is all about the scope for hidden flexibility.
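Here is the promised simulation - a minimal sketch of my own, with arbitrary parameters, using numpy and scipy. It generates pure-noise data for a number of independent comparisons and counts how many come out 'significant' at the conventional 5% threshold, even though there is nothing real to find:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=0)
n_comparisons = 20  # e.g. 20 outcome variables, all of them pure noise
n_per_group = 30

significant = 0
for _ in range(n_comparisons):
    group_a = rng.normal(size=n_per_group)  # no true difference between groups
    group_b = rng.normal(size=n_per_group)
    _, p_value = ttest_ind(group_a, group_b)
    if p_value < 0.05:
        significant += 1

print(f"{significant} of {n_comparisons} comparisons reached p < 0.05")
# With a 5% threshold, roughly 1 in 20 null comparisons 'succeeds' by chance;
# reporting only those, and filing away the rest, is the f problem in action.
```

The point is not the exact count on any one run, but that a researcher who quietly explores many such comparisons and publishes only the 'positive' ones will reliably produce findings from noise.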