Shakespeare knew what he was doing when he had Juliet ask, “What’s in a name?” As always, his meaning was double (or more), and the plot of Romeo and Juliet gives us a completely different answer than the rhetorical one we usually supply.
Names can matter, not only among love-sick teenagers from feuding families, but also in science, teaching, and communication.
In an unpublished paper, Gelman and Hennig propose replacing the labels “objective” and “subjective” in statistics with more meaningful, and less misleading, terms.
Paper — Gelman, Andrew, and Christian Hennig. 2015. “Beyond Subjective and Objective in Statistics”.
We propose to abandon the words “objectivity” and “subjectivity” in statistics discourse and replace each of them with broader collections of attributes, with objectivity replaced by transparency, consensus, impartiality, and correspondence to observable reality, and subjectivity replaced by awareness of multiple perspectives and context-dependence. The advantage of these reformulations is that the replacement terms do not oppose each other. Instead of debating over whether a given statistical method is subjective or objective (or normatively debating the relative merits of subjectivity and objectivity in statistical practice), we can recognize desirable attributes such as transparency and acknowledgement of multiple perspectives as complementary goals. We demonstrate the implications of our proposal with recent applied examples from pharmacology, election polling, and socioeconomic stratification.
One problem is that the terms “objective” and “subjective” are loaded with so many associations and are often used in a mixed descriptive/normative way.
On p-values and the myth of “objectivity” in stats:
… significance testing is used as a tool for a misguided ideology that leads researchers to hide, even from themselves, the iterative searching process by which a scientific theory is mapped into a statistical model or choice of data analysis.
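To see that searching process in numbers, here is a quick simulation of my own (nothing from the paper; it assumes numpy and scipy, and the analysis “choices” are crude stand-ins for outlier rules, subgroups, covariate sets, and so on). The data are pure noise, yet letting the analyst report the best of a few variants pushes the false-positive rate well above the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def min_p_over_choices(n=50, n_choices=5):
    """One 'study': pure-noise data, but the analyst tries several
    analysis variants and keeps the smallest p-value."""
    x = rng.normal(size=n)  # the null is true: the mean really is 0
    pvals = []
    for _ in range(n_choices):
        # each 'choice' is a random 80% subset of the data, standing in
        # for outlier rules, subgroups, alternative covariate sets, etc.
        subset = rng.choice(x, size=int(0.8 * n), replace=False)
        pvals.append(stats.ttest_1samp(subset, 0.0).pvalue)
    return min(pvals)

n_studies = 5000
false_pos = np.mean([min_p_over_choices() < 0.05 for _ in range(n_studies)])
print(f"nominal alpha = 0.05, actual false-positive rate ~ {false_pos:.2f}")
```

The point is not the exact inflation (that depends on how correlated the “choices” are) but that a researcher can produce it honestly, one reasonable-seeming decision at a time, without ever noticing the search.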
How this connects to falsification:
Falsificationist Bayesianism follows the frequentist interpretation of the probabilities formalized by the sampling model given a true parameter, so that these models can be tested using error statistical techniques (with the limitations that such techniques have, as discussed in Section 5.2). Gelman and Shalizi argue, as some frequentists do, that such models are idealizations and should not be believed to be literally true, but that the scientific process proceeds from simplified models through test and potential falsification by improving the models where they are found to be deficient. This reflects certain attitudes of Jaynes (2003), with the difference that Jaynes generally considered probability models as derivable from constraints of a physical system, whereas Gelman and Shalizi focus on examples in social or network science which are not governed by simple physical laws and thus where one cannot in general derive probability distributions from first principles, so that “priors” (in the sense that we are using the term in this paper, encompassing both the data model and the parameter model) are more clearly subjective.
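The “error statistical techniques” Gelman and Shalizi have in mind include posterior predictive checks: simulate replicated data from the fitted model and ask whether a test statistic computed on the real data looks plausible among the replications. A minimal sketch, assuming a conjugate Beta-Binomial model and made-up data (the long run of 1s is there precisely so the iid assumption can fail):

```python
import numpy as np

rng = np.random.default_rng(0)

# observed binary data (made-up example): a long run of 1s hints at dependence
y = np.array([1] * 12 + [0] * 8)

# Beta(1, 1) prior on the success probability; conjugate posterior update
a_post, b_post = 1 + y.sum(), 1 + (y == 0).sum()

def longest_run(seq):
    """Test statistic: length of the longest run of identical values."""
    best = cur = 1
    for prev, nxt in zip(seq[:-1], seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

# posterior predictive replications: draw theta, then replicated data
T_rep = []
for _ in range(4000):
    theta = rng.beta(a_post, b_post)
    y_rep = rng.binomial(1, theta, size=len(y))
    T_rep.append(longest_run(y_rep))

T_obs = longest_run(y)
ppp = np.mean(np.array(T_rep) >= T_obs)  # posterior predictive p-value
print(f"observed longest run = {T_obs}, posterior predictive p ~ {ppp:.2f}")
```

A tiny posterior predictive p-value says the iid Bernoulli model cannot reproduce the runs in the data. That is falsification in exactly the sense above: it is the model, not merely the parameter, that gets put to the test.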
A central issue for falsificationist Bayesianism is the meaning and use of the parameter prior, which can have various interpretations. This gives falsificationist Bayesianism a lot of flexibility for taking into account multiple perspectives, contexts, and aims, but it may be seen as a problem regarding clarity and unification.
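That flexibility is easy to show in a toy setting. A sketch of my own, reusing the made-up data above: encode three “perspectives” as different Beta priors on the success probability and watch the posteriors disagree:

```python
# same made-up data as above: 12 successes in 20 trials
successes, trials = 12, 20

# three 'perspectives', encoded as Beta(a, b) priors on theta
priors = {
    "flat (no strong view)":    (1, 1),
    "skeptic (theta near 0.5)": (50, 50),
    "optimist (theta near 0.8)": (8, 2),
}

for label, (a, b) in priors.items():
    # conjugate update: posterior is Beta(a + successes, b + failures)
    a_post, b_post = a + successes, b + (trials - successes)
    mean = a_post / (a_post + b_post)
    print(f"{label:28s} posterior mean ~ {mean:.2f}")
```

With only 20 trials the perspectives pull the posterior in noticeably different directions; pile on data and they converge. That is why the plurality of prior interpretations is both a genuine strength and an embarrassment for clarity and unification.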
Gelman has more on what I prefer to call Bayesian Falsification (better name), which I looked at HERE.
Now go lift something heavy,