But we may look at the purpose of tests from another view-point. Without hoping to know whether each separate hypothesis is true or false, we may search for rules to govern our behaviour with regard to them, in following which we insure that, in the long run of experience, we shall not be too often wrong. Here, for example, would be such a "rule of behaviour": to decide whether a hypothesis, H, of a given type be rejected or not, calculate a specified character, x, of the observed facts; if x>x_0, reject H, if x≤x_0, accept H. Such a rule tells us nothing as to whether in a particular case H is true when x≤x_0, or false when x>x_0. But it may often be proved that if we behave according to such a rule, then in the long run we shall reject H when it is true not more, say, than once in a hundred times, and in addition we may have evidence that we shall reject H sufficiently often when it is false. —Jerzy Neyman and Egon Pearson, 1933
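The "rule of behaviour" Neyman and Pearson describe can be sketched in code. The particulars here are my own illustrative assumptions, not theirs: I take the statistic x to be standard normal when H is true and set the threshold x_0 at the 99th percentile, so that in the long run a true H is rejected about once in a hundred times.

```python
import random

random.seed(0)

X0 = 2.326  # ~99th percentile of the standard normal (illustrative choice)

def decide(x, x0=X0):
    """Apply the rule of behaviour: reject H if x > x0, else accept H."""
    return "reject" if x > x0 else "accept"

# A "long run of experience" with H true (x ~ N(0, 1)): the rule rejects
# the true H in roughly 1% of cases, whatever the truth in any single case.
n = 100_000
rejections = sum(decide(random.gauss(0, 1)) == "reject" for _ in range(n))
print(f"Rejected true H in {rejections / n:.3f} of trials")  # ≈ 0.01
```

Note that, exactly as the quote says, the rule tells us nothing about whether H is true in any particular case; the guarantee is only about the long-run frequency of errors.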
It's typically easy to make a conceptual analysis task sound interesting and important, at least to the philosophically inclined. Here's a way to motivate the question "What is evidence?" that I have used in teaching. Start with the question "What is science?" or more specifically, "How is science different from (and better than?) other ways of attempting to learn about the world, e.g. astrology, reading sacred texts, etc.?" This question is easy to motivate. Take debates about whether or not intelligent design should be taught in public school science classrooms. Typically, these debates center on whether intelligent design is science or not. In the Kitzmiller v. Dover Area School District case, for instance, philosophers of science were brought in to testify on this issue. Thus, the question of what constitutes science is central to a big debate in our society that a lot of us already care about.
According to many (most?) philosophers of science today, the question "What is science?" is a red herring. The views of the judge in the Kitzmiller case notwithstanding, the important question is not whether, e.g., intelligent design is science or not, but something more like whether intelligent design is well supported by our evidence or not. Thus, rather than asking "What is science?" we should be asking "What is evidence?"
In general, it is easy to motivate the task of conceptual analysis by pointing out that the concept being analyzed does some important work for us but is not entirely clear. However, conceptual analysis has many problems. For one, it almost never works. Typically, a conceptual analysis is produced; then a counterexample is presented; then the analysis is reworked to get around that counterexample; then a new counterexample is produced; then the analysis is reworked again to get around the new counterexample; and so on until the analysis becomes so ugly and messy that nobody finds it very appealing any more. At some point in this process, a rival analysis is produced which then undergoes the same process. The result is a collection of ugly and messy analyses none of which really work but each of which has vociferous defenders who engage in interminable debates that to most of us seem boring and pointless, even though the question being debated can be made to seem interesting and important.
Perhaps there is a deeper problem with conceptual analysis, namely that questions of the form "what is x?" are not actually interesting and important after all. Recall the motivation I gave for the question "what is evidence?" I said that many philosophers believe that the debate over whether intelligent design should be taught in public school science classes should focus on whether intelligent design is well supported by evidence, rather than on whether intelligent design is science. Thus, the important question is not "what is science?" but rather "what is evidence?" However, it's not obvious that we need an answer to the question "what is evidence?" in order to judge whether intelligent design is well supported by evidence or not. In fact, many philosophers regard reasoning that assumes we cannot know whether something is an x until we have a definition of x as an instance of the "Socratic fallacy." We can tell whether something is green or not just by looking at it. We don't have to know what green is. Evidence seems to work the same way, to some extent at least.
Perhaps we don't need a conceptual analysis of "evidence" in order to judge whether intelligent design is well supported by evidence or not, but such an account would nevertheless be useful in clarifying our intuitive judgments and allowing us to articulate and defend them. That seems right, but it's a moot point if attempts to give a conceptual analysis of "evidence" are bound to fail as nearly all attempts at conceptual analysis fail.
If not conceptual analysis, then what should philosophers be doing? One option is to forget about defining terms like "evidence" and instead to develop axiomatic theories of such notions. Philosophers can operate in "Euclidean mode" rather than "Socratic mode," as Glymour puts it. Euclid did give "definitions" of terms like "point" and "line," but they were more like dictionary entries than conceptual analyses. They don't hold up under philosophical scrutiny. But so what? Euclid developed a beautiful and powerful theory about points and lines and such. It's not clear what rigorous definitions of "point" and "line" would add to his theory, practically speaking.
The same is true of probability theory and the theory of causation. Philosophers still debate about what "probability" and "causation" mean, but these questions have little or no importance for working statisticians. On the other hand, the theories based on Kolmogorov's axioms and on the Causal Markov and Faithfulness Conditions (the latter developed by philosophers) are of great importance to practitioners.
Philosophers typically like thinking for its own sake. But they should at least try to do work that matters. Conceptual analysis typically does not matter, especially after the obvious options have been laid out and explored and the field has more or less reached a stalemate.
However, it is not a foregone conclusion that an axiomatic theory will matter either. In fact, I suspect that the Likelihood Principle is part of an axiomatic theory that doesn't matter. Hence my dissatisfaction with my project.
Specifically, the Likelihood Principle is part of an axiomatic theory of evidence, or rather, equivalence of evidential meaning. Allan Birnbaum initiated the development of this theory when he showed that the Likelihood Principle follows from the conjunction of the Sufficiency Principle and the Weak Conditionality Principle. I have attempted to improve it by showing that the Likelihood Principle follows from the conjunction of the Weak Evidential Equivalence Principle and the Experimental Conditionality Principle. I think this proof makes it harder for frequentists to deny that their methods fail to respect equivalence of evidential meaning. But it only follows that we ought not use frequentist methods on the assumption that we ought not use methods that fail to respect equivalence of evidential meaning. That assumption begs the question against Neyman's view of tests as rules of behavior, as given in the quote at the top of this post.
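To make equivalence of evidential meaning concrete, here is a standard textbook-style illustration; the specific numbers (9 successes and 3 failures) are my own choice, not from Birnbaum. Two experiments with different stopping rules — fix the number of trials in advance, or sample until a fixed number of failures — can yield the same data and likelihood functions that are proportional in the parameter p. The Likelihood Principle then says they carry the same evidential meaning, even though frequentist error-rate calculations treat the two experiments differently.

```python
from math import comb

def binomial_lik(p, n=12, k=9):
    """Likelihood under 'fix n=12 trials in advance, observe k=9 successes'."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def negbinom_lik(p, r=3, k=9):
    """Likelihood under 'sample until r=3 failures, observe k=9 successes'."""
    return comb(k + r - 1, k) * p**k * (1 - p)**r

# The two likelihood functions differ only by a constant factor that does
# not depend on p, so they are proportional -- evidentially equivalent
# according to the Likelihood Principle.
for p in (0.3, 0.5, 0.7, 0.9):
    print(p, binomial_lik(p) / negbinom_lik(p))  # same ratio for every p
```

A frequentist test of p = 0.5, by contrast, computes tail probabilities over the two different sample spaces and can reach different verdicts on the same data — which is exactly the kind of behavior-governing procedure Neyman defends in the epigraph.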
My view is that restrictions on methods are justified only insofar as they help us achieve our practical and epistemic goals. Respecting equivalence of evidential meaning is not for me a basic epistemic goal, nor does it seem to be related in any straightforward way to success in achieving my basic practical and epistemic goals (see my previous post). Thus, I at least seem to have no use for an axiomatic theory of evidential equivalence, and thus, it seems, no use for the Likelihood Principle. Bummer.