I've been writing about an idea called the Likelihood Principle. The Likelihood Principle says, roughly, that the evidential meaning of a given datum with respect to a set of hypotheses depends only on how probable that datum is under each of those hypotheses.
The Likelihood Principle seems important because some statistical methods respect it while others do not. That is, if you take a set of data that are all evidentially equivalent according to the Likelihood Principle, some statistical methods would yield the same output given any datum in that set (assuming that prior probabilities and utilities are held fixed) while others would in some cases yield different outputs for different members of that set. Specifically, the kinds of statistical methods that are most common in science (usually called "classical" or "frequentist") do not respect the Likelihood Principle, while their main rivals (Bayesian methods) as well as some more obscure alternatives (primarily, Likelihoodist methods) do respect the Likelihood Principle.
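To make the contrast concrete, here is a minimal sketch of the standard stopping-rule illustration (often attributed to Lindley and Phillips). The particular numbers (9 successes and 3 failures, a one-sided test of theta = 0.5) are illustrative choices on my part, not anything specific to the argument above. Two experimental designs that yield proportional likelihood functions, and hence evidentially equivalent data by the Likelihood Principle's lights, nevertheless yield different frequentist p-values:

```python
from math import comb

# Illustrative numbers: 9 successes and 3 failures observed;
# H0: theta = 0.5, one-sided alternative theta > 0.5.
# The two designs below give proportional likelihood functions,
# so the Likelihood Principle counts the data as evidentially
# equivalent -- yet the classical p-values differ.

# Design 1: binomial -- the number of trials was fixed at n = 12.
# p-value = P(at least 9 successes in 12 trials | theta = 0.5)
p_binomial = sum(comb(12, k) for k in range(9, 13)) / 2**12

# Design 2: negative binomial -- trials continued until the 3rd failure.
# "At least as extreme" means at least 9 successes before the 3rd failure,
# i.e. at most 2 failures in the first 11 trials.
p_negbinom = sum(comb(11, k) for k in range(3)) / 2**11

print(f"binomial p-value:          {p_binomial:.4f}")   # 0.0730
print(f"negative binomial p-value: {p_negbinom:.4f}")   # 0.0327
```

At the conventional 0.05 level, one design "rejects" the null hypothesis and the other does not, even though the likelihood function of the observed data is the same in both cases. That is exactly the sense in which classical tests fail to respect the Likelihood Principle, while methods whose output depends only on the likelihood function (and, for Bayesians, the prior) automatically do.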
In addition, there are arguments for the Likelihood Principle that are controversial within statistics but have received little attention from philosophers of science. These arguments are to a point at least independent of the debates about the use of prior probabilities in science that have dominated Bayesian/frequentist debates in the past. Thus, thinking about the Likelihood Principle looks like a promising way to make progress in a debate that has become rather stale yet remains very important for statistics, and thus for science, and thus for the philosophy of science.
In my philosophy comprehensive paper, I gave a new proof of the Likelihood Principle that I still believe is a significant improvement on previous proofs such as the relatively well-known proof given by Allan Birnbaum in 1962 (article). I have since refined this paper and have had it accepted to present at the 2012 biennial meeting of the Philosophy of Science Association this November.
The project I have been working on for my prospectus would involve bolstering my proof of the Likelihood Principle in various ways and then considering its implications for statistics, science, and the philosophy of science. The problem I have been struggling with for a few weeks now is that I'm not sure that the Likelihood Principle does have any important implications.
What I've just written probably seems rather puzzling. If frequentist statistics violates the Likelihood Principle, and the Likelihood Principle is true, then doesn't it follow that we shouldn't use frequentist statistics?
Well, no. The Likelihood Principle as I prefer to formulate it says only that certain sets of experimental outcomes are evidentially equivalent. To get from the claim that the Likelihood Principle is true to the conclusion that we ought not to use frequentist statistics, one needs to assume that we ought to use only statistical methods that always produce the same output given evidentially equivalent data as inputs. That assumption might seem innocuous, but it isn't, for at least two reasons:
1. It begs the question against frequentist views according to which the idea of an evidential relationship between data and hypothesis is misguided and we should think instead in terms of methods and their operating characteristics.
2. In practice, even subjective Bayesian statisticians violate the Likelihood Principle all the time (e.g., by using prior elicitation methods that involve estimating the parameters of a conjugate prior distribution; because the conjugate prior family depends on the sampling distribution, this practice violates the Likelihood Principle).
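To see what is meant by conjugate prior elicitation in point 2, here is a minimal sketch of a Beta-binomial conjugate update. The prior parameters and data are hypothetical numbers chosen purely for illustration; the point is that the Beta family is selected *because* the sampling model is binomial, tying the elicitation procedure to the sampling distribution rather than to the likelihood function alone.

```python
# Conjugate updating sketch: for binomial sampling the conjugate family
# is Beta, so an elicitation procedure that fits a Beta(a, b) prior is
# chosen on the basis of the sampling model, not just the likelihood
# function of the observed data. All numbers here are hypothetical.

a, b = 2.0, 2.0            # elicited Beta prior parameters (illustrative)
successes, failures = 9, 3  # observed data (illustrative)

# Beta-binomial conjugacy: posterior is Beta(a + successes, b + failures)
a_post, b_post = a + successes, b + failures
posterior_mean = a_post / (a_post + b_post)

print(f"posterior: Beta({a_post}, {b_post}), mean {posterior_mean}")
```

Had the same counts arisen under a different sampling model with a different conjugate family, the elicitation procedure would have produced a different prior, and hence a different posterior, for likelihood-equivalent data. That is the sense in which everyday Bayesian practice can violate the Likelihood Principle.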
I'll discuss these points more in a subsequent post.