I've been writing about an idea called the Likelihood Principle. The Likelihood Principle says, roughly, that the evidential meaning of a given datum with respect to a set of hypotheses depends only on how probable that datum is according to each of those hypotheses.
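To make that concrete, here is a minimal sketch in Python of the textbook binomial versus negative-binomial example (the particular numbers are my illustration, nothing more): two experiments about the same success probability theta whose outcomes have proportional likelihood functions, and which the Likelihood Principle therefore deems evidentially equivalent.

```python
# Textbook example (illustrative numbers assumed): two experiments about a
# success probability theta.
#   Experiment A: run exactly 12 Bernoulli trials; observe 9 successes.
#   Experiment B: run trials until 3 failures occur; observe 9 successes.
from math import comb

def lik_binomial(theta, x=9, n=12):
    # P(9 successes in 12 fixed trials | theta)
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

def lik_negative_binomial(theta, x=9, r=3):
    # P(9 successes before the 3rd failure | theta)
    return comb(x + r - 1, x) * theta**x * (1 - theta)**r

for theta in (0.3, 0.5, 0.75, 0.9):
    print(theta, lik_binomial(theta) / lik_negative_binomial(theta))
# The ratio is 4.0 (= 220/55) at every theta: the likelihood functions are
# proportional, so the Likelihood Principle says the two outcomes are
# evidentially equivalent with respect to hypotheses about theta.
```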
The Likelihood Principle seems important because some statistical methods respect it while others do not. That is, take a set of data that are all evidentially equivalent according to the Likelihood Principle: some statistical methods would yield the same output given any datum in that set (assuming that prior probabilities and utilities are held fixed), while others would in some cases yield different outputs for different members of that set. Specifically, the kinds of statistical methods that are most common in science (usually called "classical" or "frequentist") do not respect the Likelihood Principle, while their main rivals (Bayesian methods), as well as some more obscure alternatives (primarily Likelihoodist methods), do respect it.
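Continuing the example above (again, just an illustration I'm supplying): a standard one-sided significance test of H0: theta = 0.5 returns different p-values for the two experiments, even though their likelihood functions are proportional.

```python
# Same data, different stopping rules (illustrative): one-sided p-values
# for H0: theta = 0.5 against H1: theta > 0.5.
from math import comb

theta0 = 0.5

# Binomial design: fixed n = 12 trials, observed 9 successes.
# p-value = P(X >= 9 | n = 12, theta0)
p_binomial = sum(comb(12, k) * theta0**k * (1 - theta0)**(12 - k)
                 for k in range(9, 13))

# Negative-binomial design: sample until r = 3 failures, observed 9 successes.
# p-value = P(X >= 9 | theta0) = 1 - P(X <= 8), X = successes before 3rd failure.
p_negative_binomial = 1 - sum(comb(k + 2, k) * theta0**k * (1 - theta0)**3
                              for k in range(9))

print(round(p_binomial, 4))           # 0.073  -> not significant at 0.05
print(round(p_negative_binomial, 4))  # 0.0327 -> significant at 0.05
```

Because the two p-values straddle the conventional 0.05 cutoff, the frequentist verdict depends on the stopping rule, whereas any method that respects the Likelihood Principle must treat the two outcomes alike.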
In addition, there are arguments for the Likelihood Principle that are controversial within statistics but have received little attention from philosophers of science. These arguments are, at least up to a point, independent of the debates about the use of prior probabilities in science that have dominated Bayesian/frequentist debates in the past. Thus, thinking about the Likelihood Principle looks like a promising way to make progress in a debate that has become rather stale but remains very important for statistics, and thus for science, and thus for the philosophy of science.
In my philosophy comprehensive paper, I gave a new proof of the Likelihood Principle that I still believe is a significant improvement on previous proofs, such as the relatively well-known proof given by Allan Birnbaum in 1962. I have since refined this paper and have had it accepted for presentation at the 2012 biennial meeting of the Philosophy of Science Association this November.
The project I have been working on for my prospectus would involve bolstering my proof of the Likelihood Principle in various ways and then considering its implications for statistics, science, and the philosophy of science. The problem I have been struggling with for a few weeks now is that I'm not sure that the Likelihood Principle does have any important implications.
What I've just written probably seems rather puzzling. If frequentist statistics violates the Likelihood Principle, and the Likelihood Principle is true, then doesn't it follow that we shouldn't use frequentist statistics?
Well, no. The Likelihood Principle as I prefer to formulate it says only that certain sets of experimental outcomes are evidentially equivalent. To get from the claim that the Likelihood Principle is true to the conclusion that we ought not to use frequentist statistics, one needs to assume that we ought to use only statistical methods that always produce the same output given evidentially equivalent data as inputs. That assumption might seem innocuous, but it isn't, for at least two reasons:
1. It begs the question against frequentist views according to which the idea of an evidential relationship between data and hypothesis is misguided and we should think instead in terms of methods and their operating characteristics.
2. In practice, even subjective Bayesian statisticians violate the Likelihood Principle all the time, e.g., by using prior elicitation methods that involve estimating the parameters of a conjugate prior distribution; the conjugate prior family used depends on the sampling distribution, in violation of the Likelihood Principle (see the sketch after this list).
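To illustrate point 2, here is a hedged sketch of my own (the elicited numbers are assumptions for illustration, not a full demonstration of a Likelihood Principle violation): standard conjugate elicitation converts the same expert summaries into different prior families depending on which sampling model is planned, so the elicitation machinery is keyed to the sampling distribution rather than to the likelihood function alone.

```python
# A sketch of conjugate elicitation (illustrative numbers assumed): the same
# elicited summaries -- prior mean 0.3, "worth 10 observations" -- yield
# different prior families depending on the planned sampling distribution.
from scipy import stats

prior_mean, prior_strength = 0.3, 10.0

# Planned binomial sampling -> Beta conjugate family for the success probability.
beta_prior = stats.beta(a=prior_mean * prior_strength,
                        b=(1 - prior_mean) * prior_strength)

# Planned Poisson sampling -> Gamma conjugate family for the rate.
gamma_prior = stats.gamma(a=prior_mean * prior_strength,
                          scale=1 / prior_strength)

print(beta_prior.mean(), gamma_prior.mean())  # both 0.3 by construction
```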
I'll discuss these points more in a subsequent post.
I get this way about my work all the time. My advice is: submit it and defend your prospectus! Then write your dissertation however you choose. At least you won't have to take a bunch of classes next term ;)
Too late! I already talked to my advisor about my worries, which he shares. Fortunately (?) I have to take several more classes anyway to finish my Master's in stats, so it doesn't make much of a difference.
ReplyDelete"It begs the question against frequentist views according to which the idea of an evidential relationship between data and hypothesis is misguided and we should think instead in terms of methods and their operating characteristics."
Hm. There are such views. But I think they're very rare even among Frequentists. Most Frequentists, certainly including Fisher and Neyman (at least some of the time; I'm not sure how much they contradicted themselves), are all in favour of evidential relationships. Indeed, most of them (again including Fisher and Neyman at least some of the time) are even in favour of using evidential procedures. They just don't think you can find such procedures all the time ... and what makes them Frequentists is that they're happy using pre-data operating characteristics as an alternative.
If I'm right then even most Frequentists would be unhappy if you abandoned evidential standards.
By all means worry about the remaining few Frequentists who really think that evidence isn't important, if you want to. But don't imagine an army of Frequentist statisticians waiting for you to do that.
Thanks very much for the insightful comments, Jason. Is it crazy to think that something like a "behavioristic" frequentist position might be right, at least in some circumscribed domain, even though it's unpopular?