I was talking to a friend (the same friend who inspired the two previous posts), who was talking to a friend of ours about a thing, and there’s a context but it doesn’t matter to what I want to write here.
Suppose there is some quantity, I’ll call it $\mu$, that I don’t know. Now, some people have estimated it, and given me point estimates $x_1$, $x_2$, etc, so I have a vector $\mathbf{x}$ of estimates.
One possible way to get a posterior estimate for what the true value is, in a sort of Bayesian Model Averaging way, is by having a vector $\mathbf{c}$ of confidences in each of those estimates (normalised so that $\sum_i c_i = 1$) and then having that $\hat{\mu} = \mathbf{c} \cdot \mathbf{x} = \sum_i c_i x_i$, which is a weighted average of the estimates. However, in the context of the question, the standard practice seems to be using the median instead of the average, because then we get rid of outliers.
This seems, at first glance, unjustified. Surely there’s some way to use the estimates themselves to determine that an estimate is problematic? Well, suppose three of the four estimates sit close together and the fourth lies far away from the rest. When you look at that, it seems obvious that the fourth person screwed up. Yet, how can you really tell, if all the information about $\mu$ you have to go on are those estimates? What if the first three are the wrong ones? The median may be theoretically inadequate, but how do you reconcile that with the fact that it doesn’t let weird estimates screw your expected value up too much? Even if, a priori, you trusted each of those four banks equally, after seeing that last value it seems very likely that it’s horribly wrong.
My friend suggested a couple of ways of dealing with that, measures that’d be between the average and the median. One suggestion was taking an average between the average and the median of the estimates. The other was a bit more complicated, and it went thusly:
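To make the first suggestion concrete, here’s a quick sketch of the three candidate summaries side by side. The numbers are entirely made up for illustration (three estimates that agree and one wild outlier), not the actual values from the conversation:

```python
from statistics import mean, median

# Hypothetical estimates: three that agree, one wild outlier.
estimates = [1.0, 1.1, 0.9, 25.0]

avg = mean(estimates)    # dragged way up by the outlier
med = median(estimates)  # ignores the outlier entirely
blend = (avg + med) / 2  # the first compromise: average of the two

print(avg, med, blend)
```

The blend is pulled toward the cluster but still feels the outlier, which is exactly the “between the average and the median” behaviour being asked for.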
Let $d_i$ equal the number of standard deviations $x_i$ is above or below the average $\bar{x}$, that is, $d_i = |x_i - \bar{x}| / \sigma$. Then he suggests that the components of the confidence vector $\mathbf{c}$ should be given by a decreasing function of $d_i$, so that estimates far from the average get discounted.
This actually produces interesting results, corresponding to intuition! And I want to thwack him upside his frequentist head for even thinking of creating an operational tool like the ones he did before trying to derive stuff from first principles, but he ended the email he sent me with the questions, “What do you think? Are there established methods to update your beliefs in each of the models of a set, conditional on the predictions of all of them?” so at least his second instinct of trying to figure out if there exists another way is good (when you read this, I still love you ♥).
Let’s first suppose each estimate was generated by a normal distribution: $x_i \sim \mathcal{N}(\mu, \tau_i^{-1})$ (where the parametre $\tau_i$ is known as the precision of the distribution). We’ll call this model $G_{\boldsymbol{\tau}}$ (Gaussian model of precision vector $\boldsymbol{\tau}$) and condition on it in our distributions. The likelihood function of the vector $\mathbf{x}$ of observations is:

$$p(\mathbf{x} \mid \mu, G_{\boldsymbol{\tau}}) = \prod_i \sqrt{\frac{\tau_i}{2\pi}} \exp\left(-\frac{\tau_i (x_i - \mu)^2}{2}\right)$$
(I’m not conditioning on $I$, the symbol for our prior information, for ease of notation, but let’s not forget it’s always there.)
The Maximum Likelihood Estimate for $\mu$, i.e. the value of $\mu$ that makes the above function maximal, is given by $\bar{x}' = \frac{\sum_i \tau_i x_i}{\sum_i \tau_i}$ (where the prime is to indicate that this is a weighted average and not the regular average). Now, a Bayesian needs to always take the prior probability for stuff into account. What we really want is:

$$p(\mu \mid \mathbf{x}, G_{\boldsymbol{\tau}}) = \frac{p(\mathbf{x} \mid \mu, G_{\boldsymbol{\tau}})\, p(\mu \mid G_{\boldsymbol{\tau}})}{p(\mathbf{x} \mid G_{\boldsymbol{\tau}})} \propto p(\mathbf{x} \mid \mu, G_{\boldsymbol{\tau}})\, p(\mu)$$
Where we used the fact that $G_{\boldsymbol{\tau}}$ is a hypothesis about the form of the likelihood of the vector $\mathbf{x}$ and is therefore independent of $\mu$, so that $p(\mu \mid G_{\boldsymbol{\tau}}) = p(\mu)$.
A Bayesian, however, reduces to a frequentist once their prior knowledge becomes effectively zero. The conjugate prior for the mean of a normal distribution is itself normal, $\mu \sim \mathcal{N}(\mu_0, \tau_0^{-1})$, but in the limit of zero precision, $\tau_0 \to 0$, it becomes an improper constant prior over the reals (I’m coming around to Jaynes’ view that improper-but-result-of-well-defined-limit priors are ok). If we use this ignorance prior, our Maximum A Posteriori value for $\mu$ becomes $\bar{x}'$ as well.
So, in this case, our hypothesis $G_{\boldsymbol{\tau}}$ gives us our confidence vector $c_i = \frac{\tau_i}{\sum_j \tau_j}$ and we have $\hat{\mu} = \mathbf{c} \cdot \mathbf{x} = \bar{x}'$.
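A quick numerical sketch of that precision-weighted average (estimates and precisions are hypothetical, just for illustration):

```python
# Precision-weighted average: c_i = tau_i / sum(tau), mu_hat = sum(c_i * x_i).
# Estimates and precisions are made up for illustration.
x = [1.0, 1.1, 0.9, 25.0]
tau = [1.0, 1.0, 1.0, 1.0]  # equal a priori confidence in all four

c = [t / sum(tau) for t in tau]
mu_hat = sum(ci * xi for ci, xi in zip(c, x))

# With equal precisions this is just the plain average, outlier and all.
print(mu_hat)  # ≈ 7.0
```

With equal precisions the outlier drags the estimate far from the cluster around 1, which is exactly the behaviour the next paragraph complains about.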
Alright, so, $G_{\boldsymbol{\tau}}$ generates exactly the behaviour we were trying to escape from: just averaging based on our confidence in each result screws us over with outliers, which is a known problem of Gaussians because of their light tails. But what if, instead of having point confidences in each estimate, we had a distribution for our confidence, one that we updated upon seeing the estimates?
The conjugate prior for the precision of a normal distribution is something called a Gamma distribution:

$$p(\tau \mid \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \tau^{\alpha - 1} e^{-\beta \tau}$$
where $\Gamma(\alpha)$ is the Gamma Function, and thus it’s normalisable for $\alpha, \beta > 0$ and is finite everywhere for $\alpha \geq 1$. If the precision is distributed according to that, this is equivalent to having observed $2\alpha$ “virtual” or “effective” prior data points with sample precision $\alpha / \beta$.
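As a sanity check on the parametrisation (the density above uses $\beta$ as a *rate*, so the mean is $\alpha/\beta$; the parameter values here are arbitrary):

```python
import random

# The Gamma(alpha, beta) density above uses beta as a *rate*, so its mean
# is alpha / beta. Python's gammavariate takes a *scale*, i.e. 1 / beta.
random.seed(0)
alpha, beta = 3.0, 2.0
samples = [random.gammavariate(alpha, 1.0 / beta) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)

print(sample_mean)  # close to alpha / beta = 1.5
```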
We can encode our prior confidence in each estimate with this function, using the hypothesis $T_{\boldsymbol{\alpha}, \boldsymbol{\beta}}$, and then the likelihood function for each estimate is:

$$p(x_i \mid \mu, T_{\boldsymbol{\alpha}, \boldsymbol{\beta}}) = \int_0^{\infty} \sqrt{\frac{\tau_i}{2\pi}} e^{-\frac{\tau_i (x_i - \mu)^2}{2}} \frac{\beta_i^{\alpha_i}}{\Gamma(\alpha_i)} \tau_i^{\alpha_i - 1} e^{-\beta_i \tau_i}\, \mathrm{d}\tau_i$$
That thing inside the integral is known as the Normal-Gamma distribution, frequently used in Bayesian statistics when both the mean and the precision of a Gaussian are unknown. Evaluating the integral gives us what’s known as the Student’s t-distribution:

$$\operatorname{St}(x \mid \mu, \lambda, \nu) = \frac{\Gamma\left(\frac{\nu}{2} + \frac{1}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)} \left(\frac{\lambda}{\pi\nu}\right)^{\frac{1}{2}} \left[1 + \frac{\lambda (x - \mu)^2}{\nu}\right]^{-\frac{\nu}{2} - \frac{1}{2}}$$
(Normally the t-distribution is presented with $\mu = 0$ and $\lambda = 1$, but this alternative parametrisation is more general and the result of our derivation.)
In the case of our Normal-Gamma distribution, $\nu_i = 2\alpha_i$ and $\lambda_i = \alpha_i / \beta_i$, so:

$$p(x_i \mid \mu, T_{\boldsymbol{\alpha}, \boldsymbol{\beta}}) = \operatorname{St}\left(x_i \,\middle|\, \mu, \frac{\alpha_i}{\beta_i}, 2\alpha_i\right)$$
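We can verify that correspondence numerically: integrate the Normal-Gamma over $\tau$ by brute force and compare against the t density directly (the values of $\alpha$, $\beta$, $\mu$ here are arbitrary test choices, with $\alpha \geq 1$ so the integrand is tame at zero):

```python
import math

# Check that integrating N(x | mu, 1/tau) * Gamma(tau | alpha, beta) over tau
# reproduces St(x | mu, alpha/beta, 2*alpha). Parameters are arbitrary.
def student_t(x, mu, lam, nu):
    return (math.gamma(nu / 2 + 0.5) / math.gamma(nu / 2)
            * math.sqrt(lam / (math.pi * nu))
            * (1 + lam * (x - mu) ** 2 / nu) ** (-(nu + 1) / 2))

def normal_gamma_marginal(x, mu, alpha, beta, steps=20000, tau_max=40.0):
    # Simple rectangle-rule integration over tau in (0, tau_max].
    h = tau_max / steps
    total = 0.0
    for k in range(1, steps + 1):
        tau = k * h
        f = (math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)
             * beta ** alpha / math.gamma(alpha) * tau ** (alpha - 1)
             * math.exp(-beta * tau))
        total += f * h
    return total

alpha, beta, mu = 2.0, 1.5, 0.3
for x in (-1.0, 0.3, 2.0):
    lhs = normal_gamma_marginal(x, mu, alpha, beta)
    rhs = student_t(x, mu, alpha / beta, 2 * alpha)
    assert abs(lhs - rhs) < 1e-4
```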
The Student’s t-distribution, then, can be seen as a sum of an infinity of Gaussians with each possible precision, where the precisions are weighted by a Gamma distribution. Taking the limit $\nu \to \infty$ while keeping $\lambda$ constant turns the t-distribution into a Gaussian with precision $\lambda$, reducing us to the case we just discussed, which intuitively makes sense: if we see an infinity of samples with a given precision, we will believe with infinite confidence that the precision of our Gaussian is exactly that.
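That limit can also be checked numerically (using log-gamma to avoid overflow at large $\nu$; the particular $\lambda$ and evaluation points are arbitrary):

```python
import math

# As nu grows with lambda fixed, St(x | mu, lam, nu) -> N(x | mu, 1/lam).
def student_t(x, mu, lam, nu):
    log_c = math.lgamma(nu / 2 + 0.5) - math.lgamma(nu / 2)  # avoids overflow
    return (math.exp(log_c) * math.sqrt(lam / (math.pi * nu))
            * (1 + lam * (x - mu) ** 2 / nu) ** (-(nu + 1) / 2))

def gaussian(x, mu, lam):
    return math.sqrt(lam / (2 * math.pi)) * math.exp(-lam * (x - mu) ** 2 / 2)

mu, lam = 0.0, 2.0
for x in (0.0, 0.5, 1.5):
    gap_small_nu = abs(student_t(x, mu, lam, 3) - gaussian(x, mu, lam))
    gap_large_nu = abs(student_t(x, mu, lam, 3000) - gaussian(x, mu, lam))
    assert gap_large_nu < gap_small_nu  # the densities converge
    assert gap_large_nu < 1e-3
```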
How does sequential learning happen with a t-distribution, then?
The Student’s t density is the reciprocal of a polynomial (where the powers aren’t necessarily integers), so the likelihood function for $\mu$, being a product of such densities, might have several local maxima. This puts us in a bit of a pickle. However, it does appear clear that we’re no longer in a spot of just multiplying a confidence vector by the estimates and getting the weighted average directly. Stuff’s more… complicated, now.
There exist, however, methods of finding the MLE using EM algorithms. Google gives me a few interesting results and hopefully when I get to Gelman v2b I’ll have more tools to deal with it. The algorithm itself seems straightforward when all the $\alpha_i$ and $\beta_i$ are equal, the case where our prior confidence in each estimate is the same. And the literature seems to agree that using the Student’s t is much more robust to outliers than using straight-up Gaussians, so yay!
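Here’s a minimal sketch of that equal-confidence case: the standard EM scheme for the location of a Student’s t with fixed $\lambda$ and $\nu$, where the E-step computes the expected precision of each point given the current $\mu$ and the M-step takes a precision-weighted average. The data and parameter values are made up:

```python
# EM / iteratively-reweighted scheme for the location mu of a Student's t
# with fixed lambda and nu (equal prior confidence in every estimate).
# E-step: w_i = (nu + 1) / (nu + lam * (x_i - mu)^2), the expected precision
# of point i under the current mu. M-step: weighted average with those w_i.
def t_location_em(x, lam=1.0, nu=4.0, iters=100):
    mu = sum(x) / len(x)  # start from the plain average
    for _ in range(iters):
        w = [(nu + 1) / (nu + lam * (xi - mu) ** 2) for xi in x]
        mu = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    return mu

x = [1.0, 1.1, 0.9, 25.0]  # hypothetical data with one outlier
mu_hat = t_location_em(x)

# The outlier gets down-weighted automatically: mu_hat lands near the
# cluster around 1, not near the plain average of 7.
print(mu_hat)
```

Note the structure: it *is* still a confidence-weighted average at every step, but the confidences themselves now depend on the current estimate, which is exactly the circularity the t-distribution introduces.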
(Maybe we shouldn’t use the MLE but rather the posterior expected value instead? But in the end that just makes the numerator of the thing a linear function of $\mu$, so it’s not like the technique will change much.)
It’s good to remember what we’re talking about, here. The estimates aren’t really samples of random processes in the frequentist sense, or not necessarily. In the context of the conversation that sparked this post, they definitely aren’t. We’re using the distributions here just to describe our states-of-knowledge, and they encode assumptions about the data.
One such assumption, for example, is unimodality. Neither normal nor Student’s t-distributions can capture multimodal distributions. It’s why I’m vehement that, even if we don’t include the $I$ in our notation to indicate our global prior knowledge, we do include $T_{\boldsymbol{\alpha}, \boldsymbol{\beta}}$ to indicate that we’re assuming the Student’s t model with the above considerations.
My friend wanted a way to operationalise that. I hope this theoretical discussion helps him; I’m not yet good enough with statistical tools and languages on a practical level to do that for him. If the above is not enough for him, I’m sure he’ll tell me so.
(Though when we’re maximising a function like the Student’s t, the median doesn’t sound too awful. And if we use a Laplace distribution instead, the median is in fact theoretically sound: the MLE for the location of a Laplace distribution is exactly the median of the samples. I just haven’t yet tried to think about what qualitative assumptions it hides.)
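That last claim is easy to check numerically: the Laplace log-likelihood is $-\frac{1}{b}\sum_i |x_i - m|$ plus a constant, so maximising it means minimising the sum of absolute deviations, and the median does that. With the same made-up data as before:

```python
from statistics import median

# Laplace(m, b) log-likelihood is -sum(|x_i - m|)/b + const, so the MLE of
# the location m minimises the absolute-deviation loss below. Data made up.
x = [1.0, 1.1, 0.9, 25.0]

def abs_loss(m):
    return sum(abs(xi - m) for xi in x)

med = median(x)
# The median's loss is no worse than any other candidate's on a fine grid.
candidates = [i / 100 for i in range(-500, 3000)]
assert all(abs_loss(med) <= abs_loss(m) + 1e-12 for m in candidates)
```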